<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Neuromorphic Semiconductor - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/neuromorphic-semiconductor/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Tue, 05 Dec 2023 12:35:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>Neuromorphic Semiconductor - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>[We Do Future Technology] Become a Semiconductor Expert with SK hynix – AI Semiconductors</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/become-a-semiconductor-expert-with-sk-hynix-ai-semiconductors/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 09 May 2023 06:00:34 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[ICT Alliance]]></category>
		<category><![CDATA[SAPEON]]></category>
		<category><![CDATA[Neuromorphic Semiconductor]]></category>
		<category><![CDATA[We Do Future Technology]]></category>
		<category><![CDATA[AI Semiconductor]]></category>
		<category><![CDATA[Computing-in-Memory]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=11602</guid>

					<description><![CDATA[<p>AI semiconductors are ultra-fast, low-power chips that efficiently process the big data and algorithms applied to AI services. In the video above, the exhibition “Today’s Record” shows how humans have recorded information in a variety of ways, including drawing and writing, for thousands of years. Today, people record information in the [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/become-a-semiconductor-expert-with-sk-hynix-ai-semiconductors/">[We Do Future Technology] Become a Semiconductor Expert with SK hynix – AI Semiconductors</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><iframe src="https://www.youtube.com/embed/gOkE_BOtPx8" width="810" height="455" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>AI semiconductors are ultra-fast, low-power chips that efficiently process the big data and algorithms applied to AI services. In the video above, the exhibition “Today’s Record” shows how humans have recorded information in a variety of ways, including drawing and writing, for thousands of years. Today, people record information in the form of data at an ever-increasing rate. As this large volume of data is used to create new data, we call this the era of big data.</p>
<p>It is now believed that the total amount of data created up until the early 2000s can be generated in a single day. As ICT and AI technology advances and takes on a bigger role in our lives, the amount of data will only continue to grow exponentially. This is because, in addition to data recording and processing, AI technologies learn from existing data and create large amounts of new data. To process this massive volume of data, memory chips and processors need to constantly operate and work together.</p>
<p>In the Von Neumann architecture<sup>1</sup> used in most modern computers, the processor and memory communicate through I/O<sup>2</sup> pins mounted on a motherboard. This creates a bottleneck when transferring data and consumes about 1,000 times more power than standard computing operations. The role of memory solutions in facilitating fast, efficient data transfer is therefore crucial to the proper functioning of AI semiconductors and AI services.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup><strong>1</strong></sup><strong>Von Neumann architecture</strong>: A computing structure that sequentially processes commands through three stages: memory, CPU, and I/O device.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup><strong>2</strong></sup><strong>Input/Output (I/O): </strong>An information processing system designed to send and receive data from a computer hardware component, device, or network.</p>
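As a rough sketch of why the Von Neumann bottleneck matters, the following Python snippet compares time spent moving data over a memory bus against time spent computing on it. All figures are assumed, illustrative values (a 50 GB/s bus, a 10 TFLOPS processor), not measurements of any particular chip:

```python
# Illustrative sketch of the Von Neumann bottleneck (assumed numbers).
def time_seconds(bytes_moved, flops, bandwidth_Bps, compute_flops_per_s):
    transfer = bytes_moved / bandwidth_Bps    # time spent on I/O pins
    compute = flops / compute_flops_per_s     # time spent computing
    return transfer, compute

# A 1 GB weight matrix streamed once for a matrix-vector product:
# roughly 2 FLOPs per 4-byte weight that is read.
transfer, compute = time_seconds(
    bytes_moved=1e9, flops=0.5e9,             # 0.25e9 weights * 2 FLOPs
    bandwidth_Bps=50e9,                       # assumed 50 GB/s memory bus
    compute_flops_per_s=10e12)                # assumed 10 TFLOPS processor
print(transfer > compute)                     # data movement dominates: True
```

Even with generous bandwidth, streaming weights that are each used only once leaves the processor idle most of the time, which is the motivation for combining memory and computation.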
<p>Ultimately, AI semiconductors need to combine the functions of a processor and memory while offering greater capabilities than the Von Neumann architecture. SAPEON Korea, an AI startup jointly founded by SK hynix, recently developed an AI semiconductor for data centers named after the company. The SAPEON AI processor offers a deep learning computation speed 1.5 times faster than that of conventional GPUs while using 80% less power. In the future, SAPEON will expand to other areas such as autonomous cars and mobile devices. SK hynix’s commitment to developing technologies that support AI is further highlighted by its establishment of the SK ICT Alliance alongside SK Telecom and SK Square. The alliance invests in and develops diverse ICT areas such as semiconductors and AI to secure global competitiveness. Furthermore, SK hynix is also developing a next-generation CIM<sup>3</sup> with neuromorphic semiconductor<sup>4</sup> devices.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup><strong>3</strong></sup><strong>Computing-in-memory (CIM):</strong> The next generation of intelligent memory that combines the processor and semiconductor memory on a single chip.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup><strong>4</strong></sup><strong>Neuromorphic semiconductor</strong>: A semiconductor for computing that can simultaneously compute and store like a human brain, reducing power consumption while increasing computational speed.</p>
<p>As AI technology and services continue to develop rapidly, SK hynix’s semiconductors for AI will also evolve to meet market and consumer needs. The company’s chips will be the backbone of key AI services in the big data era and beyond.</p>
<p>&nbsp;</p>
<p><span style="color: #ffffff; background-color: #f59b57;"><strong>&lt;Other articles from this series&gt;<br />
</strong></span><span style="text-decoration: underline;"><a href="https://news.skhynix.com/become-a-semiconductor-expert-with-sk-hynix-hbm/" target="_blank" rel="noopener noreferrer">[We Do Future Technology] Become a Semiconductor Expert with SK hynix – HBM</a></span></p>
<p><span style="text-decoration: underline;"><a href="https://news.skhynix.com/become-a-semiconductor-expert-with-sk-hynix-ufs/" target="_blank" rel="noopener noreferrer">[We Do Future Technology] Become a Semiconductor Expert with SK hynix – UFS</a></span></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/become-a-semiconductor-expert-with-sk-hynix-ai-semiconductors/">[We Do Future Technology] Become a Semiconductor Expert with SK hynix – AI Semiconductors</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 24 Mar 2022 07:00:24 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[AI Chip]]></category>
		<category><![CDATA[Neuromorphic Semiconductor]]></category>
		<category><![CDATA[Application]]></category>
		<category><![CDATA[Edge Computing]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=8640</guid>

					<description><![CDATA[<p>Artificial intelligence (AI), which is regarded as ‘the most significant paradigm shift in history,’ is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, artificial intelligence has already exceeded human learning speed and is now quickly being applied across [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/">The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Artificial intelligence (AI), which is regarded as ‘the most significant paradigm shift in history,’ is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, artificial intelligence has already exceeded human learning speed and is quickly being applied across various areas, affecting many aspects of our lives. What are the key applications of AI technology, and how are they realized?</p>
<p>(Check <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/the-present-and-future-of-ai-semiconductor/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/the-present-and-future-of-ai-semiconductor/">here</a> to discover more insights from SNU professor Deog-Kyoon Jeong about AI semiconductors!)</p>
<h3 class="tit">Cloud Computing vs. Edge Computing</h3>
<p><!-- upload image at specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050037/220317_Figure_1.jpg" alt="" /></p>
<p class="source">Figure 1. Cloud Computing vs. Edge Computing</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050037/220317_Figure_1.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at specified size --></p>
<p>One AI application area, the antipode of cloud services, is edge computing<sup>1</sup>. Applications that must process massive amounts of input data, such as video or image data, either process the data at the edge or transfer it to a cloud service over wired or wireless communication, preferably after reducing its volume. Accelerators specifically designed for edge computing make up a huge part of AI chip design. AI chips used in autonomous driving are a good example: they perform image classification and object detection by processing images containing massive amounts of data using a CNN<sup>2</sup> and a series of neural operations.</p>
<h3 class="tit">AI and the Issue of Privacy</h3>
<p><!-- image rolling swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050042/220317_Figure_2._AmazonAlexa.png" alt="" /></p>
<p class="source">Figure 2. Amazon’s Alexa<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/">NY Times</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050042/220317_Figure_2._AmazonAlexa.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050045/220317_Figure_2._SKT_NUGU.jpg" alt="" /></p>
<p class="source">Figure 2. SK Telecom’s NUGU<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.nugu.co.kr/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.nugu.co.kr/">SKT NUGU</a> )</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050045/220317_Figure_2._SKT_NUGU.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
</div>
</div>
<p><!-- btn / paging --></p>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p><!-- // image rolling swiper end --></p>
<p>Another area of AI application is conversational services like Amazon’s Alexa or SK Telecom’s NUGU. However, such services cannot be widely adopted unless privacy is protected. Conversational AI services, in which a microphone continuously listens in on conversations at home, cannot by nature develop beyond a simple recreational service, so many efforts are being made to resolve these privacy issues.</p>
<p>The latest research trend for solving the privacy issue is homomorphic encryption<sup>3</sup>. Homomorphic encryption does not transmit users’ voice or other sensitive information, such as medical data, as is. Instead, data is sent as ciphertext that only the user can decrypt, and the cloud service performs additions and multiplications directly on the encrypted data without ever decrypting it. The result is returned to the user in encrypted form, and only the user can decrypt it to see the outcome; no one, including the server, can see the original data. A homomorphic service, however, requires an immense amount of computation, up to several thousand or tens of thousands of times more than a general plaintext DNN<sup>4</sup> service. A key area for future research will be reducing service time by dramatically enhancing computation performance through specially designed homomorphic accelerators<sup>5</sup>.</p>
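To make the “compute on ciphertext” idea concrete, here is a toy sketch of the Paillier cryptosystem, a well-known additively homomorphic scheme: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The tiny fixed primes are for illustration only; real deployments use large keys and vetted libraries.

```python
# Toy Paillier cryptosystem (illustrative only, small demo primes).
import math
import random

p, q = 1789, 1861                      # small demo primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)           # Carmichael exponent lambda(n)
mu = pow(lam, -1, n)                   # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(1, n)         # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With generator g = n + 1: Enc(m) = g^m * r^n mod n^2
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then Dec(c) = L(c^lambda mod n^2) * mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 123, 456
c = encrypt(a) * encrypt(b) % n2       # multiply ciphertexts...
print(decrypt(c))                      # ...decrypts to a + b = 579
```

The server only ever sees `c` and the two input ciphertexts; it computes the sum without learning `a` or `b`, which is exactly the property a privacy-preserving cloud service needs.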
<h3 class="tit">AI Chip and Memory</h3>
<p>In a large-scale DNN, the number of weights is too high to hold all of them in a processor. As a result, the processor must perform a read access to an external large-capacity DRAM every time it needs a weight. If a weight is used only once and never reused after being fetched, the data pulled in at considerable cost in energy and time is wasted; this is far less efficient than storing and reusing all weights inside the processor. Therefore, processing an intense amount of data with the enormous number of weights in a large-scale DNN requires parallel connections and/or batch operations that reuse the same weights several times. In other words, computations should be performed by connecting several processors, each paired with DRAM, in parallel, dispersing the weights and intermediate data across the DRAMs so they can be reused. High-speed connections among the processors are essential in this structure, which is more efficient than having all processors share a single access route; only this structure can deliver maximum performance.</p>
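The effect of batch operations on weight reuse can be sketched with a simple count of DRAM fetches, under a simplified, assumed cost model in which each forward pass of a layer needs every weight once:

```python
# Sketch: batching amortizes weight fetches from DRAM (assumed model).
# Processing a batch of B inputs together fetches the layer's weights
# once per batch instead of once per input.
def weight_fetches(num_weights, num_inputs, batch_size):
    batches = -(-num_inputs // batch_size)   # ceiling division
    return num_weights * batches

W, inputs = 10_000_000, 64
print(weight_fetches(W, inputs, batch_size=1))    # 640000000 fetches
print(weight_fetches(W, inputs, batch_size=64))   # 10000000 fetches
```

A batch of 64 cuts weight traffic by 64x in this model, which is why batch operations and parallel DRAM connections are central to large-scale DNN processing.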
<h3 class="tit">Interconnection of AI Chips</h3>
<p><!-- upload image at specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/18084455/220318_SK-hynix_0308_02.png" alt="" /></p>
<p class="source">Figure 3. Interconnection Network of AI Chips</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/18084455/220318_SK-hynix_0308_02.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at specified size --></p>
<p>The performance bottleneck that occurs when connecting numerous processors depends on the available bandwidth and latency as well as the topology of the interconnection. These elements define the size and performance of the DNN that can be served. In other words, if one tries to deliver N-times higher performance by connecting N accelerators in parallel, a bottleneck arises in the latency and bandwidth of the interconnections, and the desired performance will not be achieved.</p>
<p>Therefore, the interconnection structure between processors is crucial to efficiently providing scalability of performance. In the NVIDIA A100 GPU, NVLink 3.0 plays that role. There are 12 NVLink channels in this GPU, each providing 50 GBps of bandwidth. Four GPUs can be directly connected in the form of a clique using 4 channels per pair, but to connect 16 GPUs, an NVSwitch, an external chip dedicated to interconnection, is required. Google’s TPU v2, meanwhile, is designed to form a 2D torus using its Inter-Core Interconnect (ICI) with an aggregate bandwidth of 496 GBps.</p>
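A quick back-of-the-envelope check of the NVLink figures quoted above:

```python
# Aggregate and per-pair NVLink bandwidth on an A100 (figures from the text).
channels = 12
per_channel_GBps = 50
aggregate = channels * per_channel_GBps
print(aggregate)                          # 600 GB/s aggregate per GPU

# In a 4-GPU clique each GPU splits its 12 channels among 3 peers:
per_pair = channels // 3 * per_channel_GBps
print(per_pair)                           # 200 GB/s per GPU pair
```

This also shows why cliques stop scaling: adding more peers divides the fixed channel budget into ever-thinner slices per pair.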
<p><!-- upload image at specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050053/220317_Figure_4._Nvidia%E2%80%99s_GPU_Accelerator_A100_using_6_HBMs.jpg" alt="" /></p>
<p class="source">Figure 4. NVIDIA’s GPU Accelerator A100 using 6 HBMs<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.theverge.com/2020/5/14/21258419/nvidia-ampere-gpu-ai-data-centers-specs-a100-dgx-supercomputer" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.theverge.com/2020/5/14/21258419/nvidia-ampere-gpu-ai-data-centers-specs-a100-dgx-supercomputer">The Verge</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050053/220317_Figure_4._Nvidia%E2%80%99s_GPU_Accelerator_A100_using_6_HBMs.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at specified size --></p>
<p>The way in which processors are interconnected has a huge impact on the whole system. For example, a mesh or torus structure is easy to compose, as the physical connection between chips is simple. But latency increases proportionally with distance, since reaching a far-away node requires hopping over several processors. The most extreme method is a clique that interconnects all processors one to one, but the number of links then grows as N(N-1)/2, driving chip pin counts and PCB congestion beyond what is allowable; in actual designs, connecting only about four processors this way is the limit.</p>
<p>More generally, using a crossbar switch such as an NVSwitch is another attractive option, but this method converges all connections on the switch: the more processors you want to interconnect, the harder the PCB layout becomes as transmission lines concentrate around the switch. The best method is to structure the whole network as a binary tree, connecting the processors at the leaves and allocating the most bandwidth to the top of the tree. Such a binary fat tree is the most ideal structure and can deliver maximum performance with scalability.</p>
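A small sketch comparing these topologies for N processors, using simplified, assumed metrics (worst-case hop count and total link count), makes the trade-off concrete:

```python
# Simplified topology comparison: (worst-case hops, total links).
import math

def torus_2d(side):
    # side x side nodes; wraparound links halve the worst-case path.
    n = side * side
    return 2 * (side // 2), 2 * n      # each node owns 2 of its 4 links

def clique(n):
    return 1, n * (n - 1) // 2         # all-to-all: links grow ~N^2

def fat_tree(n):
    # Binary tree with n leaf processors: up to the root and back down.
    return 2 * math.ceil(math.log2(n)), n - 1   # n - 1 internal switches

for n, side in [(16, 4), (64, 8)]:
    print(n, torus_2d(side), clique(n), fat_tree(n))
```

The clique's single hop comes at a quadratic link cost, while the fat tree keeps link count linear with hop count growing only logarithmically, matching the argument above.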
<h3 class="tit">Neuromorphic AI Chip</h3>
<p><!-- upload image at specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050055/220317_Figure_5.jpg" alt="" /></p>
<p class="source">Figure 5. Cloud Server Processor vs. Neuromorphic AI Processor</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050055/220317_Figure_5.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at specified size --></p>
<p>Processors for cloud servers that serve as DNN accelerators represent and process data digitally, since their computational structure is fundamentally a software simulation of a neural network running on hardware. Recently, research has increased on neuromorphic AI chips which, unlike this simulation approach, directly mimic the neural network of a living organism and its signals, mapping them onto analog electronic circuits that behave the same way. Here the original data of the application is represented in analog form: one signal is represented at one node, the interconnections are hardwired rather than defined by software, and the weights are stored in analog form.</p>
<p><!-- upload image at specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050057/220317_Figure_6.jpg" alt="" /></p>
<p class="source">Figure 6. Previous semiconductor vs. Neuromorphic semiconductor</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050057/220317_Figure_6.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at specified size --></p>
<p>The advantage of such a structure is maximum parallelism at minimum energy, and neuromorphic chips can secure a great advantage in certain applications. Because the structure is fixed, it lacks programmability, but it can offer a great advantage in certain small-scale edge computing applications. In fact, neuromorphic processors are significant in applications such as processing AI signals from IoT sensors with high energy efficiency, or image classification that processes large amounts of video data using a CNN with fixed weights. However, because the weights are fixed, they are difficult to use in applications that require continued learning. It is also difficult to exploit parallelism by interconnecting several chips, a structural limitation for large-scale computations, so the practical area of application is restricted to edge computing. The neuromorphic structure can also be realized in digital form, as in IBM’s TrueNorth; its scalability, however, is known to be limited, making wide practical application difficult.</p>
<h3 class="tit">Current Status of AI Chip Development</h3>
<p>To create a smart digital assistant that can converse with humans, Meta (formerly known as Facebook), which needs to process massive amounts of user data, is <a class="-as-ga" style="text-decoration: underline;" href="https://engineering.fb.com/2021/06/28/data-center-engineering/asicmon/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://engineering.fb.com/2021/06/28/data-center-engineering/asicmon/">designing an AI chip</a> specialized for models with basic knowledge about the world. The company is also internally <a class="-as-ga" style="text-decoration: underline;" href="https://www.theinformation.com/articles/facebook-develops-new-machine-learning-chip" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.theinformation.com/articles/facebook-develops-new-machine-learning-chip">developing AI chips</a> that will perform moderation, deciding whether real-time videos uploaded to Facebook should be posted.</p>
<p>Amazon, a technology company that mainly focuses on e-commerce and cloud computing, has already developed its own AI accelerator called <a class="-as-ga" style="text-decoration: underline;" href="https://aws.amazon.com/ko/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://aws.amazon.com/ko/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/">AWS Inferentia</a> to power its digital assistant Alexa and uses it to recognize audio signals. Cloud service provider AWS has built an infrastructure around the Inferentia chip and provides cloud users with services that accelerate deep learning workloads, much like Google’s TPU.</p>
<p>Microsoft, on the other hand, <a class="-as-ga" style="text-decoration: underline;" href="https://www.cnbc.com/2018/05/07/microsoft-is-luring-a-i-developer-by-offering-them-faster-chips.html" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.cnbc.com/2018/05/07/microsoft-is-luring-a-i-developer-by-offering-them-faster-chips.html">uses field-programmable gate arrays (FPGAs) in its data centers</a> and has introduced a method of securing the best performance by reconfiguring precision and DNN structure according to application algorithms, creating AI chips optimized not only for current applications but also for future ones. This method, however, incurs a lot of overhead to reconfigure the structure and logic circuit even after an optimal structure has been identified. As a result, it is unclear whether it will deliver an actual benefit, since it is inevitably disadvantaged in energy and performance compared to ASIC chips designed for specific purposes.</p>
<p>A number of fabless startups are competing against NVIDIA by developing general-purpose programmable accelerators that are not specialized to certain areas of application. Many companies, including Cerebras Systems, Graphcore, and Groq, are joining the fierce competition. In Korea, SK Telecom, in collaboration with SK hynix, has developed SAPEON, which will soon be used as an AI chip in data centers. Furiosa AI is also preparing to commercialize its silicon chip, Warboy.</p>
<p><!-- upload image at specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050059/220317_Figure_7._SAPEON_X220.jpg" alt="" /></p>
<p class="source">Figure 7. SAPEON X220<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.sktelecom.com/en/press/press_detail.do?idx=1492" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.sktelecom.com/en/press/press_detail.do?idx=1492">SK Telecom Press Release</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050059/220317_Figure_7._SAPEON_X220.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at specified size --></p>
<h3 class="tit">The Importance of the Compiler</h3>
<p>The performance of such AI hardware depends greatly on how optimized its software is. Operating thousands or tens of thousands of computational circuits at the same time through a systolic array and gathering the results efficiently requires highly advanced coordination. Ordering the input data so that it feeds the numerous computational circuits in the AI chip, keeps them working continuously in lockstep, and then transmits the output to the next stage can only be done through a specialized library. This means that developing an efficient library, and the compiler to use it, is as important as designing the hardware.</p>
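The lockstep scheduling such a library must produce can be illustrated with a minimal output-stationary systolic-array simulation, a simplified sketch rather than any vendor's actual design:

```python
# Minimal output-stationary systolic-array simulation (illustrative).
# PE (i, j) accumulates c[i][j]; rows of A flow right, columns of B flow
# down, skewed so that matching operands meet in the same cycle.
def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    # At cycle t, PE (i, j) sees A[i][k] and B[k][j] with k = t - i - j.
    for t in range(3 * n - 2):                 # total wavefront cycles
        for i in range(n):
            for j in range(n):
                k = t - i - j
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))   # [[19, 22], [43, 50]]
```

The skewing by `i + j` is exactly the kind of input ordering the text describes: every processing element stays busy once the wavefront reaches it, and the compiler's job is to produce this schedule for real hardware.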
<p>The NVIDIA GPU started as a graphics engine. But NVIDIA provided a development environment, <a class="-as-ga" style="text-decoration: underline;" href="https://developer.nvidia.com/cuda-toolkit" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://developer.nvidia.com/cuda-toolkit">CUDA</a>, that let users write programs easily and run them efficiently on the GPU, which made it widely used across the AI community. Google also provides its own development environment, <a class="-as-ga" style="text-decoration: underline;" href="https://www.tensorflow.org/learn" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.tensorflow.org/learn">TensorFlow</a>, to help develop software using TPUs, making it easy for users to utilize them. More diverse development environments must be provided in the future, which will increase the applicability of AI chips.</p>
<h3 class="tit">AI Chip and its Energy Consumption</h3>
<p>The direction of AI services in the future must focus on enhancing the quality of service while reducing the energy required. Efforts are therefore expected to concentrate on lowering the power consumption of AI chips and accelerating the development of energy-saving DNN structures. In fact, it is known that roughly 10<sup>19</sup> floating-point operations are required to train an ImageNet model to an error rate below 5%, which is equivalent to the amount of energy consumed by the citizens of New York City in a month. In the example of <a class="-as-ga" style="text-decoration: underline;" href="https://deepmind.com/research/case-studies/alphago-the-story-so-far" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://deepmind.com/research/case-studies/alphago-the-story-so-far">AlphaGo</a>, which played Go against 9-dan professional player Lee Sedol in 2016, <a class="-as-ga" style="text-decoration: underline;" href="https://www.businessinsider.com/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.businessinsider.com/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3">a total of 1,202 CPUs and 176 GPUs were used</a> for inference, consuming an estimated 1 MW of power. This is tremendous compared with the human brain, which uses only about 20 W.</p>
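A quick back-of-envelope check makes the scale of this gap concrete (the two power figures below are the estimates quoted above; everything else is simple arithmetic):

```python
# Compare the estimated power draw of the 2016 AlphaGo system with the
# typical estimate for the human brain.
alphago_power_w = 1_000_000   # ~1 MW, as estimated for the CPU/GPU cluster
brain_power_w = 20            # common estimate for the human brain

ratio = alphago_power_w / brain_power_w
print(f"AlphaGo drew roughly {ratio:,.0f}x the power of a human brain")
# prints: AlphaGo drew roughly 50,000x the power of a human brain
```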
<p>AlphaGo Zero, which was developed later, surpassed AlphaGo's performance after only 72 hours of training through self-play reinforcement learning on just four TPUs. This case demonstrates the potential for reducing energy consumption with new neural network structures and learning methods, and research and development on energy-saving DNN structures must continue to be pursued.</p>
<h3 class="tit">The Future of the AI Semiconductor Market</h3>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050101/220317_Figure_8.jpg" alt="" /></p>
<p class="source">Figure 8. AI Chip Market Outlook<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.statista.com/statistics/1283358/artificial-intelligence-chip-market-size/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.statista.com/statistics/1283358/artificial-intelligence-chip-market-size/">Statista</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050101/220317_Figure_8.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at the specified size --></p>
<p>The successful accomplishments made in the field of AI will expand its scope of application, triggering remarkable market growth as well. For example, SK hynix recently developed a next-generation intelligent semiconductor memory, or processing-in-memory (PIM)<sup>6</sup>, to resolve the data-access bottleneck in AI and big data processing. SK hynix unveiled the <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/">‘GDDR6-AiM (Accelerator in Memory)’ sample</a> as its first product to apply PIM, and announced the achievement at the International Solid-State Circuits Conference, ISSCC 2022<sup>7</sup>, one of the most authoritative international conferences in the semiconductor field, held in San Francisco at the end of February this year.</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050035/220317_Figure_9._%E2%80%98GDDR6-AiM%E2%80%99_of_SK_hynix.jpg" alt="" /></p>
<p class="source">Figure 9. GDDR6-AiM developed by SK hynix</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050035/220317_Figure_9._%E2%80%98GDDR6-AiM%E2%80%99_of_SK_hynix.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // upload image at the specified size --></p>
<p>Application systems will further drive a wider AI market and continuously create new areas, with service quality differentiated by the quality of inference that the underlying neural network structure delivers. AI semiconductors, which are the backbone of the AI system, will be differentiated by how fast and accurately they can perform inference and training tasks while consuming little energy. Recent research findings show that the energy efficiency of today's AI systems is still extremely poor. There is therefore a growing need for research on new neural network structures that focus not only on function but also on energy efficiency. In terms of hardware, the core element that determines energy efficiency is the memory access method. As such, processing-in-memory (PIM), which performs computations inside the memory rather than repeatedly moving data to a separate processor, and neuromorphic computing, which mimics biological neural networks by storing synaptic weights in analog memory, will become important fields of research.</p>
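The data-movement advantage of PIM described above can be illustrated with a toy traffic model (purely hypothetical numbers and function names, not an SK hynix specification): for a matrix-vector multiply, a conventional processor must pull the entire weight matrix across the memory bus, while a PIM device keeps the matrix in place and moves only the input and output vectors.

```python
# Toy model: bytes crossing the memory bus for one matrix-vector multiply,
# conventional processor vs. processing-in-memory (PIM).

def bus_traffic_bytes(rows, cols, elem_bytes=2):
    # Conventional: matrix + input vector travel to the processor,
    # result vector travels back.
    conventional = (rows * cols + cols + rows) * elem_bytes
    # PIM: the matrix never leaves memory; only the input vector goes in
    # and the result vector comes out.
    pim = (cols + rows) * elem_bytes
    return conventional, pim

conv, pim = bus_traffic_bytes(4096, 4096)
print(f"conventional: {conv:,} B, PIM: {pim:,} B, ~{conv / pim:.0f}x less traffic")
```

The model ignores caching and batching, but it shows why moving computation to the data, rather than data to the computation, is the lever for energy efficiency: bus traffic, not arithmetic, dominates the energy cost of memory-bound AI workloads.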
<p><!-- footnote style --></p>
<div style="border-top: 1px solid #e0e0e0;"></div>
<p><!--<strong>[Reference]</strong>--></p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup>Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data.<br />
<sup>2</sup>A convolutional neural network (CNN) is a type of artificial neural network, used in image recognition and processing, that is specifically designed to process pixel data.<br />
<sup>3</sup>Homomorphic encryption is a form of encryption that permits users to perform computations on encrypted data without first decrypting it.<br />
<sup>4</sup>A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.<br />
<sup>5</sup>An accelerator is special-purpose hardware built from processing and computation chips.<br />
<sup>6</sup>Processing in memory (PIM, sometimes called processor in memory) is a next-generation technology that resolves data congestion issues in AI and big data processing by adding computational functions to semiconductor memory. A product based on this technology is sometimes called a PIM chip.<br />
<sup>7</sup>The International Solid-State Circuits Conference was held virtually from Feb. 20 to Feb. 28 this year with the theme of “Intelligent Silicon for a Sustainable World.”</p>
<p><!-- //footnote style --></p>
<p><!-- contributed article style --></p>
<p><!-- namecard --></p>
<div class="namecard">
<p><img decoding="async" class="alignnone size-full wp-image-3446" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/02/18062629/Dong_kyoon_Jeong.png" alt="" /></p>
<div class="name">
<p class="tit">By <strong>Deog-kyoon Jeong, Ph.D.</strong></p>
<p><span class="sub">Professor<br />
Electrical &amp; Computer Engineering<br />
Seoul National University (SNU) College of Engineering<br />
</span></p>
</div>
</div>
<p><!-- //contributed article style --></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/">The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
