<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Application - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/application/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Wed, 18 May 2022 23:49:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>Application - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SK hynix Flaunts Its Latest Solutions for Server Applications at Intel Vision</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-flaunts-its-latest-server-based-products-at-intel-vision/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 12 May 2022 00:00:40 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[solution]]></category>
		<category><![CDATA[Application]]></category>
		<category><![CDATA[IntelVision]]></category>
		<category><![CDATA[Server]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=9088</guid>

					<description><![CDATA[<p>Global DRAM leader SK hynix exhibited at the Intel Vision conference from May 10-11, introducing the latest memory solutions for server applications, including DDR5 DIMM alongside its next-generation solutions such as Processing in Memory (PiM) and Compute Express Link (CXL).  As part of the Intel® ON Series, Intel Vision is a newly envisioned ICT conference [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-flaunts-its-latest-server-based-products-at-intel-vision/">SK hynix Flaunts Its Latest Solutions for Server Applications at Intel Vision</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Global DRAM leader SK hynix exhibited at the Intel Vision conference from May 10-11, introducing its latest memory solutions for server applications, including DDR5 DIMM alongside next-generation solutions such as Processing in Memory (PiM) and Compute Express Link (CXL).</p>
<p>As part of the <a class="-as-ga" style="text-decoration: underline;" href="https://www.intel.com/content/www/us/en/events/on-event-series/innovation.html" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.intel.com/content/www/us/en/events/on-event-series/innovation.html">Intel® ON</a> Series, Intel Vision is a newly envisioned ICT conference and exhibition being held this year for the first time. Decision makers from major players in the technology field, as well as renowned industry opinion leaders, were invited to the event featuring the latest innovations and technologies from Intel and its partners.</p>
<p>
<img loading="lazy" decoding="async" class="alignnone size-full wp-image-9103" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093336/SKhynix_1000x600_0511_1.png" alt="" width="1000" height="600" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093336/SKhynix_1000x600_0511_1.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093336/SKhynix_1000x600_0511_1-667x400.png 667w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093336/SKhynix_1000x600_0511_1-768x461.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Image 1. View of SK hynix’s booth at Intel Vision</p>
<p>&nbsp;</p>
<p>SK hynix is a key player in the memory field and has a long-standing partnership with Intel. The solid relationship was on full display at the hybrid online and offline event, where SK hynix was an invited guest.</p>
<p>At its booth, SK hynix presented its <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-launches-worlds-first-ddr5-dram/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-launches-worlds-first-ddr5-dram/">DDR5<sup>1)</sup> DRAM, developed in October 2020 as the first of its kind in the world</a>. The company continued its dominance as a leader in DRAM technology by releasing the <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-becomes-the-industrys-first-to-ship-24gb-ddr5-samples/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-becomes-the-industrys-first-to-ship-24gb-ddr5-samples/">industry’s largest density 24 Gb (gigabit) DDR5 product sample</a> in December 2021.</p>
<p>DDR5 allows for high-speed processing with bandwidth at least 50% faster than DDR4 and can support high densities of up to 256 GB using TSV technology. It also proves more trustworthy by self-correcting 1-bit errors with a built-in Error Correcting Code (ECC). Systems using SK hynix’s DDR5 are expected to see reliability improve by roughly 20 times.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093501/SKhynix_1000x600_0511_2.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093452/SKhynix_1000x600_0511_3.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093442/SKhynix_1000x600_0511_4.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">Image 2. SK hynix&#8217;s latest memory solutions for server applications presented at Intel Vision</p>
<p>&nbsp;</p>
<p>These features allow for more stable and seamless usage in big data processes like cloud computing, artificial intelligence (AI), and machine learning (ML), as well as the metaverse.</p>
<p>It’s also the most environmentally beneficial DDR product to date with a low operating voltage of 1.1 V, reducing electricity consumption by 20%. Along with the premium memory HBM3<sup>2)</sup>, these products will continue to carry the load from a total cost of ownership (TCO) standpoint.</p>
<p>SK hynix also introduced its GDDR6-AiM, the company’s latest PiM<sup>3)</sup> solution, as well as its Compute Express Link (CXL) capabilities.</p>
<p><a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/">GDDR6-AiM</a> was first unveiled at ISSCC<sup>4)</sup> in San Francisco in early 2022. It allows computational functions to be added to memory chips. When combined with a CPU/GPU, GDDR6-AiM can improve overall processing speeds by up to 16 times. The next-generation intelligent memory chip can be used where fast computations are needed, like machine learning and high-performance computing (HPC). It reduces power consumption in the CPU/GPU by reducing data transfer, thereby lowering energy usage by approximately 80% compared to previous products. That in turn is expected to make it more effective in lowering carbon emissions.</p>
<p>CXL<sup>5)</sup> is a new, up-and-coming interface solution that is expected to contribute to expanded memory performance and enhanced speeds.</p>
<p>“Participating at Intel Vision further solidified our partnership with Intel,” said Sungsoo Ryu, Head of DRAM Product Planning &amp; Enabling at SK hynix. “SK hynix plans to continue to strengthen its competitiveness in providing total memory solutions, from datacenter memory like DDR5 and CXL memory to memory solutions facing client devices.”</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-9107" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093544/SKhynix_1000x600_0511_5.png" alt="" width="1000" height="600" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093544/SKhynix_1000x600_0511_5.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093544/SKhynix_1000x600_0511_5-667x400.png 667w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/05/11093544/SKhynix_1000x600_0511_5-768x461.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Image 3. Miniature of M16 Fab at SK hynix Icheon Campus</p>
<p>&nbsp;</p>
<p>By participating in the Intel Vision event, SK hynix raised expectations for future endeavors by further committing to R&amp;D in the memory industry and solidifying its cooperation and partnership with Intel.</p>
<div style="border-top: 1px solid #e0e0e0;"></div>
<p>&nbsp;</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup>DDR (Double Data Rate): DRAM standard specification defined by the Joint Electron Device Engineering Council (JEDEC). DDR5 is the next-generation DRAM standard set to replace the current DDR4.<br />
<sup>2</sup>HBM (High Bandwidth Memory): Following previous generations of HBM, HBM2, and HBM2E, HBM3 is an upgrade to HBM2 specifications with increased bandwidth and capacities.<br />
<sup>3</sup>PiM (Processing in Memory): Next-generation technology that provides a solution for data congestion issues for AI and big data by adding computational functions to semiconductor memory.<br />
<sup>4</sup>ISSCC: The International Solid-State Circuits Conference was held virtually from February 20-24, 2022, under the theme, “Intelligent Silicon for a Sustainable World.”<br />
<sup>5</sup>CXL (Compute Express Link) Memory: Heterogeneous computing memory interface that differs from the existing DDRx interface. The CXL interface can realize memories such as bandwidth- and capacity-expansion memory, persistent memory, and pooled memory. Major players in the datacenter industry ecosystem are currently participating in the CXL Consortium.</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-flaunts-its-latest-server-based-products-at-intel-vision/">SK hynix Flaunts Its Latest Solutions for Server Applications at Intel Vision</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 24 Mar 2022 07:00:24 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[AI Chip]]></category>
		<category><![CDATA[Neuromorphic Semiconductor]]></category>
		<category><![CDATA[Application]]></category>
		<category><![CDATA[Edge Computing]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=8640</guid>

					<description><![CDATA[<p>Artificial intelligence (AI), which is regarded as ‘the most significant paradigm shift in history,’ is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, artificial intelligence has already exceeded human intelligence and learning speed, and is now quickly being applied across [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/">The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Artificial intelligence (AI), regarded as ‘the most significant paradigm shift in history,’ is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, AI has already exceeded human intelligence and learning speed in certain domains, and is quickly being applied across various areas, affecting many aspects of our lives. What are the key applications of AI technology, and how are they realized?</p>
<p>(Check <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/the-present-and-future-of-ai-semiconductor/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/the-present-and-future-of-ai-semiconductor/">here</a> to discover more insights from SNU professor Deog-Kyoon Jeong about AI semiconductors!)</p>
<h3 class="tit">Cloud Computing vs. Edge Computing</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050037/220317_Figure_1.jpg" alt="" /></p>
<p class="source">Figure 1. Cloud Computing vs. Edge Computing</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050037/220317_Figure_1.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>One AI application area, the antipode of cloud services, is edge computing<sup>1</sup>. Applications that must process massive amounts of input data, such as video or image streams, either process the data with edge computing or transfer it to a cloud service over wired or wireless links, preferably after reducing the data volume. Accelerators designed specifically for edge computing make up a large share of AI chip design. AI chips used in autonomous driving are a good example: they perform image classification and object detection on data-heavy images using CNNs<sup>2</sup> and a series of neural operations.</p>
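<p>The core operation behind CNN-based image classification and object detection is a small filter slid across the image. Below is a minimal sketch of that operation in plain NumPy; the 1&#215;2 filter values are hypothetical, chosen only to illustrate vertical-edge detection:</p>

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' sliding-window filter (cross-correlation, as CNNs use)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge and a hypothetical 1x2 edge-detection kernel:
image = np.array([[0., 0., 1., 1.]] * 4)
kernel = np.array([[-1., 1.]])
edges = conv2d(image, kernel)   # responds only where the step occurs
```

<p>A real CNN stacks many such filters with learned values, but the data-movement pattern, and hence the accelerator design problem, is already visible in this loop.</p>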
<h3 class="tit">AI and the Issue of Privacy</h3>
<p><!-- 이미지 롤링 swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050042/220317_Figure_2._AmazonAlexa.png" alt="" /></p>
<p class="source">Figure 2. Amazon’s Alexa<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/">NY Times</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050042/220317_Figure_2._AmazonAlexa.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050045/220317_Figure_2._SKT_NUGU.jpg" alt="" /></p>
<p class="source">Figure 2. SK Telecom’s NUGU<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.nugu.co.kr/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.nugu.co.kr/">SKT NUGU</a> )</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050045/220317_Figure_2._SKT_NUGU.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
</div>
</div>
<p><!-- btn / paging --></p>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p><!-- // 이미지 롤링 swiper start --></p>
<p>Another area of AI application is conversational services such as Amazon’s Alexa or SK Telecom’s NUGU. However, such services cannot be widely adopted if privacy is not protected. A conversational AI service, in which a microphone continuously listens in on conversations at home, cannot by nature develop beyond a simple recreational service, so many efforts are being made to resolve these privacy issues.</p>
<p>The latest research trend for solving the privacy issue is homomorphic encryption<sup>3</sup>. With homomorphic encryption, users’ voice or other sensitive information such as medical data is not transmitted as is. It is a form of encryption that allows multiplication and addition to be performed on encrypted data (ciphertext), which only the user can decrypt, on a cloud service without decrypting it first. The results are sent back to the user in encrypted form, and only the user can decrypt them; no one other than the individual user, including the server, can see the original data. A homomorphic service, however, requires an immense amount of computation, several thousand to tens of thousands of times more than an ordinary plaintext DNN<sup>4</sup> service. A key area of future research will be reducing service time by dramatically enhancing computation performance through specially designed homomorphic accelerators<sup>5</sup>.</p>
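<p>The additive property described above can be illustrated with a toy version of the textbook Paillier cryptosystem. This is only a sketch of the general idea, using insecurely small primes; it is not the scheme any particular service uses, and practical homomorphic DNN inference relies on far more elaborate lattice-based schemes:</p>

```python
import random
from math import gcd

def keygen(p=1117, q=1129):
    """Textbook Paillier keys (toy primes; real keys are thousands of bits)."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                          # valid because g = n + 1
    return n, (lam, mu)

def encrypt(n, m):
    """c = g^m * r^n mod n^2, with g = n + 1 and random r coprime to n."""
    nsq = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, nsq) * pow(r, n, nsq) % nsq

def decrypt(n, priv, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

n, priv = keygen()
c1, c2 = encrypt(n, 42), encrypt(n, 100)
c_sum = c1 * c2 % (n * n)   # multiplying ciphertexts adds the plaintexts
assert decrypt(n, priv, c_sum) == 142
```

<p>Multiplying two ciphertexts modulo n&#178; corresponds to adding the underlying plaintexts, which is exactly the property the paragraph describes: the server computes on data it can never read.</p>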
<h3 class="tit">AI Chip and Memory</h3>
<p>In a large-scale DNN, the number of weights is too high for a processor to hold all of them. The processor therefore has to issue a read access whenever it needs a weight stored in external large-capacity DRAM. If a weight is used only once and never reused, the data fetched at considerable cost in energy and time is wasted; this is extremely inefficient compared to storing and reusing all weights inside the processor. Processing large amounts of data with the enormous number of weights in a large-scale DNN therefore requires parallel connections and/or batch operations that reuse the same weights many times. In other words, computations should be performed by connecting several processors, each paired with DRAM, in parallel, distributing the weights and intermediate data across the DRAMs so they can be reused. High-speed interconnection among the processors is essential in this structure, which is more efficient than having all processors share one access path, and only this structure can deliver maximum performance.</p>
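<p>The payoff from weight reuse can be seen with a back-of-the-envelope calculation. Assuming, hypothetically, 2-byte (FP16) weights and counting only weight traffic while ignoring activations, the useful work per byte fetched from DRAM grows linearly with how many inputs share each weight fetch:</p>

```python
def weight_reuse_intensity(in_dim, out_dim, batch, bytes_per_weight=2):
    """FLOPs performed per byte of weight data fetched from DRAM.

    Each weight is loaded once and used `batch` times; one use is a
    multiply-accumulate, counted as 2 FLOPs. Activation traffic is ignored.
    """
    flops = 2 * in_dim * out_dim * batch
    weight_bytes = in_dim * out_dim * bytes_per_weight
    return flops / weight_bytes

# With no reuse (batch = 1) every fetched byte supports only 1 FLOP;
# batching 32 inputs reuses each weight 32 times.
assert weight_reuse_intensity(4096, 4096, 1) == 1.0
assert weight_reuse_intensity(4096, 4096, 32) == 32.0
```

<p>The layer dimensions here are arbitrary; the point is that without reuse, memory bandwidth, not arithmetic, bounds performance.</p>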
<h3 class="tit">Interconnection of AI Chips</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/18084455/220318_SK-hynix_0308_02.png" alt="" /></p>
<p class="source">Figure 3. Interconnection Network of AI Chips</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/18084455/220318_SK-hynix_0308_02.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The performance bottleneck that occurs when connecting numerous processors depends on the provided bandwidth and latency as well as the topology of the interconnection. These elements define the size and performance of the DNN. In other words, if one tries to obtain ‘N-times’ higher performance by connecting ‘N’ accelerators in parallel, a bottleneck arises in the latency and bandwidth of the interconnections, and the desired performance will not be delivered.</p>
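<p>This falloff can be sketched with a simple, purely illustrative cost model: compute time per step shrinks as 1/N, but a fixed interconnect cost per step does not, so the achieved speedup flattens well below N (the 5% communication figure is a hypothetical parameter, not a measurement):</p>

```python
def parallel_speedup(n, compute_time=1.0, comm_time=0.05):
    """Speedup over one device when compute divides by n but a fixed
    interconnect cost per step remains (illustrative numbers only)."""
    return compute_time / (compute_time / n + comm_time)

# Scaling falls away from the ideal as n grows:
assert abs(parallel_speedup(8) - 5.714) < 0.001    # ideal would be 8
assert abs(parallel_speedup(64) - 15.238) < 0.001  # ideal would be 64
```

<p>Lowering the fixed term, by raising bandwidth, cutting latency, or choosing a better topology, is exactly what the interconnect designs discussed next aim to do.</p>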
<p>Therefore, the interconnection structure between processors is crucial for efficiently scaling performance. In the NVIDIA A100 GPU, NVLink 3.0 plays that role. The GPU has 12 NVLink channels, each providing 50 GBps of bandwidth. Four GPUs can be connected directly as a clique, using 4 channels per peer, but connecting 16 GPUs requires NVSwitch, an external chip dedicated to interconnection. Google’s TPU v2, for its part, is designed to form a 2D torus using its Inter-Core Interconnect (ICI) with an aggregate bandwidth of 496 GBps.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050053/220317_Figure_4._Nvidia%E2%80%99s_GPU_Accelerator_A100_using_6_HBMs.jpg" alt="" /></p>
<p class="source">Figure 4. NVIDIA’s GPU Accelerator A100 using 6 HBMs<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.theverge.com/2020/5/14/21258419/nvidia-ampere-gpu-ai-data-centers-specs-a100-dgx-supercomputer" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.theverge.com/2020/5/14/21258419/nvidia-ampere-gpu-ai-data-centers-specs-a100-dgx-supercomputer">The Verge</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050053/220317_Figure_4._Nvidia%E2%80%99s_GPU_Accelerator_A100_using_6_HBMs.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The way in which processors are interconnected has a huge impact on the whole system. For example, a mesh or torus structure is easy to build because the physical connections between chips are simple, but latency grows with distance, since reaching a far-away node requires hopping across several processors. The most extreme approach is a clique that interconnects every pair of processors directly, but the number of links then grows quadratically, as N(N-1)/2, and the per-chip pin count grows with N, causing PCB congestion beyond what is allowable; in actual designs, about four processors is the practical limit.</p>
<p>Most commonly, a crossbar switch such as NVSwitch is another attractive option, but this method concentrates all connections on the switch. The more processors you want to interconnect, the more difficult the PCB layout becomes, as transmission lines converge around the switch. A better method is to structure the whole network as a binary tree, connecting processors at the leaves and allocating the most bandwidth toward the top of the tree. A binary fat tree is therefore the ideal structure and can deliver maximum performance with scalability.</p>
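<p>The wiring costs of the topologies above can be compared by simply counting links; the totals follow directly from the definitions of each topology:</p>

```python
def clique_links(n):
    """All-to-all: every pair gets a direct link, so links grow quadratically."""
    return n * (n - 1) // 2

def torus_2d_links(rows, cols):
    """2D torus with wraparound: each node owns one 'right' and one 'down' link."""
    return 2 * rows * cols

assert clique_links(4) == 6        # feasible: 3 ports per chip
assert clique_links(16) == 120     # 15 ports per chip (impractical)
assert torus_2d_links(4, 4) == 32  # always 4 ports per chip, at any scale
```

<p>The clique's per-chip port count grows with N while the torus stays constant, which is why cliques stop at a handful of processors and larger systems move to tori, switches, or fat trees.</p>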
<h3 class="tit">Neuromorphic AI Chip</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050055/220317_Figure_5.jpg" alt="" /></p>
<p class="source">Figure 5. Cloud Server Processor vs. Neuromorphic AI Processor</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050055/220317_Figure_5.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>Processors for cloud servers that serve as DNN accelerators represent and process data digitally, since their computational structure is fundamentally a simulation of a neural network in software running on hardware. Recently, research has grown on neuromorphic AI chips which, unlike this simulation approach, directly mimic the neural network of a living organism and its signals, mapping them onto analog electronic circuits that behave the same way. In this approach the original data is represented in analog form: one signal is represented at one node, the interconnections are hardwired rather than defined by software, and the weights are stored in analog form.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050057/220317_Figure_6.jpg" alt="" /></p>
<p class="source">Figure 6. Previous semiconductor vs. Neuromorphic semiconductor</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050057/220317_Figure_6.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The advantage of such a structure is maximum parallelism at minimum energy, and neuromorphic chips can secure a great advantage in certain applications. Because the structure is fixed, it lacks programmability, but it can excel in certain small-scale edge computing applications. In fact, neuromorphic processors are significant in applications such as processing AI signals from IoT sensors with high energy efficiency, or image classification that processes large amounts of video data using a CNN with fixed weights. However, because the weights are fixed, neuromorphic chips are difficult to use in applications that require continued learning. It is also hard to exploit parallelism by interconnecting several chips for large-scale computations due to structural limitations, restricting their practical use to edge computing. The neuromorphic structure can also be realized in digital form, as in IBM’s TrueNorth, but its scalability is known to be limited, making wide practical application difficult.</p>
<h3 class="tit">Current Status of AI Chip Development</h3>
<p>To create a smart digital assistant that can converse with humans, Meta (formerly Facebook), which needs to process massive amounts of user data, is <a class="-as-ga" style="text-decoration: underline;" href="https://engineering.fb.com/2021/06/28/data-center-engineering/asicmon/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://engineering.fb.com/2021/06/28/data-center-engineering/asicmon/">designing an AI chip</a> specialized to have basic knowledge about the world. The company is also internally <a class="-as-ga" style="text-decoration: underline;" href="https://www.theinformation.com/articles/facebook-develops-new-machine-learning-chip" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.theinformation.com/articles/facebook-develops-new-machine-learning-chip">developing AI chips</a> that will perform moderation, deciding whether real-time videos uploaded to Facebook may be posted.</p>
<p>Amazon, a technology company focused on e-commerce and cloud computing, has already developed its own AI accelerator called <a class="-as-ga" style="text-decoration: underline;" href="https://aws.amazon.com/ko/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://aws.amazon.com/ko/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/">AWS Inferentia</a> to power its digital assistant Alexa, using it to recognize audio signals. Its cloud arm AWS has built an infrastructure around the Inferentia chip and offers cloud users services that accelerate deep learning workloads, much like Google’s TPU.</p>
<p>Microsoft, on the other hand, <a class="-as-ga" style="text-decoration: underline;" href="https://www.cnbc.com/2018/05/07/microsoft-is-luring-a-i-developer-by-offering-them-faster-chips.html" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.cnbc.com/2018/05/07/microsoft-is-luring-a-i-developer-by-offering-them-faster-chips.html">uses field-programmable gate arrays (FPGAs) in its data centers</a> and has introduced a method of securing the best performance by reconfiguring the precision and DNN structure according to the application algorithm, aiming at AI chips optimized not only for current applications but also for future ones. This method, however, incurs a lot of overhead to reconfigure the structure and logic circuits even once an optimal structure has been identified. It is therefore unclear whether it offers a real benefit, since it is inevitably at a disadvantage in energy and performance compared to ASICs designed for specific purposes.</p>
<p>A number of fabless startups are competing against NVIDIA by developing general-purpose programmable accelerators that are not specialized to particular application areas. Many companies, including Cerebras Systems, Graphcore, and Groq, have joined the fierce competition. In Korea, SK Telecom, in collaboration with SK hynix, has developed SAPEON, which will soon be used as an AI chip in data centers, and Furiosa AI is preparing to commercialize its silicon chip, Warboy, as well.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050059/220317_Figure_7._SAPEON_X220.jpg" alt="" /></p>
<p class="source">Figure 7. SAPEON X220<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.sktelecom.com/en/press/press_detail.do?idx=1492" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.sktelecom.com/en/press/press_detail.do?idx=1492">SK Telecom Press Release</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050059/220317_Figure_7._SAPEON_X220.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<h3 class="tit">The Importance of the Compiler</h3>
<p>The performance of such AI hardware depends greatly on how well its software is optimized. Operating thousands or tens of thousands of computational circuits simultaneously through a systolic array and gathering the results efficiently requires highly advanced coordination. Ordering the input data so that the numerous computational circuits in an AI chip are fed continuously and work in lockstep, then transmitting the output to the next stage, can only be done through a specialized library. Developing an efficient library, and the compiler that uses it, is therefore as important as designing the hardware.</p>
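<p>The lockstep feeding described above can be made concrete with a small simulation. The sketch below is a simplified, hypothetical model of an output-stationary systolic array computing C = A&#183;B, not any specific chip's design: operands enter skewed at the left and top edges, shift one processing element (PE) per cycle, and every PE performs one multiply-accumulate per cycle:</p>

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle toy model of an n x n output-stationary systolic array."""
    n = A.shape[0]                        # assumes square n x n inputs
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))              # operand each PE holds, flowing right
    b_reg = np.zeros((n, n))              # operand each PE holds, flowing down
    for t in range(3 * n - 2):            # cycles until all operands drain
        a_reg[:, 1:] = a_reg[:, :-1].copy()   # shift A operands one PE right
        b_reg[1:, :] = b_reg[:-1, :].copy()   # shift B operands one PE down
        for i in range(n):                # feed row i of A, skewed by i cycles
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
        for j in range(n):                # feed column j of B, skewed by j cycles
            k = t - j
            b_reg[0, j] = B[k, j] if 0 <= k < n else 0.0
        C += a_reg * b_reg                # every PE does one MAC, in lockstep
    return C

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
assert np.allclose(systolic_matmul(A, B), A @ B)
```

<p>The input skewing done by the two feed loops is precisely the kind of data ordering that, on real hardware, a specialized library and compiler must generate.</p>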
<p>The NVIDIA GPU started as a graphics engine, but NVIDIA provided a development environment, <a class="-as-ga" style="text-decoration: underline;" href="https://developer.nvidia.com/cuda-toolkit" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://developer.nvidia.com/cuda-toolkit">CUDA</a>, that lets users write programs easily and run them efficiently on the GPU, which made it widely used across the AI community. Google likewise provides its own development environment, <a class="-as-ga" style="text-decoration: underline;" href="https://www.tensorflow.org/learn" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.tensorflow.org/learn">TensorFlow</a>, which helps developers build software that uses TPUs easily. More diverse development environments must be provided in the future, which will broaden the applicability of AI chips.</p>
<h3 class="tit">AI Chip and its Energy Consumption</h3>
<p>The direction of AI services in the future must focus squarely on enhancing the quality of service while reducing the energy consumed. Efforts are therefore expected to concentrate on reducing the power consumption of AI chips and accelerating the development of energy-saving DNN structures. It is known that training on ImageNet to below a 5% error rate takes about 10^19 floating-point operations, equivalent to the energy consumed by the citizens of New York City in a month. In the case of <a class="-as-ga" style="text-decoration: underline;" href="https://deepmind.com/research/case-studies/alphago-the-story-so-far" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://deepmind.com/research/case-studies/alphago-the-story-so-far">AlphaGo</a>, which played Go against 9-dan professional Lee Sedol in 2016, <a class="-as-ga" style="text-decoration: underline;" href="https://www.businessinsider.com/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.businessinsider.com/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3">a total of 1,202 CPUs and 176 GPUs were used</a> for inference during play, with an estimated power consumption of 1 MW: tremendous compared with the human brain, which uses only about 20 W.</p>
<p>AlphaGo Zero, developed later, exceeded AlphaGo’s performance after merely 72 hours of training using self-play reinforcement learning on only 4 TPUs. This case proves the potential for reducing energy consumption through new neural network structures and learning methods, and research and development on energy-saving DNN structures must continue.</p>
<h3 class="tit">The Future of the AI Semiconductor Market</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050101/220317_Figure_8.jpg" alt="" /></p>
<p class="source">Figure 8. AI Chip Market Outlook<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.statista.com/statistics/1283358/artificial-intelligence-chip-market-size/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.statista.com/statistics/1283358/artificial-intelligence-chip-market-size/">Statista</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050101/220317_Figure_8.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The successful accomplishments in the field of AI will expand its scope of application, triggering stunning market growth as well. For example, SK hynix recently developed a next-generation intelligent semiconductor memory, processing-in-memory (PIM)<sup>6</sup>, to resolve the data-access bottleneck in AI and big data processing. SK hynix unveiled the <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/">‘GDDR6-AiM (Accelerator in Memory)’ sample</a> as the first product to apply PIM, and announced its PIM development at the International Solid-State Circuits Conference (ISSCC) 2022<sup>7</sup>, the most authoritative international conference in the semiconductor field, held at the end of February this year.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050035/220317_Figure_9._%E2%80%98GDDR6-AiM%E2%80%99_of_SK_hynix.jpg" alt="" /></p>
<p class="source">Figure 9. GDDR6-AiM developed by SK hynix</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050035/220317_Figure_9._%E2%80%98GDDR6-AiM%E2%80%99_of_SK_hynix.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>Application systems will further drive a wider AI market and continuously create new areas, with service quality differentiated by the quality of inference that a given neural network structure delivers. AI semiconductors, the backbone of AI systems, will be differentiated by how fast and accurately they conduct inference and training using little energy. The latest research findings show that the energy efficiency of current systems is extremely poor, so there is a growing need for research on new neural network structures focused not only on function but also on energy efficiency. In hardware, the core element that determines energy efficiency is the memory access method. As such, processing-in-memory (PIM), which computes inside the memory rather than accessing memory separately, and neuromorphic computing, which mimics the neural network by storing synapse weights in analog memory, will become important fields of research.</p>
<p><!-- 각주 스타일 --></p>
<div style="border-top: 1px solid #e0e0e0;"></div>
<p><!--<strong>[Reference]</strong>--></p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup>Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data.<br />
<sup>2</sup>Convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data.<br />
<sup>3</sup>Homomorphic encryption is a form of encryption that permits users to perform computations on its encrypted data without first decrypting it.<br />
<sup>4</sup>A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.<br />
<sup>5</sup>An accelerator is special-purpose hardware built from processing and computation chips.<br />
<sup>6</sup>Processing in memory (PIM, sometimes called processor in memory) is the next-generation technology that provides a solution for data congestion issues for AI and big data by adding computational functions to semiconductor memory. The product based on such technology is sometimes known as a PIM chip.<br />
<sup>7</sup>The International Solid-State Circuits Conference was held virtually from Feb. 20 to Feb. 28 this year with the theme “Intelligent Silicon for a Sustainable World.”</p>
<p><!-- //각주 스타일 --></p>
<p><!-- 기고문 스타일 --></p>
<p><!-- namecard --></p>
<div class="namecard">
<p><img decoding="async" class="alignnone size-full wp-image-3446" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/02/18062629/Dong_kyoon_Jeong.png" alt="" /></p>
<div class="name">
<p class="tit">By <strong>Deog-kyoon Jeong, Ph.D.</strong></p>
<p><span class="sub">Professor<br />
Electrical &amp; Computer Engineering<br />
Seoul National University (SNU) College of Engineering<br />
</span></p>
</div>
</div>
<p><!-- //기고문 스타일 --></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/">The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
