<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>GDDR6-AiM - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/gddr6-aim/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Thu, 24 Oct 2024 07:12:35 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>GDDR6-AiM - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SK hynix Presents Upgraded AiMX Solution at AI Hardware &#038; Edge AI Summit 2024</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-upgraded-aimx-solution-at-ai-hw-edge-ai-summit-2024/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 13 Sep 2024 06:00:11 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[AI Hardware & Edge AI Summit]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=15762</guid>

					<description><![CDATA[<p>A glimpse of SK hynix’s booth at the AI Hardware &#38; Edge AI Summit 2024 &#160; SK hynix unveiled an enhanced Accelerator-in-Memory based Accelerator (AiMX) card at the AI Hardware &#38; Edge AI Summit 2024 held September 9–12 in San Jose, California. Organized annually by Kisaco Research, the summit brings together representatives from the AI [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-upgraded-aimx-solution-at-ai-hw-edge-ai-summit-2024/">SK hynix Presents Upgraded AiMX Solution at AI Hardware & Edge AI Summit 2024</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15763 size-full" title="A glimpse of SK hynix’s booth at the AI Hardware &amp; Edge AI Summit 2024" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01.png" alt="A glimpse of SK hynix’s booth at the AI Hardware &amp; Edge AI Summit 2024" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">A glimpse of SK hynix’s booth at the AI Hardware &amp; Edge AI Summit 2024</p>
<p>&nbsp;</p>
<p>SK hynix unveiled an enhanced Accelerator-in-Memory based Accelerator (AiMX) card at the AI Hardware &amp; Edge AI Summit 2024 held September 9–12 in San Jose, California. Organized annually by Kisaco Research, the summit brings together representatives from the AI and machine learning ecosystem to share industry breakthroughs and developments. This year’s event focused on exploring cost and energy efficiency across the entire technology stack.</p>
<p>Marking its fourth appearance at the summit, SK hynix highlighted how its AiM<sup>1</sup> products can boost AI performance across data centers and edge devices<sup>2</sup>.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Accelerator in Memory (AiM)</strong>: SK hynix’s PIM semiconductor product name, which includes GDDR6-AiM.<br />
<sup>2</sup><strong>Edge device</strong>: Hardware that controls the flow of data at the boundary between two networks. While they fulfill numerous roles, edge devices essentially serve as the entry or exit point to a network.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15764 size-full" title="Attendees gather to learn more about the upgraded AiMX card" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02.png" alt="Attendees gather to learn more about the upgraded AiMX card" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Attendees gather to learn more about the upgraded AiMX card</p>
<p>&nbsp;</p>
<h3 class="tit">Booth Highlights: Meet the Upgraded AiMX</h3>
<p>In the AI era, high-performance memory products are vital for the smooth operation of LLMs<sup>3</sup>. However, as these LLMs are trained on increasingly large datasets and continue to expand, there is a growing need for more efficient solutions. SK hynix addresses this demand with its PIM<sup>4</sup> product AiMX, an AI accelerator card that combines multiple GDDR6-AiMs to provide high bandwidth and outstanding energy efficiency.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup><strong>Large language model (LLM)</strong>: An advanced AI system that requires extensive datasets to train models to understand and generate human-like language, enabling applications like natural language processing and translation.<br />
<sup>4</sup><strong>Processing-In-Memory (PIM)</strong>: A next-generation technology that embeds processing capabilities within memory, minimizing data transfer between the processor and memory. This boosts efficiency and speed, especially for data-intensive tasks like LLMs, where quick data access and processing are essential.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15765 size-full" title="The 32 GB AiMX prototype card was shown publicly for the first time at the event" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03.png" alt="The 32 GB AiMX prototype card was shown publicly for the first time at the event" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">The 32 GB AiMX prototype card was shown publicly for the first time at the event</p>
<p>&nbsp;</p>
<p>At the AI Hardware &amp; Edge AI Summit 2024, SK hynix presented its updated 32 GB AiMX prototype, which offers double the capacity of the original card featured at last year’s event. To highlight the new AiMX’s advanced processing capabilities in a multi-batch<sup>5</sup> environment, SK hynix held a demonstration of the prototype card with the Llama 3<sup>6</sup> 70B model, an open source LLM. In particular, the demonstration underlined AiMX’s ability to serve as a highly effective attention<sup>7</sup> accelerator in data centers.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>Multi-batch</strong>: A computer processing method in which the system groups together multiple tasks (batches) and processes them at once.<br />
<sup>6</sup><strong>Llama 3</strong>: An open source LLM developed by Meta, featuring pretrained and instruction-fine-tuned language models.<br />
<sup>7</sup><strong>Attention</strong>: Mechanisms which give LLMs context about text, lessening the model’s chance of misunderstandings and allowing it to generate more accurate and contextually relevant outputs.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084340/SK-hynix_AI-HW-Edge-AI-Summit_04.png" alt="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" width="1000" height="666" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084357/SK-hynix_AI-HW-Edge-AI-Summit_05.png" alt="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" width="1000" height="666" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities</p>
<p>&nbsp;</p>
<p>AiMX addresses the cost, performance, and power consumption challenges associated with LLMs not only in data centers, but also in edge devices and on-device AI applications. For example, when applied to mobile on-device AI applications, AiMX improves LLM speed three-fold compared to mobile DRAM while maintaining the same power consumption.</p>
<h3 class="tit">Featured Presentation: Accelerating LLM Services from Data Centers to Edge Devices​</h3>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Euicheol Lim presenting on how the AiMX system accelerates LLM services" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13005139/SK-hynix_AI-HW-Edge-AI-Summit_06.png" alt="Euicheol Lim presenting on how the AiMX system accelerates LLM services" width="1000" height="666" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Euicheol Lim presenting on how the AiMX system accelerates LLM services" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13005151/SK-hynix_AI-HW-Edge-AI-Summit_07.png" alt="Euicheol Lim presenting on how the AiMX system accelerates LLM services" width="1000" height="666" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Euicheol Lim presenting on how the AiMX system accelerates LLM services" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13005202/SK-hynix_AI-HW-Edge-AI-Summit_08.png" alt="Euicheol Lim presenting on how the AiMX system accelerates LLM services" width="1000" height="666" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Euicheol Lim presenting on how the AiMX system accelerates LLM services</p>
<p>&nbsp;</p>
<p>On the final day of the summit, SK hynix gave a presentation detailing how AiMX is an optimal solution for accelerating LLM services in data centers and edge devices. Euicheol Lim, research fellow and head of the Solution Advanced Technology team, shared the company’s plans to develop AiM products for on-device AI based on mobile DRAM and revealed the future vision for AiM. In closing, Lim emphasized the importance of close collaboration with companies involved in developing and managing data centers and edge systems to further advance AiMX products.</p>
<h3 class="tit">Looking Ahead: SK hynix’s Vision for AiMX in the AI Era</h3>
<p>The AI Hardware &amp; Edge AI Summit 2024 provided a platform for SK hynix to demonstrate AiMX’s applications in LLMs across data centers and edge devices. As a low-power, high-speed memory solution able to handle large amounts of data, AiMX is set to play a key role in the advancement of LLMs and AI applications.</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-upgraded-aimx-solution-at-ai-hw-edge-ai-summit-2024/">SK hynix Presents Upgraded AiMX Solution at AI Hardware & Edge AI Summit 2024</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Displays Next-Gen Solutions Set to Unlock AI &#038; More at OCP Global Summit 2023</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-displays-next-gen-solutions-to-unlock-ai-at-ocp-global-summit-2023/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 20 Oct 2023 00:00:27 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[Open Compute Project]]></category>
		<category><![CDATA[AiM]]></category>
		<category><![CDATA[LPDDR CAMM]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Computer Memory Solution]]></category>
		<category><![CDATA[PCIe Gen5]]></category>
		<category><![CDATA[OCP Global Summit]]></category>
		<category><![CDATA[CXL]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[eSSD]]></category>
		<category><![CDATA[HBM]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=13178</guid>

					<description><![CDATA[<p>&#160; SK hynix showcased its next-generation memory semiconductor technologies and solutions at the OCP Global Summit 2023 held in San Jose, California from October 17–19. The OCP Global Summit is an annual event hosted by the world’s largest data center technology community, the Open Compute Project (OCP), where industry experts gather to share various technologies [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-displays-next-gen-solutions-to-unlock-ai-at-ocp-global-summit-2023/">SK hynix Displays Next-Gen Solutions Set to Unlock AI & More at OCP Global Summit 2023</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="aligncenter wp-image-13179 size-full" title="SK hynix Displays Next-Gen Solutions Set to Unlock AI &amp; More at OCP Global Summit 2023" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19041417/SK-hynix_OCP-Global-Summit-2023_01.png" alt="SK hynix Displays Next-Gen Solutions Set to Unlock AI &amp; More at OCP Global Summit 2023" width="1000" height="1072" /></p>
<p>&nbsp;</p>
<p>SK hynix showcased its next-generation memory semiconductor technologies and solutions at the OCP Global Summit 2023 held in San Jose, California from October 17–19.</p>
<p>The OCP Global Summit is an annual event hosted by the world’s largest data center technology community, the <a href="https://www.opencompute.org/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">Open Compute Project (OCP)</span></a>, where industry experts gather to share various technologies and visions. This year, SK hynix and its subsidiary Solidigm showcased advanced semiconductor memory products that will lead the AI era under the slogan “United Through Technology”.</p>
<p>&nbsp;</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13190 size-full" title="SK hynix’s exhibition booth at the OCP Global Summit 2023" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055131/SK-hynix_OCP-Global-Summit-2023_image_02.png" alt="SK hynix’s exhibition booth at the OCP Global Summit 2023" width="1000" height="1072" /></p>
<p class="source">▲ Figure 1. SK hynix’s exhibition booth at the OCP Global Summit 2023</p>
<p>&nbsp;</p>
<h3 class="tit">At the Booth: Leading Global AI Memory Technologies in the Spotlight</h3>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13191 size-full" title="HBM(HBM3/HBM3E), MCR DIMM, DDR5 RDIMM, and LPDDR CAMM products on display at SK hynix's booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055149/SK-hynix_OCP-Global-Summit-2023_image_03.png" alt="HBM(HBM3/HBM3E), MCR DIMM, DDR5 RDIMM, and LPDDR CAMM products on display at SK hynix's booth" width="1000" height="1072" /></p>
<p class="source">▲ Figure 2. HBM (HBM3/HBM3E), MCR DIMM, DDR5 RDIMM, and LPDDR CAMM products on display at SK hynix&#8217;s booth</p>
<p>&nbsp;</p>
<p>SK hynix presented a broad range of its solutions at the summit, including its leading HBM<sup>1</sup>(HBM3/3E), CXL<sup>2</sup>, and AiM<sup>3</sup> products for generative AI. The company also unveiled some of the latest additions to its product portfolio including its DDR5 RDIMM, MCR DIMM, enterprise SSD (eSSD), and LPDDR CAMM<sup>4</sup> devices.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><em style="font-weight: bold;">High Bandwidth Memory (HBM)</em><em>: A high-value, high-performance product that possesses much higher data processing speeds compared to existing DRAMs by vertically connecting multiple DRAMs with through-silicon via (TSV).<br />
</em><sup>2</sup><em style="font-weight: bold;">Compute Express Link (CXL)</em><em>: A next-generation memory solution that increases the memory bandwidth of server systems with traditional DRAM products to improve performance and easily expand memory capacity.<br />
</em><sup>3</sup><em style="font-weight: bold;">Accelerator in Memory (AiM)</em><em>: SK hynix’s PIM semiconductor product name, which includes GDDR6-AiM.<br />
</em><sup>4</sup><em style="font-weight: bold;">Low Power Double Data Rate Compression Attached Memory Module (LPDDR CAMM)</em>: <em>A solution </em><em>developed based on the LPDDR package in line with the next-generation memory standard (CAMM) for laptops and mobile devices. Compared to conventional So-DIMM modules, LPDDR CAMM has a single-sided configuration which is half as thick and offers improved capacity and power efficiency.</em></p>
<p>Visitors to the HBM exhibit could see HBM3, which is utilized in NVIDIA’s H100, a high-performance GPU for AI, and also check out <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-develops-worlds-best-performing-hbm3e/" target="_blank" rel="noopener noreferrer">the next-generation HBM3E</a></span>. Due to their low power consumption and ultra-high performance, these HBM solutions are more eco-friendly and are particularly suitable for power-hungry AI server systems.</p>
<p>&nbsp;</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055206/SK-hynix_OCP-Global-Summit-2023_image_04.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="CXL-based CMS 2.0, pooled memory, and memory expander solutions being demonstrated at the event" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055245/SK-hynix_OCP-Global-Summit-2023_image_06.png" alt="CXL-based CMS 2.0, pooled memory, and memory expander solutions being demonstrated at the event" width="1600" height="1072" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">▲ Figure 3. CXL-based CMS 2.0, pooled memory, and memory expander solutions being demonstrated at the event</p>
<p>&nbsp;</p>
<p>Three of SK hynix’s key CXL products were demonstrated at the event, including its CXL-based <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-introduces-industrys-first-cxl-based-cms-at-the-ocp-global-summit/" target="_blank" rel="noopener noreferrer">computational memory solution (CMS)</a></span><sup>5</sup> 2.0. A next-generation memory solution that integrates computational functions into CXL memory, CMS 2.0 uses NMP<sup>6</sup> to minimize data movement between the CPU and memory. CMS 2.0 was applied to SK Telecom’s spatial data analysis and visualization solution based on Lightning DB, which analyzes foot traffic in real time, demonstrating that CMS 2.0 delivers data processing performance comparable to that of a server CPU. Moreover, the demonstration highlighted that CMS 2.0 can improve data processing performance and energy efficiency.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><em style="font-weight: bold;">Computational Memory Solution (CMS):</em>  <em>A </em><em>memory solution that offers the functions of machine learning and data filtering, which are frequently performed by big data analytics applications. Just like CXL, CMS’s memory capacity is highly scalable.<br />
</em><sup>6</sup><em style="font-weight: bold;">Near-Memory Processing (NMP):</em><em> A memory architecture which </em><em>moves the compute capability next to the main memory to address CPU memory bottlenecks and improve processing performance.</em></p>
<p>The company also demonstrated its CXL-based pooled memory solution which can significantly reduce idle memory usage and shorten the overhead time of data movement in distributed processing environments such as AI and big data. The demonstration, which featured a technological collaboration with software provider MemVerge, showcased how CXL-based pooled memory solutions can improve system performance in AI and big data distributed processing systems.</p>
<p>Additionally, SK hynix demonstrated its CXL-based memory expander applied to Meta’s software caching engine, CacheLib, which showed how the CXL-based memory solution optimizes software for improved performance and cost savings.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="PIM and AiMX on display at the summit" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055302/SK-hynix_OCP-Global-Summit-2023_image_07.png" alt="PIM and AiMX on display at the summit" width="1600" height="1072" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="PIM and AiMX on display at the summit" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055319/SK-hynix_OCP-Global-Summit-2023_image_08.png" alt="PIM and AiMX on display at the summit" width="1600" height="1072" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">▲ Figure 4. PIM and AiMX on display at the summit</p>
<p>&nbsp;</p>
<p>AiM, a game-changing technology that redefines memory&#8217;s role in AI inference, was also presented at the booth. A subset of SK hynix&#8217;s PIM<sup>7</sup> portfolio, AiM promises to revolutionize machine learning, high-performance computing, and big data applications while reducing operating costs. The company demonstrated its <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer">GDDR6-AiM</a></span>, a solution which incorporates computational capabilities into the memory and improves the performance and efficiency of data-intensive generative AI inference systems. Additionally, a demonstration was held for the <a href="https://news.skhynix.com/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">prototype of AiMX</span></a><sup>8</sup>, an innovative generative AI accelerator card based on GDDR6-AiM.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>7</sup><em style="font-weight: bold;">Processing-In-Memory (PIM)</em><em>: An advanced technology that combines computational capabilities and memory on a single die to deliver enhanced computing performance.<br />
</em><sup>8</sup><strong><em>Accelerator-in-Memory based Accelerator (AiMX)</em></strong>: <em>SK hynix’s accelerator card featuring a GDDR6-AiM chip which is specialized for large language models.</em></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13184 size-full" title="PS1010 E3.S, a PCIe Gen5 eSSD product, showcased at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19054956/SK-hynix_OCP-Global-Summit-2023_image_09.png" alt="PS1010 E3.S, a PCIe Gen5 eSSD product, showcased at the booth" width="1000" height="1072" /></p>
<p class="source">▲ Figure 5. PS1010 E3.S, a PCIe Gen5 eSSD product, showcased at the booth</p>
<p>&nbsp;</p>
<p>SK hynix also showcased the PS1010 E3.S, a PCIe (Peripheral Component Interconnect Express) Gen5-based eSSD. Designed for data-intensive applications, cloud computing, and AI workloads, the PS1010 E3.S promises improved performance and reliability with superior speed and lower carbon emissions compared to previous generations.</p>
<h3 class="tit">At the Sessions: Sharing Industry Insights &amp; Key Technologies</h3>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Hoshik Kim, fellow of the System Architecture Group in Memory Forest x&amp;D, holds a session on CXL technology" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055011/SK-hynix_OCP-Global-Summit-2023_image_10.png" alt="SK hynix’s Hoshik Kim, fellow of the System Architecture Group in Memory Forest x&amp;D, holds a session on CXL technology" width="1600" height="1072" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Hoshik Kim, fellow of the System Architecture Group in Memory Forest x&amp;D, holds a session on CXL technology" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055028/SK-hynix_OCP-Global-Summit-2023_image_11.png" alt="SK hynix’s Hoshik Kim, fellow of the System Architecture Group in Memory Forest x&amp;D, holds a session on CXL technology" width="1600" height="1072" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">▲ Figure 6. SK hynix’s Hoshik Kim, fellow of the System Architecture Group in Memory Forest x&amp;D, holds a session on CXL technology</p>
<p>&nbsp;</p>
<p>Through thought-provoking talks and sessions at the summit, SK hynix also shared its vision for the future development of next-generation memory solutions.</p>
<p>In a panel session titled “Data Central Computing, Present and Future”, Hoshik Kim, fellow of the System Architecture Group in Memory Forest x&amp;D, joined other industry experts to discuss computing and programming models. Kim also held an executive session in which he revealed SK hynix’s progress in developing CXL technology and shared the company’s vision under the theme of “CXL: A Prelude to a Memory-Centric Computing”.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Youngpyo Joo of Memory Forest x&amp;D giving talks on CXL-based solutions" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055041/SK-hynix_OCP-Global-Summit-2023_image_12.png" alt="SK hynix’s Youngpyo Joo of Memory Forest x&amp;D giving talks on CXL-based solutions" width="1600" height="1072" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Dongwuk Moon of Memory Forest x&amp;D giving talks on CXL-based solutions" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055055/SK-hynix_OCP-Global-Summit-2023_image_13.png" alt="SK hynix’s Dongwuk Moon of Memory Forest x&amp;D giving talks on CXL-based solutions" width="1600" height="1072" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Jungmin Choi of Memory Forest x&amp;D giving talks on CXL-based solutions" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19234058/SK-hynix_OCP-Global-Summit-2023_Image_16.png" alt="SK hynix’s Jungmin Choi of Memory Forest x&amp;D giving talks on CXL-based solutions" width="1000" height="670" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">▲ Figure 7. SK hynix’s Youngpyo Joo, Dongwuk Moon and Jungmin Choi of Memory Forest x&amp;D giving talks on CXL-based solutions</p>
<p>&nbsp;</p>
<p>The potential of CXL was also the focal point of three other sessions held by the company’s employees from the Memory x&amp;D department. Youngpyo Joo, head of the Software Solutions Group, explored how CXL-based computational memory solution architecture can offer increased usability and flexibility.</p>
<p>Meanwhile, Dongwuk Moon, technical leader of the Platform Software team, introduced how emerging CXL devices can enhance memory capacity and bandwidth at the caching layer for technologies such as AI and web services. In addition, Jungmin Choi, technical leader of the Composable Memory team, covered how the latest CXL technology will further support the disaggregation of memory. Choi also emphasized that this technology can not only solve problems currently faced by data centers, such as idle memory, but also improve system performance.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13189 size-full" title="Euicheol Lim, head of SK hynix’s Memory Solution Product Design Group, discusses PIM-based AiM and CIM technologies" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/10/19055109/SK-hynix_OCP-Global-Summit-2023_image_14.png" alt="Euicheol Lim, head of SK hynix’s Memory Solution Product Design Group, discusses PIM-based AiM and CIM technologies" width="1000" height="1072" /></p>
<p class="source">▲ Figure 8. Euicheol Lim, head of SK hynix’s Memory Solution Product Design Group, discusses PIM-based AiM and CIM technologies</p>
<p>&nbsp;</p>
<p>In another engaging session, Euicheol Lim, head of the Memory Solution Product Design Group, explored the potential for PIM technology to meet the high memory and computational demands of AI. Lim highlighted key technologies based on PIM, including SK hynix’s PIM-based AiM accelerator.</p>
<h3 class="tit">Advanced Solutions to Unlock Next-Generation Technologies</h3>
<p>As the OCP Global Summit 2023 drew to a close, SK hynix reinforced its commitment to developing groundbreaking solutions that can help realize advanced technologies such as AI. Going forward, the company will continue to tackle industry challenges and make technological breakthroughs for the AI era as the global No. 1 AI memory solution provider.</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-displays-next-gen-solutions-to-unlock-ai-at-ocp-global-summit-2023/">SK hynix Displays Next-Gen Solutions Set to Unlock AI & More at OCP Global Summit 2023</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card &#8216;AiMX&#8217; for Generative AI</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 18 Sep 2023 00:00:48 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[AI Hardware & Edge AI Summit]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Generative AI accelerator]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=12888</guid>

					<description><![CDATA[<p>SK hynix unveiled and demonstrated a prototype of AiMX1, a generative AI accelerator2 card based on GDDR6-AiM, at the AI Hardware &#38; Edge AI Summit 2023 held September 12–14 at the Santa Clara Marriott, California. 1Accelerator-in-Memory based Accelerator (AiMX): SK hynix&#8217;s accelerator card product that specializes in large language models (AI that learns with large [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/">SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card ‘AiMX’ for Generative AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>SK hynix unveiled and demonstrated a prototype of AiMX<sup>1</sup>, a generative AI accelerator<sup>2</sup> card based on GDDR6-AiM, at the AI Hardware &amp; Edge AI Summit 2023 held September 12–14 at the Santa Clara Marriott, California.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Accelerator-in-Memory based Accelerator (AiMX)</strong>: SK hynix&#8217;s accelerator card product that uses GDDR6-AiM chips and specializes in large language models (AI, such as ChatGPT, that learns from large amounts of text data).</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>2</sup><strong>Accelerator</strong>: A special-purpose hardware device that uses a chip designed specifically for processing and computing information.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094216/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_01.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094227/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_02.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094237/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_03.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094247/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_04.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 1. SK hynix&#8217;s exhibition booth at the AI Hardware &amp; Edge AI Summit 2023</p>
<p>&nbsp;</p>
<p>Hosted annually by the UK marketing firm Kisaco Research, the AI Hardware &amp; Edge AI Summit brings together global IT companies and high-profile startups to share their developments in artificial intelligence and machine learning. This is SK hynix’s third time participating in the summit.</p>
<p>At the event, the company showcased the prototype of AiMX, an accelerator card that combines multiple <a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">GDDR6-AiMs</span></a> to further enhance performance, along with the GDDR6-AiM itself, under the slogan &#8220;Boost Your AI: Discover the Power of PIM<sup>3</sup> with SK hynix&#8217;s AiM<sup>4</sup>.&#8221;</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup><strong>Processing-In-Memory (PIM)</strong>: A next-generation technology that adds computational capabilities to semiconductor memories to solve the problem of data movement congestion in AI and big data processing.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>4</sup><strong>Accelerator in Memory (AiM)</strong>: SK hynix&#8217;s PIM semiconductor product name, which includes GDDR6-AiM.</p>
<p class="source" style="text-align: center;"><img loading="lazy" decoding="async" class="size-full wp-image-12913 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08.png" alt="The AiMX card utilizes multiple GDDR6-AiM chips for enhanced performance" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 2. The prototype AiMX card utilizes multiple GDDR6-AiM chips for enhanced performance</p>
<p>&nbsp;</p>
<p>As a low-power, high-speed memory solution capable of handling large amounts of data, AiMX is set to play a key role in the advancement of data-intensive generative AI<sup>5</sup> systems. The performance of generative AI improves as it is trained on more data, highlighting the need for high-performance products that can be applied across an array of generative AI systems.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>Generative AI</strong>: AI that learns from large amounts of data to actively generate results based on a user&#8217;s specific needs.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-12910" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05.png" alt="Demonstrating a large AI language model with AiMX that utilizes GDDR6-AiM" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 3. Demonstrating a large AI language model with AiMX that utilizes GDDR6-AiM</p>
<p>&nbsp;</p>
<p>SK hynix also demonstrated Meta&#8217;s generative AI Open Pretrained Transformer (OPT) 13B model on a server system equipped with the AiMX prototype. The GDDR6-AiM-based AiMX system processes data more than 10 times faster than GPU-based systems while consuming one-fifth the power. The company&#8217;s demonstration piqued the interest of global AI service providers by showing that AiMX can deliver higher performance<sup>6</sup> than the most recent accelerators.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>6</sup>Performance is based on the condition that the AiM Control Hub inside the AiMX card is developed as an application-specific integrated circuit (ASIC).</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094308/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_06.png" alt="Eui-cheol Lim, vice president of SK hynix’s Solution Development division, delivers a presentation on AiMX" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094319/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_07.png" alt="Eui-cheol Lim, vice president of SK hynix’s Solution Development division, delivers a presentation on AiMX" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 4. Eui-cheol Lim, vice president of SK hynix’s Solution Development division, delivers a presentation on AiMX</p>
<p>&nbsp;</p>
<p>In addition, the company held a session outlining the benefits of AiMX. In a presentation titled &#8220;Cost-Effective Generative AI Inference Acceleration using AiM,&#8221; Eui-cheol Lim, vice president of the Solution Development division, compared the performance of GPUs and AiMX and discussed the future of next-generation intelligent semiconductor memories.</p>
<p>&#8220;SK hynix&#8217;s AiMX is a solution that delivers higher performance while consuming less power and costing less than conventional GPUs,&#8221; Lim explained. &#8220;We will continue to develop memory technologies that will lead the way in the era of artificial intelligence.&#8221;</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/">SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card ‘AiMX’ for Generative AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How SK hynix is Set to Power the Generative AI Revolution</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/how-sk-hynix-is-set-to-power-the-generative-ai-revolution/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Tue, 23 May 2023 06:00:52 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[HBM3]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[SK hynix]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=11709</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="size-full wp-image-11738 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/22003547/SK-hynix_Generative-AI-Infographic_EN_013.gif" alt="" width="1000" height="1132" /><img loading="lazy" decoding="async" class="size-full wp-image-11736 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19040511/SK-hynix_Generative-AI-Infographic_EN_021.gif" alt="" width="1000" height="1044" /><img loading="lazy" decoding="async" class="size-full wp-image-11714 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19021510/SK-hynix_Generative-AI-Infographic_EN_03.gif" alt="" width="1000" height="885" /><img loading="lazy" decoding="async" class="size-full wp-image-11715 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19021552/SK-hynix_Generative-AI-Infographic_EN_04.gif" alt="" width="1000" height="810" /><img loading="lazy" decoding="async" class="size-full wp-image-11716 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19021614/SK-hynix_Generative-AI-Infographic_EN_05.gif" alt="" width="1000" height="1078" /><img loading="lazy" decoding="async" class="size-full wp-image-11717 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19021727/SK-hynix_Generative-AI-Infographic_EN_06.gif" alt="" width="1000" height="856" /><img loading="lazy" decoding="async" class="size-full wp-image-11718 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19021745/SK-hynix_Generative-AI-Infographic_EN_07.gif" alt="" width="1000" height="765" /><img loading="lazy" decoding="async" class="size-full wp-image-11732 aligncenter" style="margin: 0;" 
src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19030037/SK-hynix_Generative-AI-Infographic_EN_0809.gif" alt="" width="1000" height="1434" /><img loading="lazy" decoding="async" class="size-full wp-image-11721 aligncenter" style="margin: 0;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/05/19021829/SK-hynix_Generative-AI-Infographic_EN_10.gif" alt="" width="1000" height="666" /></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/how-sk-hynix-is-set-to-power-the-generative-ai-revolution/">How SK hynix is Set  to Power the Generative AI Revolution</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Presents Its Green Digital Solution at CES 2023</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-at-ces-2023/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 06 Jan 2023 06:00:51 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[PS1010 E3.S]]></category>
		<category><![CDATA[Green Digital Solution]]></category>
		<category><![CDATA[Carbon-Free Future]]></category>
		<category><![CDATA[CES2023]]></category>
		<category><![CDATA[PRISM]]></category>
		<category><![CDATA[SK hynix]]></category>
		<category><![CDATA[ESG]]></category>
		<category><![CDATA[HBM3]]></category>
		<category><![CDATA[TeamSK]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[CXL]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=10694</guid>

					<description><![CDATA[<p>CES has returned to the U.S. desert this year as the world’s biggest tech event kicked off at the Las Vegas Convention Center on January 5th. Featuring over 3,000 exhibitors from more than 170 countries, CES 2023 is set to attract thousands of attendees from around the world through January 8th. SK hynix along with [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-at-ces-2023/">SK hynix Presents Its Green Digital Solution at CES 2023</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>CES has returned to the U.S. desert this year as the world’s biggest tech event kicked off at the Las Vegas Convention Center on January 5<sup>th</sup>. Featuring over 3,000 exhibitors from more than 170 countries, CES 2023 is set to attract thousands of attendees from around the world through January 8<sup>th</sup>. SK hynix, along with seven other SK affiliates, is showcasing its latest innovations across 1,223 square meters of exhibition space. As the past two events in 2021 and 2022 took place online and partially online, respectively, this is the first time in three years that some tech giants are attending in person to discuss key issues, including how innovation is addressing global challenges.</p>
<h3 class="tit">SK Group’s Vision of a Carbon-Free Future</h3>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045040/01_CES_2023-SK-group-Exhibition.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045049/02_CES_2023-SK-group-Exhibition.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045057/03_CES_2023-SK-group-Exhibition.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045106/04_CES_2023-SK-group-Exhibition.png" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">Figure 1. SK hynix and seven other affiliates of SK group presented their &#8216;Net Zero&#8217; aspiration under the slogan &#8220;Together in Action&#8221;.</p>
<p>&nbsp;</p>
<p>SK companies have teamed up with 10 US-based partners to present their booth under the slogan “Together in Action,” the group’s message to turn their Net Zero aspiration into action. At CES 2022, the group committed to reducing carbon emissions by 200 million tons by 2030, or 1% of the global carbon reduction target, while maintaining its goal of achieving Net Zero by 2050.</p>
<p>This year, eight of SK’s affiliate companies are showcasing a total of 40 carbon-reducing technologies as part of the group’s vision of a carbon-free future. SK hynix is expected to attract major tech customers and experts to its booth with its “Green Digital Solution,” the theme of its product lineup at this year’s event.</p>
<h3 class="tit">A Green Digital Solution</h3>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045324/01-Products-Overview.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045336/02-PS1010-E3.S.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045401/04-PIM-GDDR6-AiM.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045414/05-CXL-Memory.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045348/03-HBM3.png" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">Figure 2. SK hynix is exhibiting a range of new and core products which offer significant performance improvements while lessening their environmental impact.</p>
<p>&nbsp;</p>
<p>SK hynix is exhibiting a range of new and core products at this year’s CES, which offer significant improvements in performance and energy efficiency while lessening the impact on the environment. These products meet the growing needs of global tech companies, which require more powerful memory solutions for evolving advanced technologies such as AI, Big Data, autonomous driving, and the Metaverse.</p>
<p>The focal point of SK hynix’s exhibition is the PS1010 E3.S, the company’s latest enterprise solid-state drive (eSSD), which is being launched at CES 2023. The product is composed of multiple 176-layer 4D NAND chips and supports the PCIe 5.0 (Peripheral Component Interconnect Express) interface. The PS1010 E3.S, which includes a self-developed controller and firmware, offers read and write speeds that are 130% and 49% faster, respectively, than the previous generation of products. Moreover, as its performance per watt is 75% higher than its predecessors&#8217;, it will help reduce server operating costs and cut carbon emissions. The product is set to strengthen SK hynix’s competitiveness in the NAND sector.</p>
<p>Visitors to SK hynix’s booth can also see its <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-to-supply-industrys-first-hbm3-dram-to-nvidia/" target="_blank" rel="noopener noreferrer">HBM3, the world’s best-performing DRAM</a></span> developed and mass produced by the company in an industry first. As a Gen 4 HBM (High Bandwidth Memory) product, it offers increased data processing speeds compared to existing DRAMs, while also improving power efficiency by 23% compared to HBM2. Displayed alongside the HBM3 is another leading innovation, <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer">the GDDR6-AiM</a></span>. This AI accelerator offers its own computational capabilities, significantly increasing certain computation speeds while reducing energy consumption by 80% compared to existing products.</p>
<p>Last summer, SK hynix reinforced its superiority in <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-develops-ddr5-dram-cxltm-memory-to-expand-the-cxl-memory-ecosystem/" target="_blank" rel="noopener noreferrer">Compute Express Link (CXL) solutions</a></span> by developing its first DDR5 DRAM-based CXL memory samples, which allow flexible expansion of memory capacity and performance. The company also collaborated with SK Telecom to produce its Computational Memory Solution (CMS), the industry’s first CXL memory that is integrated with computational functions. SK hynix’s CXL portfolio, which is planned for mass production later this year, can also be seen at CES.</p>
<p>&nbsp;</p>
<h3 class="tit">Sustainability Initiatives</h3>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-10705" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06043946/Sk-hynix_Sketch-Article_sustainbility-initiatives.png" alt="" width="1000" height="660" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06043946/Sk-hynix_Sketch-Article_sustainbility-initiatives.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06043946/Sk-hynix_Sketch-Article_sustainbility-initiatives-606x400.png 606w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06043946/Sk-hynix_Sketch-Article_sustainbility-initiatives-768x507.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Figure 3. Visitors experienced SK hynix&#8217;s innovative technology presented under the theme of a &#8220;Green Digital Solution&#8221;.</p>
<p>&nbsp;</p>
<p>It is clear from the SK hynix product lineup at CES 2023 that the company is focused on pushing the limits of technology while promoting sustainability in order to solve market and environmental challenges. These products align with SK hynix’s overall ESG strategy, which drives the company’s future plans.</p>
<p>One of the key aspects of SK hynix’s ESG initiatives is undoubtedly <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-prism-framework/" target="_blank" rel="noopener noreferrer">the PRISM framework</a></span> which was unveiled last year to enhance its ESG management. Detailing SK hynix’s ESG goals, PRISM relays the message that the company will transparently communicate with stakeholders while also reflecting its goal of spreading a positive influence around the world.</p>
<p>As a leader in the semiconductor industry’s fight against climate change, SK hynix joined <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-becomes-founding-member-of-scc/" target="_blank" rel="noopener noreferrer">the Semiconductor Climate Consortium (SCC)</a></span> as a founding member last October. The SCC is the first global consultative body formed to reduce greenhouse gas emissions throughout the semiconductor value chain. SK hynix’s efforts to tackle greenhouse gases are further exemplified by its carbon footprint certification for its eSSD and cSSD products. Issued by the U.K. climate change organization Carbon Trust last summer, the certification assesses the impact of carbon emissions throughout the entire lifecycle of a product.</p>
<h3 class="tit">Building a Greener Tomorrow</h3>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-10715" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045535/Figure-4.-Building-a-Greener-Tomorrow.png" alt="" width="1000" height="660" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045535/Figure-4.-Building-a-Greener-Tomorrow.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045535/Figure-4.-Building-a-Greener-Tomorrow-606x400.png 606w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/01/06045535/Figure-4.-Building-a-Greener-Tomorrow-768x507.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Figure 4. SK hynix is working with other SK affiliates to build a greener tomorrow.</p>
<p>&nbsp;</p>
<p>At a time when people are concerned about the environmental impact of the products they use, SK hynix is responding to their needs with the range of energy-efficient, high-performance products displayed at CES 2023. SK hynix is using this opportunity to discuss key industry and environmental issues with fellow industry insiders and tech consumers to build a greener tomorrow.</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-at-ces-2023/">SK hynix Presents Its Green Digital Solution at CES 2023</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix to Showcase Energy-Efficient, High-Performance Memory Products at CES 2023</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-to-showcase-energy-efficient-high-performance-memory-products-at-ces-2023/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 26 Dec 2022 23:30:48 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Press Release]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[HBM3]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[CXL]]></category>
		<category><![CDATA[PS1010]]></category>
		<category><![CDATA[CES2023]]></category>
		<category><![CDATA[Green Digital Solution]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=10586</guid>

					<description><![CDATA[<p>News Highlights Core and brand-new products introduced under the theme of “Green Digital Solution” Introduction of eSSD with ultrahigh-performance to solidify SK hynix’s leadership in server memory market Solution to solve customers’ pain point proposed Seoul, December 27, 2022 SK hynix Inc. (or “the company”, www.skhynix.com) announced today that it will showcase a number of [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-to-showcase-energy-efficient-high-performance-memory-products-at-ces-2023/">SK hynix to Showcase Energy-Efficient, High-Performance Memory Products at CES 2023</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<h3 class="tit">News Highlights</h3>
<ul style="color: #000; font-size: 18px; padding-left: 20px;">
<li>Core and brand-new products introduced under the theme of “Green Digital Solution”</li>
<li>Introduction of ultrahigh-performance eSSD to solidify SK hynix’s leadership in the server memory market</li>
<li>Solution proposed to solve customers’ pain points</li>
</ul>
<h3 class="tit">Seoul, December 27, 2022</h3>
<p>SK hynix Inc. (or “the company”, <a href="https://urldefense.com/v3/__https:/www.skhynix.com/eng/main.do__;!!N96JrnIq8IfO5w!mA80I9OXgyLho-eXDg2fttNQQBXKvVfOSZvkXNmFsgQDbCQq6zwGJB84bBRElqnJHAiFZkquLcIEPfIGPD46jqgrwXPETlQ$" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">www.skhynix.com</span></a>) announced today that it will showcase a number of its core and brand-new products at the CES 2023, the most influential tech event in the world taking place in Las Vegas from Jan 5<sup>th</sup> through Jan 8<sup>th</sup>.</p>
<p>The products, introduced under the theme of the “Green Digital Solution,” as part of the SK Group’s “Carbon-Free Future” campaign, are expected to attract Big Tech customers and experts given the significant improvement in performance and energy efficiency compared with the previous generation as well as the effect of lessening the impact on the environment.</p>
<p>Attention to energy-efficient memory chips has been on the rise as global tech companies pursue products that process data faster while consuming less energy. SK hynix is confident that the products it will display at CES 2023 will meet such customer needs with outstanding performance and performance per watt*.</p>
<p style="font-size: 14px; font-style: italic; color: #555;">* Performance per watt: an indicator of how much computation is performed per watt of power consumed.</p>
<p>The core product put forward at the show is PS1010 E3.S, an eSSD product composed of multiple 176-layer 4D NAND that supports the fifth generation of the PCIe* interface.</p>
<p style="font-size: 14px; font-style: italic; color: #555;">*PCIe (Peripheral Component Interconnect Express): a high-speed serial input/output interface used in the mainboards of digital devices. PCIe’s data-transfer speed doubles with each generation shift.</p>
<h3 class="tit"><img loading="lazy" decoding="async" class="size-full wp-image-10594 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/12/26093443/CES2023_SK-hynix-Products.png" alt="" width="1000" height="707" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/12/26093443/CES2023_SK-hynix-Products.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/12/26093443/CES2023_SK-hynix-Products-566x400.png 566w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/12/26093443/CES2023_SK-hynix-Products-768x543.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></h3>
<p>SK hynix said that the introduction of the PS1010, which combines the company’s industry-leading technologies, is well timed as the server chip market continues to grow despite the current industry downturn.</p>
<p>The PS1010 improves read and write speeds by 130% and 49%, respectively, compared with the previous generation. Its performance per watt is also improved by more than 75%, helping customers reduce server operating costs and carbon emissions.</p>
<p>“We’re proud to launch PS1010, an ultrahigh-performance product with a self-developed controller and firmware, at CES 2023, the world’s largest technology show,” Yun Jae Yeun, Head of NAND Product Planning, said. “This product will solve the pain points of our server-chip customers, while paving the way for stronger competitiveness in our NAND business.”</p>
<p>Other products to be introduced at the show are HBM3*, a memory product with the world’s best specifications for high-performance computing; GDDR6-AiM, which adopts PIM* technology; and CXL* memory, capable of flexibly expanding memory capacity and performance.</p>
<p style="font-size: 14px; font-style: italic; color: #555;">* HBM (High Bandwidth Memory): High-value, high-performance memory that vertically interconnects multiple DRAM chips and dramatically increases data processing speed in comparison to traditional DRAM products.</p>
<p style="font-size: 14px; font-style: italic; color: #555;">* PIM (Processing In Memory): A next-generation technology that provides a solution for data congestion issues for AI and big data by adding computational functions to semiconductor memory.</p>
<p style="font-size: 14px; font-style: italic; color: #555;">* CXL (Compute Express Link): A PCIe-based next-generation interconnect protocol on which high-performance computing systems are based.</p>
<p>SK hynix will also present the immersion cooling* technology of SK enmove, which specializes in energy efficiency. The technology, designed to dissipate the heat generated by servers during operation, marks a successful case in which SK hynix cooperated with other SK companies and external business partners to create new value in the semiconductor business.</p>
<p style="font-size: 14px; font-style: italic; color: #555;">* Immersion Cooling: A next-generation thermal-management technology that cools data servers by submerging them in cooling oil. This way, total electricity consumption can be reduced by 30% compared with existing technology that uses air for cooling.</p>
<h3 class="tit">About SK hynix Inc.</h3>
<p>SK hynix Inc., headquartered in Korea, is the world’s top tier semiconductor supplier offering Dynamic Random Access Memory chips (“DRAM”), flash memory chips (&#8220;NAND flash&#8221;) and CMOS Image Sensors (&#8220;CIS&#8221;) for a wide range of distinguished customers globally. The Company’s shares are traded on the Korea Exchange, and the Global Depository shares are listed on the Luxembourg Stock Exchange. Further information about SK hynix is available at <span style="text-decoration: underline;"><a href="https://urldefense.com/v3/__https:/www.skhynix.com/eng/main.do__;!!N96JrnIq8IfO5w!mA80I9OXgyLho-eXDg2fttNQQBXKvVfOSZvkXNmFsgQDbCQq6zwGJB84bBRElqnJHAiFZkquLcIEPfIGPD46jqgrwXPETlQ$" target="_blank" rel="noopener noreferrer">www.skhynix.com</a></span>, <span style="text-decoration: underline;"><a href="https://urldefense.com/v3/__https:/news.skhynix.com/__;!!N96JrnIq8IfO5w!mA80I9OXgyLho-eXDg2fttNQQBXKvVfOSZvkXNmFsgQDbCQq6zwGJB84bBRElqnJHAiFZkquLcIEPfIGPD46jqgroMl7UVQ$" target="_blank" rel="noopener noreferrer">news.skhynix.com</a></span>.</p>
<h3 class="tit">Media Contact</h3>
<p>SK hynix Inc.<br />
Global Public Relations</p>
<p><em>Technical Leader</em><br />
Kanga Kong<br />
E-Mail: <a href="mailto:global_newsroom@skhynix.com">global_newsroom@skhynix.com</a></p>
<p><em>Technical Leader</em><br />
Jaehwan Kevin Kim<br />
E-Mail: <a href="mailto:global_newsroom@skhynix.com">global_newsroom@skhynix.com</a></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-to-showcase-energy-efficient-high-performance-memory-products-at-ces-2023/">SK hynix to Showcase Energy-Efficient, High-Performance Memory Products at CES 2023</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Successfully Establishes Itself as a Total Solution Provider at OCP Global Summit 2022</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-at-ocp-global-summit-2022/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 20 Oct 2022 08:00:27 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[PCIe Gen5]]></category>
		<category><![CDATA[CMS]]></category>
		<category><![CDATA[OCP Global Summit]]></category>
		<category><![CDATA[CXL]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[DDR5]]></category>
		<category><![CDATA[PS1010]]></category>
		<category><![CDATA[Computer Memory Solution]]></category>
		<category><![CDATA[Open Computer Project]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=10100</guid>

					<description><![CDATA[<p>As the digital era evolves, so does its complexity. Among the aspects that are undergoing the most significant changes is the data center environment, which today demands higher density and efficiency levels than ever before. The Open Compute Project (OCP) is an initiative that proposes a new direction in which both software and hardware disciplines [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-at-ocp-global-summit-2022/">SK hynix Successfully Establishes Itself as a Total Solution Provider at OCP Global Summit 2022</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>As the digital era evolves, so does its complexity. Among the aspects that are undergoing the most significant changes is the data center environment, which today demands higher density and efficiency levels than ever before.</p>
<p>The <span style="text-decoration: underline;"><a href="https://www.opencompute.org/" target="_blank" rel="noopener noreferrer">Open Compute Project</a></span> (OCP) is an initiative that proposes a new direction in which both software and hardware disciplines are tightly connected in order to efficiently address the increasing requirements of compute infrastructure. It gathers major decision-makers, technologists, engineer developers, and suppliers to exchange insights, discuss challenges, and provide solutions in and around the data center.</p>
<p>The OCP is also characterized by open source and open collaboration, with a global committee covering everywhere from the telecommunications industry to edge infrastructure, and everything in between.</p>
<p>The <span style="text-decoration: underline;"><a href="https://www.opencompute.org/summit/global-summit" target="_blank" rel="noopener noreferrer">OCP Global Summit</a></span>, held in the Fall each year, provides a unique opportunity to see and explore the various ways in which the data center industry is innovating and evolving. SK hynix attended the 2022 OCP Global Summit in San Jose, California, from October 18 to 20, to demonstrate its latest products, technologies, and ideas.</p>
<h3 class="tit">SK hynix, a Proud Supporter of OCP</h3>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-10101" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20015814/SK-hynix_OCP-2022-_Event-Sketch-01.png" alt="" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20015814/SK-hynix_OCP-2022-_Event-Sketch-01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20015814/SK-hynix_OCP-2022-_Event-Sketch-01-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20015814/SK-hynix_OCP-2022-_Event-Sketch-01-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20015814/SK-hynix_OCP-2022-_Event-Sketch-01-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Image 1. SK hynix’s booth at OCP Global Summit 2022 drew enthusiasm from participants and visitors</p>
<p>&nbsp;</p>
<p>As a Ruby Sponsor, SK hynix set up a booth at OCP with the theme “Driving New Level of Performance for your Ultimate Data Experience,” showcasing its latest research developments in next-generation memory technologies. SK hynix has attended every year since 2016, and this year’s event was particularly memorable as it reverted to a face-to-face format for the first time since the COVID-19 pandemic began.</p>
<p>At the event, the company unveiled the PCIe Gen5 PS1010 for the first time as well as highlighted offerings such as the <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-develops-ddr5-dram-cxltm-memory-to-expand-the-cxl-memory-ecosystem/" target="_blank" rel="noopener noreferrer">DDR5 DRAM-based CXL (Compute Express Link) memory solution</a></span> and next-generation intelligent semiconductor <span style="text-decoration: underline;"><a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer">GDDR6-AiM</a></span>, demonstrating how the company has truly become a total solution provider.</p>
<h3 class="tit">Demonstrating a Complete Solution Portfolio</h3>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021009/SK-hynix_OCP-2022-_Event-Sketch-12.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021021/SK-hynix_OCP-2022-_Event-Sketch-06.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021032/SK-hynix_OCP-2022-_Event-Sketch-03.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021042/SK-hynix_OCP-2022-_Event-Sketch-02.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021053/SK-hynix_OCP-2022-_Event-Sketch-05.png" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021105/SK-hynix_OCP-2022-_Event-Sketch-04.png" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source">Image 2. SK hynix demonstrated a portfolio of products on-site at the OCP Global Summit 2022 including a first look at PS1010</p>
<p>&nbsp;</p>
<p>Four products were demonstrated by SK hynix at the OCP Global Summit 2022, starting with the PS1010. The PS1010 is a PCIe (Peripheral Component Interconnect Express) Gen5 total solution product whose SoC, DRAM, NAND, and operations are all handled in-house, with data transfer speeds more than double those of the previous generation thanks to the PCIe Gen5 interface. It was the first time SK hynix unveiled the product, emphasizing its use of industry-leading V7 NAND to ensure cost and performance competitiveness.</p>
<p>Second to be featured was the company’s DDR5 DRAM-based CXL memory solution that integrates computational functions into CXL memory for the first time in the industry. The solution is expected to be installed on next-generation server platforms in order to improve system performance and energy efficiency.</p>
<p>Third, a sample of the EDSFF (Enterprise &amp; Data Center Standard Form Factor) E3.S was demonstrated, following the announcement of the development of the latest DDR5 24Gb DRAM-based 96GB CXL memory sample. The combination of DDR5 and CXL memory was shown to reduce memory execution time by 20% and increase bandwidth by 50%, delivering the effect of a 60% expansion in memory capacity.</p>
<p>Finally, a sample of GDDR6-AiM (Accelerator in Memory) demonstrated a large AI model running on memory-centric computing. GDDR6-AiM integrates processing functions into a standard DRAM, and can reduce energy consumption by up to 80% while providing up to 16 times the performance at a lower operating voltage.</p>
<h3 class="tit">Thought Leadership in Memory Solutions</h3>
<p><img loading="lazy" decoding="async" class="size-full wp-image-10109 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021115/SK-hynix_OCP-2022-_Event-Sketch-11.png" alt="" width="1000" height="562" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021115/SK-hynix_OCP-2022-_Event-Sketch-11.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021115/SK-hynix_OCP-2022-_Event-Sketch-11-680x382.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021115/SK-hynix_OCP-2022-_Event-Sketch-11-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Image 3. Thomas Won Ha Choi sharing SK hynix&#8217;s perspective on the value of its new CXL memory</p>
<p>&nbsp;</p>
<p>Besides the demonstrations that attracted a great deal of attention, SK hynix also participated in three session discussions at the OCP Global Summit. Thomas Won Ha Choi, Director of DRAM Product Planning at SK hynix, shared the company&#8217;s perspective on the new value of CXL memory, as well as its vision for planning and enabling the future development of robust CXL memory. He presented SK hynix’s various memory solutions to provide examples of how CXL devices can enhance memory system values.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-10110 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021133/SK-hynix_OCP-2022-_Event-Sketch-08.png" alt="" width="1000" height="752" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021133/SK-hynix_OCP-2022-_Event-Sketch-08.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021133/SK-hynix_OCP-2022-_Event-Sketch-08-532x400.png 532w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/10/20021133/SK-hynix_OCP-2022-_Event-Sketch-08-768x578.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">Image 4. Donguk Moon introduces SK hynix&#8217;s CXL-based CMS at his session discussion</p>
<p>&nbsp;</p>
<p>Donguk Moon, SK hynix&#8217;s Senior Technical Program Manager for Memory System Research, also introduced SK hynix’s CXL-based CMS, as well as collaboration with SK Telecom, one of Korea’s major telecommunications companies, to improve their <span style="text-decoration: underline;"><a href="https://lightningdb.io/" target="_blank" rel="noopener noreferrer">Lightning DB</a></span> real-time data management system by incorporating CMS into it.</p>
<h3 class="tit">A Future-Oriented Approach to Data Center Research</h3>
<p>While SK hynix has concluded its successful participation in this year’s OCP Global Summit, the company’s research continues beyond what was presented over the past few days. SK hynix looks forward to presenting further important development achievements and new products next year.</p>
<p>Observing the enthusiasm and passion shown by participants and visitors was inspiring, and further motivates SK hynix’s determination to serve as the ultimate total solution provider for the data center industry.</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-at-ocp-global-summit-2022/">SK hynix Successfully Establishes Itself as a Total Solution Provider at OCP Global Summit 2022</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Let PIM Do the Learning: The Brainpower Behind the AI Memory Chip</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/let-pim-do-the-learning-the-brainpower-behind-the-ai-memory-chip/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 17 Jun 2022 07:00:57 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[ISSCC]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=9359</guid>

					<description><![CDATA[<p>When IBM-developed computer Watson beat out its human competitors on the quiz show Jeopardy in 2011, it was thought to be the beginning of the end of the superior reign of human intelligence. Watson brought discussions of AI to the mainstream. Its ability to apply machine learning to gather and analyze massive amounts of data [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/let-pim-do-the-learning-the-brainpower-behind-the-ai-memory-chip/">Let PIM Do the Learning: The Brainpower Behind the AI Memory Chip</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="size-full wp-image-9360 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045057/SK-hynix_Let-PIM-Do-the-Learning_thumbnail.png" alt="" width="680" height="400" /></p>
<p>When IBM-developed computer Watson beat out its human competitors on the quiz show Jeopardy in 2011, it was thought to be the beginning of the end of the superior reign of human intelligence. Watson brought discussions of AI to the mainstream. Its ability to apply machine learning to gather and analyze massive amounts of data in a flash was something most thought exclusive to sci-fi.</p>
<p>Quintillions of bytes of data are now being generated each day, with the <a class="-as-ga" style="text-decoration: underline;" href="https://www.statista.com/statistics/871513/worldwide-data-created/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.statista.com/statistics/871513/worldwide-data-created/">amount of data generated by 2025</a> predicted to reach 181 zettabytes. While this volume of data far exceeds the realm of human consumption, cloud computing, faster processing, faster networks, and faster chips mean it can be processed and applied efficiently. AI isn’t a pipe dream &#8211; it’s a reality.</p>
<h3>From Synapses to Circuits</h3>
<p>Semiconductors supporting AI functions must capitalize on space and provide the means for parallel processing of complex tasks. Enter the Processing-in-Memory (PIM) chip, which integrates a processor with Random Access Memory (RAM) on a single memory module. This structure removes the boundary between memory and system semiconductors, allowing data storage and data processing to happen in the same place.</p>
<p>By eliminating the need for data to traverse modules, response times are greatly improved, allowing for <a class="-as-ga" style="text-decoration: underline;" href="https://www.techtarget.com/searchbusinessanalytics/definition/processing-in-memory-PIM" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.techtarget.com/searchbusinessanalytics/definition/processing-in-memory-PIM">real-time data processing.</a> More traditional computer architectures, which manage processing and storage in separate modules, often fall prey to latency issues, commonly referred to as the von Neumann bottleneck. Adding processing functions to memory semiconductors presents a unique solution to overcome this long-standing problem.</p>
<p>SK hynix <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/">unveiled its next-generation PIM</a> in February 2022 at ISSCC in San Francisco. The GDDR6-AiM (Accelerator in Memory) adds computational functions to GDDR6 memory chips, allowing for data to be processed at speeds of up to 16 Gbps.</p>
<p>GDDR6-AiM is also more energy efficient, reducing power consumption by 80% by removing data movement to the CPU and GPU. Advancing technology in a manner that supports a greener and more equitable world is an integral part of SK hynix’s future vision. GDDR6-AiM can help reduce carbon emissions and shrink the carbon footprint of any technology it’s applied to, advancing <a class="-as-ga" style="text-decoration: underline;" href="https://www.skhynix.com/sustainability/UI-FR-SA1601/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.skhynix.com/sustainability/UI-FR-SA1601/">SK hynix’s ESG-related goals</a> and expanding its positive impact across its clients’ industries.</p>
<p>While particularly effective in managing the needs of AI-based systems, PIM can be applied to a broad spectrum of technologies. Databases, query engines, data grids, and more all require some version of data storage and processing coupled with custom applications leveraging a variety of inputs.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-9361" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045059/GDDR6-AiM_01.jpg" alt="" width="1000" height="614" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045059/GDDR6-AiM_01.jpg 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045059/GDDR6-AiM_01-651x400.jpg 651w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045059/GDDR6-AiM_01-768x472.jpg 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">The next generation of smart memory</p>
<h3>Machine Learning vs. Deep Learning</h3>
<p>Unbeknownst to many, artificial intelligence is a broad term that describes the science of creating machines that think like humans. Machine learning refers to functionalities that enable computers to perform tasks without explicit programming, and includes deep learning, a subset that relies on artificial neural networks.</p>
<p>Deep learning can be seen as the most independent AI system as it manages both <a class="-as-ga" style="text-decoration: underline;" href="https://www.computer.org/publications/tech-news/trends/deep-learning-vs-machine-learning-whats-the-difference" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.computer.org/publications/tech-news/trends/deep-learning-vs-machine-learning-whats-the-difference">feature input and classification.</a> These systems also require vast amounts of data and rely on parallel processes as their algorithms are primarily self-directed once trained.</p>
<p>AI machines, including deep learning models, are already a part of our lives. There are countless real-world AI applications, which only stand to increase. Everything from mobile devices to autonomous vehicles utilize AI models for tasks like location-based recommendation, auto-braking, camera-based object classification, and navigation through complex environments.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-9362" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045102/SK-hynix_Let-PIM-Do-the-Learning.png" alt="" width="1000" height="551" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045102/SK-hynix_Let-PIM-Do-the-Learning.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045102/SK-hynix_Let-PIM-Do-the-Learning-680x375.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/06/15045102/SK-hynix_Let-PIM-Do-the-Learning-768x423.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source">The art of computationally mimicking human intelligence takes many forms</p>
<h3>Overcoming the Challenges</h3>
<p>The road to PIM development was not without detours, roadblocks, and congestion. As the technology continues to advance, there are still obstacles to surmount across design, manufacturing, cost, and more.</p>
<p>Designing PIM requires novel approaches to chip structures. Traditional semiconductors do not need to accommodate near-memory queues or perform parallel functions in the way PIM chips do. Once at the manufacturing stage, space and distance considerations become paramount: it is crucial to reduce how far signals must travel without increasing cost or the risk of thermal issues.</p>
<p>Furthermore, integrated chips such as PIM have an increased dependency on memory – a unique feature that is both a blessing and a curse. Any damage to the memory components could result in compromised data.</p>
<p>With the AI market expected <a class="-as-ga" style="text-decoration: underline;" href="https://www.statista.com/statistics/607716/worldwide-artificial-intelligence-market-revenues/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.statista.com/statistics/607716/worldwide-artificial-intelligence-market-revenues/">to reach $190 billion by 2025,</a> investment in AI is ripe. According to a Boston Consulting Group and MIT Sloan Management Review study, <a class="-as-ga" style="text-decoration: underline;" href="https://www.forbes.com/sites/louiscolumbus/2017/09/10/how-artificial-intelligence-is-revolutionizing-business-in-2017/?sh=53667e385463" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.forbes.com/sites/louiscolumbus/2017/09/10/how-artificial-intelligence-is-revolutionizing-business-in-2017/?sh=53667e385463">83% of businesses</a> say AI is a strategic priority. SK hynix will continue to advance its expertise in the area and lead this growing sector in the years to come.</p>
<p><iframe loading="lazy" title="SK hynix GDDR6-AiM (Accelerator in memory)" width="1080" height="608" src="https://www.youtube.com/embed/rTULRWpbd1k?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/let-pim-do-the-learning-the-brainpower-behind-the-ai-memory-chip/">Let PIM Do the Learning: The Brainpower Behind the AI Memory Chip</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 24 Mar 2022 07:00:24 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[Edge Computing]]></category>
		<category><![CDATA[Application]]></category>
		<category><![CDATA[Neuromorphic Semiconductor]]></category>
		<category><![CDATA[AI Chip]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=8640</guid>

					<description><![CDATA[<p>Artificial intelligence (AI), which is regarded as the ‘the most significant paradigm shift in history,’ is becoming the center of our lives in remarkable speed. From autonomous vehicles, AI assistants to neuromorphic semiconductor that mimics the human brain, artificial intelligence has already exceeded human intelligence and learning speed, and is now quickly being applied across [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/">The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Artificial intelligence (AI), regarded as ‘the most significant paradigm shift in history,’ is becoming the center of our lives at remarkable speed. From autonomous vehicles and AI assistants to neuromorphic semiconductors that mimic the human brain, AI has already surpassed human learning speed in some tasks and is now quickly being applied across various areas, affecting many aspects of our lives. What are the key applications of AI technology, and how are they realized?</p>
<p>(Check <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/the-present-and-future-of-ai-semiconductor/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/the-present-and-future-of-ai-semiconductor/">here</a> to discover more insights from SNU professor Deog-Kyoon Jeong about AI semiconductors!)</p>
<h3 class="tit">Cloud Computing vs. Edge Computing</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050037/220317_Figure_1.jpg" alt="" /></p>
<p class="source">Figure 1. Cloud Computing vs. Edge Computing</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050037/220317_Figure_1.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>One AI application, positioned at the opposite end of the spectrum from cloud services, is edge computing<sup>1</sup>. Applications that must process massive amounts of input data, such as video or image streams, either process the data locally at the edge or transfer it to a cloud service over wired or wireless links, preferably after reducing its volume. Accelerators designed specifically for edge computing therefore make up a large share of AI chip design. AI chips used in autonomous driving are a good example: they perform image classification and object detection by processing data-heavy images with a CNN<sup>2</sup> and a series of neural-network operations.</p>
<h3 class="tit">AI and the Issue of Privacy</h3>
<p><!-- 이미지 롤링 swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050042/220317_Figure_2._AmazonAlexa.png" alt="" /></p>
<p class="source">Figure 2. Amazon’s Alexa<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/">NY Times</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050042/220317_Figure_2._AmazonAlexa.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050045/220317_Figure_2._SKT_NUGU.jpg" alt="" /></p>
<p class="source">Figure 2. SK Telecom’s NUGU<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.nugu.co.kr/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.nugu.co.kr/">SKT NUGU</a> )</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050045/220317_Figure_2._SKT_NUGU.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
</div>
</div>
<p><!-- btn / paging --></p>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p><!-- // 이미지 롤링 swiper start --></p>
<p>Another area of AI application is conversational services such as Amazon’s Alexa or SK Telecom’s NUGU. However, such services cannot achieve wide adoption unless privacy is protected. A conversational AI service whose microphone continuously listens to conversations at home can, by its nature, never develop beyond a simple recreational service, so many efforts are under way to resolve these privacy issues.</p>
<p>The latest research trend for solving the privacy issue is homomorphic encryption<sup>3</sup>. With homomorphic encryption, users’ voice or other sensitive information such as medical data is never transmitted as is. It is a form of encryption that allows multiplication and addition to be performed on encrypted data (ciphertext) in the cloud without first decrypting it; only the user holds the decryption key. The results are likewise returned in encrypted form, and only the user can decrypt and read them. Therefore, no one other than the individual user, including the server, can see the original data. A homomorphic service, however, requires an immense amount of computation, up to several thousand or tens of thousands of times more than an ordinary plaintext DNN<sup>4</sup> service. A key research area going forward will be reducing service time by dramatically improving computational performance with specially designed homomorphic accelerators<sup>5</sup>.</p>
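<p>As a minimal sketch of the homomorphic property described above: textbook RSA is multiplicatively homomorphic, so a server can multiply two ciphertexts and the key holder recovers the product of the plaintexts. The tiny parameters below are purely illustrative and completely insecure; practical homomorphic encryption uses far more elaborate schemes supporting both addition and multiplication.</p>

```python
# Textbook RSA: Enc(m1) * Enc(m2) mod n decrypts to m1 * m2.
# Toy parameters (insecure, for illustration only).
p, q = 61, 53
n = p * q                 # modulus 3233
e, d = 17, 2753           # public / private exponents, e*d = 1 mod phi(n)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 3
c1, c2 = encrypt(m1), encrypt(m2)

# The "server" computes on ciphertexts only, never seeing 7 or 3...
c_prod = (c1 * c2) % n

# ...and only the key holder can recover the result.
print(decrypt(c_prod))    # 21
```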
<h3 class="tit">AI Chip and Memory</h3>
<p>In a large-scale DNN, the number of weights is far too high for a processor to hold all of them on chip. As a result, the processor must issue a read access to an external large-capacity DRAM every time it needs a weight. If a weight is used only once and never reused, the data fetched at considerable cost in energy and time is wasted, which is extremely inefficient compared with storing and reusing all weights inside the processor. Therefore, processing large amounts of data with the enormous number of weights in a large-scale DNN calls for parallel connections and/or batch operation that reuses the same weights multiple times. In other words, several processors, each paired with DRAMs, must be connected in parallel so that weights and intermediate data can be distributed across the DRAMs and reused. High-speed interconnection among the processors is essential in this structure, which is more efficient than having all processors share a single access path, and only this structure can deliver maximum performance.</p>
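<p>A back-of-the-envelope model makes the weight-reuse argument concrete. The parameter count and data type here are hypothetical, chosen only to make the arithmetic simple: each weight is fetched from DRAM once per batch, so every input in a batch of B shares the cost of that fetch.</p>

```python
# Hypothetical model: a 1-billion-parameter DNN stored in fp16.
WEIGHTS = 1_000_000_000
BYTES_PER_WEIGHT = 2  # fp16

def weight_traffic_per_input(batch_size):
    # DRAM bytes fetched for weights, amortized over the batch:
    # the whole weight set streams in once, then is reused B times.
    return WEIGHTS * BYTES_PER_WEIGHT / batch_size

for b in (1, 8, 64):
    print(f"batch {b:>2}: {weight_traffic_per_input(b) / 1e9:.3f} GB of weight traffic per input")
```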
<h3 class="tit">Interconnection of AI Chips</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/18084455/220318_SK-hynix_0308_02.png" alt="" /></p>
<p class="source">Figure 3. Interconnection Network of AI Chips</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/18084455/220318_SK-hynix_0308_02.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The performance bottleneck that arises when connecting numerous processors depends on the available bandwidth and latency as well as on the topology of the interconnection. These elements bound the size and performance of the DNN. In other words, when one tries to obtain N-times-higher performance by connecting N accelerators in parallel, bottlenecks in the latency and bandwidth of the interconnections prevent the system from delivering the desired performance.</p>
<p>Therefore, the interconnection structure between processors is crucial for efficiently scaling performance. In the NVIDIA A100 GPU, NVLink 3.0 plays that role: the GPU has 12 NVLink channels, each providing 50 GB/s of bandwidth. Four GPUs can be connected directly as a clique, using four channels per GPU pair. To connect 16 GPUs, however, an NVSwitch, an external chip dedicated to interconnection, is required. Google’s TPU v2, for its part, is designed to form a 2D torus using its Inter-Core Interconnect (ICI) with an aggregate bandwidth of 496 GB/s.</p>
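<p>The NVLink figures cited above work out as follows; this is a quick sanity check on the arithmetic only, with per-direction bandwidth details omitted.</p>

```python
# NVLink 3.0 on the A100, per the figures cited in the text.
CHANNELS = 12
GB_S_PER_CHANNEL = 50

aggregate = CHANNELS * GB_S_PER_CHANNEL
print(aggregate)  # 600 GB/s of total NVLink bandwidth per GPU

# In a 4-GPU clique each GPU has 3 peers, so the 12 channels
# split evenly into 4 channels per direct GPU-to-GPU link.
peers = 4 - 1
channels_per_link = CHANNELS // peers
print(channels_per_link * GB_S_PER_CHANNEL)  # 200 GB/s between any GPU pair
```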
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050053/220317_Figure_4._Nvidia%E2%80%99s_GPU_Accelerator_A100_using_6_HBMs.jpg" alt="" /></p>
<p class="source">Figure 4. NVIDIA’s GPU Accelerator A100 using 6 HBMs<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.theverge.com/2020/5/14/21258419/nvidia-ampere-gpu-ai-data-centers-specs-a100-dgx-supercomputer" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.theverge.com/2020/5/14/21258419/nvidia-ampere-gpu-ai-data-centers-specs-a100-dgx-supercomputer">The Verge</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050053/220317_Figure_4._Nvidia%E2%80%99s_GPU_Accelerator_A100_using_6_HBMs.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The way in which processors are interconnected has a huge impact on the whole system. If they are interconnected in a mesh or torus structure, the physical connections between chips are simple and the structure is easy to build, but latency grows in proportion to distance, because reaching a far-away node requires hopping across several processors. The most extreme approach is a clique that interconnects every processor to every other one-to-one, but the number of links then grows as N(N-1)/2 and each chip needs a port per peer, driving pin counts and PCB congestion beyond what is allowable; in actual designs, connecting only about four processors this way is the practical limit.</p>
<p>More generally, a crossbar switch such as an NVSwitch is another attractive option, but this method concentrates all connections on the switch: the more processors to be interconnected, the harder the PCB layout becomes as transmission lines converge around the switch. The best method is to structure the whole network as a binary tree, connecting the processors at the leaves and allocating the most bandwidth to the top of the tree. A binary fat tree built this way is the most ideal structure and can deliver maximum performance with scalability.</p>
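<p>The scaling argument in the preceding paragraphs can be made concrete by counting point-to-point links in each topology. This is only a sketch: switch-internal wiring and per-link bandwidth are not modeled.</p>

```python
def clique_links(n):
    # Every processor wired directly to every other: n*(n-1)/2 links,
    # and each chip needs n-1 ports, which is what drives pin counts up.
    return n * (n - 1) // 2

def ring_links(n):
    # 1-D torus (ring): cheap to wire, but distant nodes need many hops.
    return n

def fat_tree_links(n):
    # Binary tree over n leaf processors: 2n-1 nodes, hence 2n-2 links
    # (bandwidth per link would then be fattened toward the root).
    return 2 * n - 2

for n in (4, 16, 64):
    print(f"{n:>2} processors: clique={clique_links(n):>4}  ring={ring_links(n):>3}  fat tree={fat_tree_links(n):>4}")
```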
<h3 class="tit">Neuromorphic AI Chip</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050055/220317_Figure_5.jpg" alt="" /></p>
<p class="source">Figure 5. Cloud Server Processor vs. Neuromorphic AI Processor</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050055/220317_Figure_5.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>Processors for cloud servers that serve as DNN accelerators represent and process data digitally, since their computational structure is fundamentally a software simulation of a neural network running on top of hardware. Recently, research has been growing on neuromorphic AI chips which, unlike this simulation approach, directly mimic the neural network of a living organism and its signals, mapping them onto analog electronic circuits that behave the same way. In actual applications this approach represents the original data in analog form: one signal is represented by one node, the interconnections are hardwired rather than defined by software, and the weights are stored in analog form.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050057/220317_Figure_6.jpg" alt="" /></p>
<p class="source">Figure 6. Previous semiconductor vs. Neuromorphic semiconductor</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050057/220317_Figure_6.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The advantage of such a structure is that it achieves maximum parallelism with minimum energy, and neuromorphic chips can secure a great advantage in certain applications. Because the structure is fixed, it lacks programmability, but it can offer a great advantage in certain small-scale edge computing applications. In fact, neuromorphic processors are valuable in applications such as processing the AI signals of IoT sensors with high energy efficiency, or image classification that processes large amounts of video data using a CNN with fixed weights. Because the weights are fixed, however, they are difficult to use in applications that require continued learning. They are also structurally limited in exploiting parallelism by interconnecting several chips for large-scale computation, which restricts their practical application to edge computing. The neuromorphic structure can also be realized in digital form, as in IBM’s TrueNorth, but its scalability is known to be limited, making wide practical application difficult.</p>
<h3 class="tit">Current Status of AI Chip Development</h3>
<p>To create a smart digital assistant that can converse with humans, Meta (formerly known as Facebook), which needs to process massive amounts of user data, is <a class="-as-ga" style="text-decoration: underline;" href="https://engineering.fb.com/2021/06/28/data-center-engineering/asicmon/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://engineering.fb.com/2021/06/28/data-center-engineering/asicmon/">designing an AI chip</a> specialized to have basic knowledge about the world. The company is also internally <a class="-as-ga" style="text-decoration: underline;" href="https://www.theinformation.com/articles/facebook-develops-new-machine-learning-chip" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.theinformation.com/articles/facebook-develops-new-machine-learning-chip">developing AI chips</a> that will perform moderation to decide whether to post real-time videos that are uploaded to Facebook.</p>
<p>Amazon, a technology company focused mainly on e-commerce and cloud computing, has already developed its own AI accelerator, <a class="-as-ga" style="text-decoration: underline;" href="https://aws.amazon.com/ko/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://aws.amazon.com/ko/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/">AWS Inferentia</a>, to power its digital assistant Alexa and uses it to recognize audio signals. AWS, its cloud service arm, has built infrastructure around the Inferentia chip and offers cloud users services that accelerate deep learning workloads, much as Google’s TPU does.</p>
<p>Microsoft, on the other hand, <a class="-as-ga" style="text-decoration: underline;" href="https://www.cnbc.com/2018/05/07/microsoft-is-luring-a-i-developer-by-offering-them-faster-chips.html" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.cnbc.com/2018/05/07/microsoft-is-luring-a-i-developer-by-offering-them-faster-chips.html">uses field-programmable gate arrays (FPGAs) in its data centers</a> and has introduced a method of securing the best performance by reconfiguring precision and DNN structure according to the application algorithm, creating AI chips optimized not only for current applications but also for future ones. This method, however, incurs substantial overhead to reconfigure the structure and logic circuits even once an optimal structure has been identified. As a result, it is unclear whether it will deliver a real benefit, since it is inevitably at a disadvantage in energy and performance compared with ASIC chips designed for a specific purpose.</p>
<p>A number of fabless startups are competing against NVIDIA by developing general-purpose programmable accelerators that are not specialized to certain areas of application. Many companies, including Cerebras Systems, Graphcore, and Groq, are joining the fierce competition. In Korea, SK Telecom, in collaboration with SK hynix, has developed SAPEON, which will soon be used as an AI chip in data centers. Furiosa AI is also preparing to commercialize its silicon chip, Warboy.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050059/220317_Figure_7._SAPEON_X220.jpg" alt="" /></p>
<p class="source">Figure 7. SAPEON X220<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.sktelecom.com/en/press/press_detail.do?idx=1492" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.sktelecom.com/en/press/press_detail.do?idx=1492">SK Telecom Press Release</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050059/220317_Figure_7._SAPEON_X220.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<h3 class="tit">The Importance of the Compiler</h3>
<p>The performance of such AI hardware depends greatly on how well its software is optimized. Operating thousands or tens of thousands of computational circuits simultaneously through a systolic array and gathering the results efficiently requires highly advanced coordination. Ordering the input data to feed the numerous computational circuits in the AI chip, keeping them working continuously in lockstep, and transmitting the output to the next stage can only be done through a specialized library. This means that developing an efficient library, and the compiler to use it, is as important as designing the hardware.</p>
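<p>The lockstep schedule described above can be sketched in a few lines. This is an output-stationary toy model, not any vendor’s actual design: the operand pair for output position (i, j) reaches its processing element at cycle k + i + j, the skew that keeps every element busy in lockstep.</p>

```python
# Toy output-stationary systolic array computing C = A @ B in lockstep.
def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    # Skewed scheduling: operand pair (A[i][k], B[k][j]) arrives at
    # PE (i, j) on cycle k + i + j, so the whole product finishes
    # after 3n - 2 cycles.
    for t in range(3 * n - 2):
        for i in range(n):
            for j in range(n):
                k = t - i - j            # which operand pair arrives now
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))   # [[19, 22], [43, 50]]
```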
<p>NVIDIA’s GPU started as a graphics engine, but NVIDIA provided a development environment, <a class="-as-ga" style="text-decoration: underline;" href="https://developer.nvidia.com/cuda-toolkit" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://developer.nvidia.com/cuda-toolkit">CUDA</a>, that lets users write programs easily and run them efficiently on the GPU, which made the GPU widely used across the AI community. Google likewise provides its own development environment, <a class="-as-ga" style="text-decoration: underline;" href="https://www.tensorflow.org/learn" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.tensorflow.org/learn">TensorFlow</a>, to help develop software using TPUs, making TPUs easy for users to adopt. More diverse development environments must be provided in the future, which will broaden the applicability of AI chips.</p>
<h3 class="tit">AI Chip and its Energy Consumption</h3>
<p>The direction of AI services in the future must focus on enhancing the quality of service while reducing the required energy consumption. Efforts are therefore expected to center on reducing the power consumption of AI chips and accelerating the development of energy-saving DNN structures. In fact, it is known that training on ImageNet takes about 10<sup>19</sup> floating-point operations to bring the error rate below 5%, which is said to be equivalent to the energy consumed by the citizens of New York City in a month. In the example of <a class="-as-ga" style="text-decoration: underline;" href="https://deepmind.com/research/case-studies/alphago-the-story-so-far" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://deepmind.com/research/case-studies/alphago-the-story-so-far">AlphaGo</a>, which played Go against 9-dan professional Lee Sedol in 2016, <a class="-as-ga" style="text-decoration: underline;" href="https://www.businessinsider.com/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.businessinsider.com/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3">a total of 1,202 CPUs and 176 GPUs were used</a> for inference, with an estimated power consumption of 1 MW, tremendous compared with the human brain’s roughly 20 W.</p>
<p>AlphaGo Zero, developed later, exceeded AlphaGo’s performance after merely 72 hours of training by self-play reinforcement learning with only 4 TPUs. This case shows the potential for reducing energy consumption through a new neural-network structure and learning method, and research and development on energy-saving DNN structures must continue.</p>
<h3 class="tit">The Future of the AI Semiconductor Market</h3>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050101/220317_Figure_8.jpg" alt="" /></p>
<p class="source">Figure 8. AI Chip Market Outlook<br />
(Source: <a class="-as-ga" style="text-decoration: underline;" href="https://www.statista.com/statistics/1283358/artificial-intelligence-chip-market-size/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://www.statista.com/statistics/1283358/artificial-intelligence-chip-market-size/">Statista</a>)</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050101/220317_Figure_8.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>The successful accomplishments made in the field of AI will expand its scope of application, triggering stunning market growth as well. For example, SK hynix recently developed a next-generation intelligent semiconductor memory, processing-in-memory (PIM)<sup>6</sup>, to resolve the data-access bottleneck in AI and big data processing. SK hynix unveiled the <a class="-as-ga" style="text-decoration: underline;" href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer" data-ga-category="sk-hynix-newsroom" data-ga-action="click" data-ga-label="goto_https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/">‘GDDR6-AiM (Accelerator in Memory)’ sample</a> as its first product to apply PIM, and announced this PIM development at ISSCC 2022<sup>7</sup>, the International Solid-State Circuits Conference, the most authoritative international conference in the semiconductor field, held in San Francisco at the end of February this year.</p>
<p><!-- 이미지 사이즈 지정해서 업로드 --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050035/220317_Figure_9._%E2%80%98GDDR6-AiM%E2%80%99_of_SK_hynix.jpg" alt="" /></p>
<p class="source">Figure 9. GDDR6-AiM developed by SK hynix</p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/03/17050035/220317_Figure_9._%E2%80%98GDDR6-AiM%E2%80%99_of_SK_hynix.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- // 이미지 사이즈 지정해서 업로드 --></p>
<p>Application systems will further drive a wider AI market and continuously create new areas, with service quality differentiated by the quality of inference that the underlying neural-network structure delivers. AI semiconductors, the backbone of AI systems, will be differentiated by how fast and accurately they can perform inference and training while using little energy. Recent research findings show that the energy efficiency of today’s systems is extremely poor, so there is a growing need for research on new neural-network structures that focus not only on function but also on energy efficiency. In hardware, the core element that determines energy efficiency is the memory access method. Accordingly, processing-in-memory (PIM), which computes inside the memory rather than accessing it separately, and neuromorphic computing, which mimics the neural network by storing synapse weights in analog memory, will become important fields of research.</p>
<p><!-- 각주 스타일 --></p>
<div style="border-top: 1px solid #e0e0e0;"></div>
<p><!--<strong>[Reference]</strong>--></p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup>Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data.<br />
<sup>2</sup>Convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data.<br />
<sup>3</sup>Homomorphic encryption is a form of encryption that permits users to perform computations on its encrypted data without first decrypting it.<br />
<sup>4</sup>A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.<br />
<sup>5</sup>An accelerator is special-purpose hardware built from processing and computation chips.<br />
<sup>6</sup>Processing in memory (PIM, sometimes called processor in memory) is a next-generation technology that relieves data congestion in AI and big data processing by adding computational functions to semiconductor memory. A product based on this technology is sometimes called a PIM chip.<br />
<sup>7</sup>The International Solid-State Circuits Conference was held virtually from Feb. 20 to Feb. 28 this year with the theme of “Intelligent Silicon for a Sustainable World.”</p>
<p><!-- //각주 스타일 --></p>
<p><!-- 기고문 스타일 --></p>
<p><!-- namecard --></p>
<div class="namecard">
<p><img decoding="async" class="alignnone size-full wp-image-3446" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/02/18062629/Dong_kyoon_Jeong.png" alt="" /></p>
<div class="name">
<p class="tit">By <strong>Deog-kyoon Jeong, Ph.D.</strong></p>
<p><span class="sub">Professor<br />
Electrical &amp; Computer Engineering<br />
Seoul National University (SNU) College of Engineering<br />
</span></p>
</div>
</div>
<p><!-- //기고문 스타일 --></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/various-applications-of-ai-technology/">The Present and Future of AI Semiconductor (2): Various Applications of AI Technology</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
