<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Supercomputing 2023 - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/supercomputing-2023/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Tue, 05 Dec 2023 12:07:28 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>Supercomputing 2023 - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SK hynix Debuts at SC23 to Showcase Next-Gen AI &#038; HPC Solutions</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 20 Nov 2023 00:00:57 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[eSSD]]></category>
		<category><![CDATA[CXL]]></category>
		<category><![CDATA[HBM3E]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[SC23]]></category>
		<category><![CDATA[Supercomputing 2023]]></category>
		<category><![CDATA[High-performance computing]]></category>
		<category><![CDATA[OCS]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=13586</guid>

					<description><![CDATA[<p>SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23) held in Denver, Colorado between November 12–17. Organized by the Association for Computing Machinery and IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/">SK hynix Debuts at SC23 to Showcase Next-Gen AI & HPC Solutions</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23), held in Denver, Colorado from November 12–17. Organized by the Association for Computing Machinery and the IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first appearance at the conference by introducing its groundbreaking memory solutions to the HPC community. During the six-day event, several SK hynix employees also gave presentations on the impact of the company’s memory solutions on AI and HPC.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-13587 size-full" title="SK hynix’s exhibition booth at SC23" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01.png" alt="SK hynix’s exhibition booth at SC23" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 1. SK hynix’s exhibition booth at SC23</p>
<p>&nbsp;</p>
<h3 class="tit">Displaying Advanced HPC &amp; AI Products</h3>
<p>At SC23, SK hynix showcased its products tailored for AI and HPC to underline its leadership in the AI memory field. Among these next-generation products, <a href="https://news.skhynix.com/sk-hynix-develops-worlds-best-performing-hbm3e/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">HBM3E</span></a> attracted attention as an HBM<sup>1</sup> solution that meets the industry’s highest standards of speed, capacity, heat dissipation, and power efficiency. These capabilities make it particularly suitable for data-intensive AI server systems. HBM3E was presented alongside <a href="https://news.skhynix.com/sk-hynix-to-supply-industrys-first-hbm3-dram-to-nvidia/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">NVIDIA’s H100</span></a>, a high-performance AI GPU that uses HBM3 for its memory.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>High Bandwidth Memory (HBM)</strong>: A high-value, high-performance product that achieves much higher data processing speeds than conventional DRAM by vertically connecting multiple DRAM dies with through-silicon vias (TSVs).</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054402/SK-hynix_SC23_02.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054350/SK-hynix_SC23_03.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 2. HBM3E, an HBM solution applicable to data-intensive AI server systems</p>
<p>&nbsp;</p>
<p>SK hynix also held a demonstration of AiMX<sup>2</sup>, the company’s generative AI accelerator<sup>3</sup> card, which specializes in large language models (LLMs) using <a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">GDDR6-AiM</span></a> chips that leverage PIM<sup>4</sup> technology. This product is set to play a key role in the advancement of data-intensive generative AI inference systems, as it significantly reduces the AI inference time of server systems compared to GPU-based systems while also consuming less power.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>2</sup><strong>Accelerator-in-Memory Based Accelerator (AiMX)</strong>: SK hynix’s accelerator card product that specializes in large language models (AI that learns with large amounts of text data such as ChatGPT) using GDDR6-AiM chips.<br />
<sup>3</sup><strong>Accelerator</strong>: A special-purpose hardware device that uses a chip designed specifically for processing and computing information.<br />
<sup>4</sup><strong>Processing-In-Memory (PIM)</strong>: A next-generation technology that adds computational capabilities to semiconductor memories to solve the problem of data movement congestion in AI and big data processing.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054340/SK-hynix_SC23_04.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054328/SK-hynix_SC23_05.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 3. SK hynix’s booth providing a demonstration of AiMX, an AI accelerator card</p>
<p>&nbsp;</p>
<p><a href="https://news.skhynix.com/sk-hynix-develops-ddr5-dram-cxltm-memory-to-expand-the-cxl-memory-ecosystem/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">CXL</span></a><sup>5</sup> was another highlight at SK hynix’s booth. Based on PCIe<sup>6</sup>, CXL is a standardized interface that helps increase the efficiency of HPC systems. By offering flexible memory expansion, CXL is a promising interface for HPC workloads such as AI and big data applications. In particular, SK hynix’s Niagara CXL disaggregated memory prototype platform was showcased as a pooled memory solution that can improve system performance in AI and big data distributed processing systems.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>Compute Express Link (CXL)</strong>: A PCIe-based next-generation interconnect protocol on which high-performance computing systems are based.<br />
<sup>6</sup><strong>Peripheral Component Interconnect Express (PCIe)</strong>: A high-speed serial input/output interface used on the mainboards of digital devices. PCIe’s data-transfer speed doubles with each new generation.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054317/SK-hynix_SC23_06.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054304/SK-hynix_SC23_07.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054253/SK-hynix_SC23_08.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 4. SK hynix’s CXL pooled memory solution, Niagara, and CXL-based computational memory solution (CMS)</p>
<p>&nbsp;</p>
<p>Additionally, SK hynix presented the results of its collaboration with Los Alamos National Laboratory (LANL) to improve the performance and reduce the energy requirements of HPC physics applications. Called the CXL-based <a href="https://news.skhynix.com/sk-hynix-introduces-industrys-first-cxl-based-cms-at-the-ocp-global-summit/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">computational memory solution (CMS)</span></a><sup>7</sup>, the product can accelerate indirect memory accesses while also significantly reducing data movement. Such technological enhancements are also applicable to other memory-intensive domains such as AI and graph analytics.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>7</sup><strong>Computational Memory Solution (CMS)</strong>: A memory solution that offers machine learning and data filtering functions, which are frequently performed by big data analytics applications. Just like CXL, CMS’s memory capacity is highly scalable.</p>
<p>Lastly, object-based computational storage (OCS) was shown as part of SK hynix’s efforts to develop an analytics ecosystem with multiple partners. OCS minimizes data movement between analytics applications and storage, lightens the storage software stack, and accelerates data analysis. Through a demonstration, the company also showed how its interface technology enhances data processing capabilities in OCS.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13593 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 5. Object-based computational storage (OCS) was developed as part of SK hynix’s efforts to form an analytics ecosystem</p>
<p>&nbsp;</p>
<h3 class="tit">Innovative Data Center &amp; eSSD Solutions</h3>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13592 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 6. A presentation of data center solutions including MCR DIMM, PS1010, and PS1030</p>
<p>&nbsp;</p>
<p>SK hynix also displayed a range of its data center solutions at the conference, including its DDR5 Registered Dual In-line Memory Module (RDIMM). Built on 1bnm, the fifth generation of the 10nm process technology, DDR5 RDIMM reaches speeds of up to 6,400 megabits per second (Mbps). The display also featured the DDR5 Multiplexer Combined Ranks (MCR) DIMM, which reaches speeds of up to 8,800 Mbps. With such rapid speeds, these DDR5 solutions are well suited to AI computing in high-performance servers.</p>
<p>Visitors to the SK hynix booth could also see its latest enterprise SSD (eSSD) products, including the PCIe Gen5-based PS1010 E3.S and PS1030. In particular, the PS1030 delivers the industry’s fastest sequential read speed of 14,800 megabytes per second (MBps), making it ideal for big data and machine learning workloads.</p>
<h3 class="tit">Sharing the Potential of SK hynix’s Memory Solutions</h3>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13590 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 7. Technical Leader Yongkee Kwon presents an LLM inference solution using SK hynix’s AiM</p>
<p>&nbsp;</p>
<p>During the conference, SK hynix employees also gave presentations on the application of the company’s memory solutions for AI and HPC. On the fourth day of the conference, Yongkee Kwon, Technical Leader of PIM Hardware in the Solution Advanced Technology Division, gave a talk titled “Cost-Effective Large Language Model (LLM) Inference Solution Using SK hynix’s AiM.” Kwon revealed how SK hynix’s AiM, a PIM device specialized for LLMs, can significantly improve the performance and energy efficiency of LLM inference. When applied to Meta’s Open Pre-trained Transformers (OPT) language model, an open-source alternative to OpenAI&#8217;s GPT-3, AiM can reach speeds up to ten times higher than state-of-the-art GPU systems while also offering lower costs and energy consumption.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13591 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 8. Technical Leader Hokyoon Lee talks about the innovative solutions offered by SK hynix’s CXL solution, Niagara</p>
<p>&nbsp;</p>
<p>On the same day, Hokyoon Lee, Technical Leader of System Software in Memory Forest x&amp;D, gave a presentation titled “CXL-based Memory Disaggregation for HPC and AI Workloads.” SK hynix’s Niagara addresses the issue of stranded memory, or unused memory in one server that can never be utilized by other servers, with its elastic memory<sup>8</sup> feature. Additionally, Niagara’s memory sharing feature provides a solution to the heavy network traffic of conventional distributed computing. In the session, the presenters demonstrated the effectiveness of memory sharing in a live simulation with the Ray distributed AI framework, which is used in ChatGPT.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>8</sup><strong>Elastic memory</strong>: A type of memory with capacity that can be adjusted as required by servers when multiple hosts share memory, such as pooled memory.</p>
<p>A day later, Jongryool Kim, Director and Technical Leader of SOLAB in Memory Forest x&amp;D, presented on “Accelerating Data Analytics Using Object Based Computational Storage in an HPC.” Introducing SK hynix’s collaboration with LANL on computational storage technologies, Kim proposed object-based computational storage (OCS) as a new computational storage platform for data analytics in HPC. Due to its high scalability and data-aware characteristics, OCS can perform analytics independently without help from compute nodes, highlighting its potential as the future of computational storage in HPC.</p>
<p>As a globally leading AI memory provider, SK hynix will continue to develop solutions that enable the advancement of AI and HPC.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13589 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 9. Technical Leader Jongryool Kim explains how SK hynix participated in collaborative research into computational storage technologies</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/">SK hynix Debuts at SC23 to Showcase Next-Gen AI & HPC Solutions</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
