<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>High-performance computing - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/high-performance-computing/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Mon, 10 Feb 2025 13:36:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>High-performance computing - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SK hynix Presents Innovative AI &#038; HPC Solutions at SC24</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-innovative-ai-hpc-solutions-at-sc24/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 21 Nov 2024 01:00:07 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Supercomputing 2024]]></category>
		<category><![CDATA[SC24]]></category>
		<category><![CDATA[OCS]]></category>
		<category><![CDATA[High-performance computing]]></category>
		<category><![CDATA[AI Memory]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[HBM3E]]></category>
		<category><![CDATA[CXL]]></category>
		<category><![CDATA[eSSD]]></category>
		<guid isPermaLink="false">https://skhynix-news-global-stg.mock.pe.kr/?p=16873</guid>

					<description><![CDATA[<p>SK hynix is showcasing its advanced memory solutions for AI and high-performance computing (HPC) at Supercomputing 2024 (SC24) in Atlanta, the U.S., held from November 17–22. This annual event, organized by the Association for Computing Machinery and the IEEE Computer Society since 1988, features the latest developments in HPC, networking, storage, and data analysis. SK [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-innovative-ai-hpc-solutions-at-sc24/">SK hynix Presents Innovative AI & HPC Solutions at SC24</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>SK hynix is showcasing its advanced memory solutions for AI and high-performance computing (HPC) at Supercomputing 2024 (SC24) in Atlanta, the U.S., held from November 17–22. This annual event, organized by the Association for Computing Machinery and the IEEE Computer Society since 1988, features the latest developments in HPC, networking, storage, and data analysis.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s booth at SC24" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132429/SK-hynix_SC24_01-1.png" alt="SK hynix’s booth at SC24" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s booth at SC24" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132439/SK-hynix_SC24_02.png" alt="SK hynix’s booth at SC24" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">SK hynix’s booth at SC24</p>
<p>&nbsp;</p>
<p>Returning for its second year, SK hynix is underlining its AI memory leadership through a display of innovative memory products and insightful presentations on AI and HPC technologies. In line with the conference’s “HPC Creates” theme, which underscores the impact of supercomputing across various industries, the company is showing how its memory solutions drive progress in diverse fields.</p>
<h3 class="tit">Showcasing Advanced Memory Solutions for AI &amp; HPC</h3>
<p>At the booth, SK hynix is demonstrating and displaying a range of groundbreaking products tailored for AI and HPC. The products being demonstrated include its CMM (CXL<sup>®1</sup> Memory Module)-DDR5<sup>2</sup>, AiMX<sup>3</sup> accelerator card, and Niagara 2.0, among others.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Compute Express Link<sup>®</sup> (CXL<sup>®</sup>)</strong>: A PCIe-based next-generation interconnect protocol on which high-performance computing systems are based.<br />
<sup>2</sup><strong>CXL Memory Module-DDR5 (CMM-DDR5)</strong>: A next-generation DDR5 memory module utilizing CXL technology to boost bandwidth and performance for AI, cloud, and high-performance computing.<br />
<sup>3</sup><strong>Accelerator-in-Memory Based Accelerator (AiMX)</strong>: SK hynix&#8217;s specialized accelerator card tailored for large language model processing using GDDR6-AiM chips.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Live demonstrations of CMM-DDR5 and AiMX at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132452/SK-hynix_SC24_03.png" alt="Live demonstrations of CMM-DDR5 and AiMX at the booth" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Live demonstrations of CMM-DDR5 and AiMX at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132507/SK-hynix_SC24_04.png" alt="Live demonstrations of CMM-DDR5 and AiMX at the booth" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Live demonstrations of CMM-DDR5 and AiMX at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132532/SK-hynix_SC24_06.png" alt="Live demonstrations of CMM-DDR5 and AiMX at the booth" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Live demonstrations of CMM-DDR5 and AiMX at the booth</p>
<p>&nbsp;</p>
<p>The live demonstration of CMM-DDR5 with a server platform featuring Intel<sup>®</sup> Xeon<sup>®</sup> 6 processors shows how CXL<sup>®</sup> memory technology accelerates AI workloads under various usage models. Moreover, visitors to the booth can learn about the latest CMM-DDR5 product with EDSFF<sup>4</sup> which offers improvements in TCO<sup>5</sup> and performance. Another live demonstration features AiMX integrated in an ASRock Rack Server to run Meta’s Llama 3 70B, a large language model (LLM) with 70 billion parameters. This demonstration highlights AiMX’s efficiency in processing large datasets while achieving high performance and low power consumption, addressing the computational load challenges posed by attention layers<sup>6</sup> in LLMs.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>4</sup><strong>Enterprise and Data Center Standard Form Factor (EDSFF)</strong>: A collection of SSD form factors specifically used for data center servers.<br />
<sup>5</sup><strong>Total cost of ownership (TCO)</strong>: The complete cost of acquiring, operating, and maintaining an asset, including purchase, energy, and maintenance expenses.<br />
<sup>6</sup><strong>Attention layer</strong>: A mechanism that enables a model to assess the relevance of input data, prioritizing more important information for processing.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132207/SK-hynix_SC24_07.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132220/SK-hynix_SC24_08.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132234/SK-hynix_SC24_09.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132521/SK-hynix_SC24_05.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132249/SK-hynix_SC24_10.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products</p>
<p>&nbsp;</p>
<p>Among the other technologies being demonstrated is Niagara 2.0. The CXL pooled memory solution enables data sharing to minimize GPU memory shortages during AI inference<sup>7</sup>, making it ideal for LLMs. The company is also demonstrating an HBM with near-memory processing (NMP)<sup>8</sup> which accelerates indirect memory access<sup>9</sup>, a frequent occurrence in HPC. Developed with Los Alamos National Laboratory (LANL), the solution highlights the potential of NMP-enabled HBM to advance next-generation technologies.</p>
<p>Another demonstration is showcasing SK hynix’s updated OCS<sup>10</sup> solution, which offers significant improvements in analytical performance for real-world HPC workloads compared to the iteration <a href="https://news.skhynix.com/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/"><span style="text-decoration: underline;">displayed at SC23</span></a>. Co-developed with LANL, OCS addresses performance issues in traditional HPC systems by enabling storage to independently analyze data, reducing unnecessary data movement and improving resource efficiency. Additionally, the company is demonstrating a checkpoint offloading SSD<sup>11</sup> prototype that improves LLM training resource utilization by enhancing performance and scalability.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>7</sup><strong>AI inference</strong>: The process of using a trained AI model to analyze live data for predictions or task completions.<br />
<sup>8</sup><strong>Near-memory processing (NMP)</strong>: A technique that performs computations near data storage, reducing latency and boosting performance in high-bandwidth tasks like AI and HPC.<br />
<sup>9</sup><strong>Indirect memory access</strong>: An addressing method in which an instruction provides the address of a memory location that contains the actual address of the desired data or instruction.<br />
<sup>10</sup><strong>Object-based computational storage (OCS)</strong>: A storage architecture that integrates computation within the storage system, enabling local data processing and minimizing movement to enhance analytical efficiency.<br />
<sup>11</sup><strong>Checkpoint offloading SSD</strong>: A storage solution that stores intermediate data during AI training, improving efficiency and reducing training time.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132258/SK-hynix_SC24_11.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132316/SK-hynix_SC24_12.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132333/SK-hynix_SC24_13.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132349/SK-hynix_SC24_14.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products</p>
<p>&nbsp;</p>
<p>In addition to running product demonstrations, SK hynix is also displaying a robust lineup of data center solutions, including its industry-leading HBM3E<sup>12</sup>. The fifth-generation HBM provides high-speed data processing, optimal heat dissipation, and high capacity, making it essential for AI applications. Alongside HBM3E are the company’s rapid DDR5 RDIMM and MCR DIMM products, which are tailored for AI computing in high-performance servers. Enterprise SSDs (eSSDs) including the Gen 5 PS1010 and PEB110 are also on display. Offering ultra-fast read/write speeds, these SSD solutions are vital for accelerating AI training and inference in large-scale environments.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>12</sup><strong>HBM3E</strong>: The fifth-generation High Bandwidth Memory (HBM), a high-value, high-performance product that revolutionizes data processing speeds by connecting multiple DRAM chips with through-silicon via (TSV).</p>
<h3 class="tit">Highlighting the Potential of Memory Through Expert Presentations</h3>
<p>During the conference, Jongryool Kim, research director of AI System Infra, presented on “Memory &amp; Storage: The Power of HPC/AI,” highlighting the memory needs of HPC and AI systems. He focused on two key advancements: near-data processing technology using CXL, HBM, and SSDs to improve performance, and CXL pooled memory for better data sharing across systems.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Research Director Jongryool Kim presenting on advancements in memory and storage for HPC and AI systems" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132400/SK-hynix_SC24_15.png" alt="Research Director Jongryool Kim presenting on advancements in memory and storage for HPC and AI systems" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Technical Leader Jeoungahn Park delivering a presentation on OCS" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132409/SK-hynix_SC24_16.png" alt="Technical Leader Jeoungahn Park delivering a presentation on OCS" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">(From first image) Research Director Jongryool Kim presenting on advancements in memory and storage for HPC and AI systems; Technical Leader Jeoungahn Park delivering a presentation on OCS</p>
<p>&nbsp;</p>
<p>Technical Leader Jeoungahn Park of the Sustainable Computing team also took to the stage for his talk on “Leveraging Open Standardized OCS to Boost HPC Data Analytics.” Park explained how OCS enables storage to automatically recognize and analyze data, thereby accelerating data analysis in HPC. He also explained how OCS enhances resource efficiency and integrates seamlessly with existing analytics systems, and noted that its analysis performance has been verified in real-world HPC applications.</p>
<p>At SC24, SK hynix is solidifying its status as a pioneer in memory solutions that drive innovation in AI and HPC technologies. Looking ahead, the company will continue to push technological boundaries with support from its partners to shape the future of AI and HPC.</p>
<p>&nbsp;</p>
<p><a href="https://linkedin.com/showcase/skhynix-news-and-stories/" target="_blank" rel="noopener noreferrer"><img loading="lazy" decoding="async" class="size-full wp-image-15776 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10074354/SK-hynix_Newsroom-banner_1.png" alt="" width="800" height="135" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-680x115.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-768x130.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></a></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-innovative-ai-hpc-solutions-at-sc24/">SK hynix Presents Innovative AI & HPC Solutions at SC24</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Debuts at SC23 to Showcase Next-Gen AI &#038; HPC Solutions</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 20 Nov 2023 00:00:57 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[eSSD]]></category>
		<category><![CDATA[OCS]]></category>
		<category><![CDATA[High-performance computing]]></category>
		<category><![CDATA[Supercomputing 2023]]></category>
		<category><![CDATA[SC23]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[HBM3E]]></category>
		<category><![CDATA[CXL]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=13586</guid>

					<description><![CDATA[<p>SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23) held in Denver, Colorado between November 12–17. Organized by the Association for Computing Machinery and IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/">SK hynix Debuts at SC23 to Showcase Next-Gen AI & HPC Solutions</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23) held in Denver, Colorado between November 12–17. Organized by the Association for Computing Machinery and IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first appearance at the conference by introducing its groundbreaking memory solutions to the HPC community. During the six-day event, several SK hynix employees also made presentations revealing the impact of the company’s memory solutions on AI and HPC.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-13587 size-full" title="SK hynix’s exhibition booth at SC23" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01.png" alt="SK hynix’s exhibition booth at SC23" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054140/SK-hynix_SC23_01-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 1. SK hynix’s exhibition booth at SC23</p>
<p>&nbsp;</p>
<h3 class="tit">Displaying Advanced HPC &amp; AI Products</h3>
<p>At SC23, SK hynix showcased its products tailored for AI and HPC to underline its leadership in the AI memory field. Among these next-generation products, <a href="https://news.skhynix.com/sk-hynix-develops-worlds-best-performing-hbm3e/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">HBM3E</span></a> attracted attention as the HBM<sup>1</sup> solution meets the industry’s highest standards of speed, capacity, heat dissipation, and power efficiency. These capabilities make it particularly suitable for data-intensive AI server systems. HBM3E was presented alongside <a href="https://news.skhynix.com/sk-hynix-to-supply-industrys-first-hbm3-dram-to-nvidia/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">NVIDIA’s H100</span></a>, a high-performance GPU for AI that uses HBM3 for its memory.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>High Bandwidth Memory (HBM)</strong>: A high-value, high-performance product that possesses much higher data processing speeds compared to existing DRAMs by vertically connecting multiple DRAMs with through-silicon via (TSV).</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054402/SK-hynix_SC23_02.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054350/SK-hynix_SC23_03.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 2. HBM3E, an HBM solution applicable to data-intensive AI server systems</p>
<p>&nbsp;</p>
<p>SK hynix also held a demonstration of AiMX<sup>2</sup>, the company’s generative AI accelerator<sup>3</sup> card, which specializes in large language models (LLMs) using <a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">GDDR6-AiM</span></a> chips that leverage PIM<sup>4</sup> technology. This product is set to play a key role in the advancement of data-intensive generative AI inference systems as it significantly reduces the AI inference time of server systems compared to systems with GPUs, while also offering lower power consumption.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>2</sup><strong>Accelerator-in-Memory Based Accelerator (AiMX)</strong>: SK hynix’s accelerator card product that specializes in large language models (AI that learns with large amounts of text data such as ChatGPT) using GDDR6-AiM chips.<br />
<sup>3</sup><strong>Accelerator</strong>: A special-purpose hardware device that uses a chip designed specifically for processing and computing information.<br />
<sup>4</sup><strong>Processing-In-Memory (PIM)</strong>: A next-generation technology that adds computational capabilities to semiconductor memories to solve the problem of data movement congestion in AI and big data processing.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054340/SK-hynix_SC23_04.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054328/SK-hynix_SC23_05.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 3. SK hynix’s booth providing a demonstration of AiMX, an AI accelerator card</p>
<p>&nbsp;</p>
<p><a href="https://news.skhynix.com/sk-hynix-develops-ddr5-dram-cxltm-memory-to-expand-the-cxl-memory-ecosystem/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">CXL</span></a><sup>5</sup> was another highlight at SK hynix’s booth. Based on PCIe<sup>6</sup>, CXL is a standardized interface that helps increase the efficiency of HPC systems. Offering flexible memory expansion, CXL is a promising interface for HPC systems running AI and big data applications. In particular, SK hynix’s Niagara CXL disaggregated memory prototype platform was showcased as a pooled memory solution that can improve system performance in AI and big data distributed processing systems.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>Compute Express Link (CXL)</strong>: A PCIe-based next-generation interconnect protocol on which high-performance computing systems are based.<br />
<sup>6</sup><strong>Peripheral Component Interconnect Express (PCIe)</strong>: A high-speed input/output series interface used in the mainboard of digital devices. PCIe’s data-transfer speed doubles in accordance with a generation shift.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054317/SK-hynix_SC23_06.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054304/SK-hynix_SC23_07.png" alt="" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054253/SK-hynix_SC23_08.png" alt="" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 4. SK hynix’s CXL pooled memory solution, Niagara, and CXL-based computational memory solution (CMS)</p>
<p>&nbsp;</p>
<p>Additionally, SK hynix presented the results of its collaboration with Los Alamos National Laboratory (LANL) to improve the performance and reduce the energy requirements of HPC physics applications. Called the CXL-based <a href="https://news.skhynix.com/sk-hynix-introduces-industrys-first-cxl-based-cms-at-the-ocp-global-summit/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">computational memory solution (CMS)</span></a><sup>7</sup>, the product can accelerate indirect memory accesses while also significantly reducing data movement. Such technological enhancements are also applicable to various memory-intensive domains such as AI and graph analytics.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>7</sup><strong>Computational Memory Solution (CMS)</strong>: A memory solution that offers the functions of machine learning and data filtering, which are frequently performed by big data analytics applications. Just like CXL, CMS’s memory capacity is highly scalable.</p>
<p>Lastly, object-based computational storage (OCS) was shown as part of SK hynix’s efforts to develop an analytics ecosystem with multiple partners. It minimizes data movement between analytics application systems and storage, lightens the storage software stack, and accelerates data analysis. Through a demonstration, the company also showed how its interface technology enhances data processing capabilities in OCS.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13593 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054242/SK-hynix_SC23_09-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 5. Object-based computational storage (OCS) was developed as part of SK hynix’s efforts to form an analytics ecosystem</p>
<p>&nbsp;</p>
<h3 class="tit">Innovative Data Center &amp; eSSD Solutions</h3>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13592 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054230/SK-hynix_SC23_10-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 6. A presentation of data center solutions including MCR DIMM, PS1010, and PS1030</p>
<p>&nbsp;</p>
<p>SK hynix also displayed a range of its data center solutions at the conference, including its DDR5 Registered Dual In-line Memory Module (RDIMM). Built on 1bnm, the fifth generation of the 10nm-class process technology, the DDR5 RDIMM reaches speeds of up to 6,400 megabits per second (Mbps). The display also featured the DDR5 Multiplexer Combined Ranks (MCR) DIMM, which reaches speeds of up to 8,800 Mbps. With such rapid speeds, these DDR5 solutions are well suited for AI computing in high-performance servers.</p>
<p>Visitors to the SK hynix booth could also see its latest enterprise SSD (eSSD) products, including the PCIe Gen5-based PS1010 E3.S and PS1030. In particular, the PS1030 delivers the industry&#8217;s fastest sequential read speed of 14,800 megabytes per second (MBps), making it ideal for big data and machine learning applications.</p>
<h3 class="tit">Sharing the Potential of SK hynix’s Memory Solutions</h3>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13590 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054205/SK-hynix_SC23_11-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 7. Technical Leader Yongkee Kwon presents an LLM inference solution using SK hynix&#8217;s AiM</p>
<p>&nbsp;</p>
<p>During the conference, SK hynix employees also held presentations on the application of the company&#8217;s memory solutions for AI and HPC. On the fourth day of the conference, Technical Leader of PIM Hardware in Solution Advanced Technology Division Yongkee Kwon gave a talk titled &#8220;Cost-Effective Large Language Model (LLM) Inference Solution Using SK hynix&#8217;s AiM.&#8221; Kwon revealed how SK hynix&#8217;s AiM, a PIM device specialized for LLMs, can significantly improve the performance and energy efficiency of LLM inference. When applied to Meta&#8217;s Open Pre-trained Transformers (OPT) language model, an open-source alternative to OpenAI&#8217;s GPT-3, AiM can reach speeds up to ten times higher than state-of-the-art GPU systems while also offering lower costs and energy consumption.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13591 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054218/SK-hynix_SC23_12-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 8. Technical Leader Hokyoon Lee presents SK hynix&#8217;s CXL pooled memory solution, Niagara</p>
<p>&nbsp;</p>
<p>On the same day, Technical Leader of System Software in Memory Forest x&amp;D Hokyoon Lee held a presentation titled &#8220;CXL-based Memory Disaggregation for HPC and AI Workloads.&#8221; SK hynix&#8217;s Niagara addresses the issue of stranded memory—or unused memory in a server that cannot be utilized by other servers—with its elastic memory<sup>8</sup> feature. Additionally, Niagara&#8217;s memory sharing feature provides a solution to heavy network traffic in conventional distributed computing. In the session, the presenters demonstrated the effectiveness of memory sharing during a live simulation with the Ray distributed AI framework, which is used in ChatGPT.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>8</sup><strong>Elastic memory</strong>: Memory whose capacity can be adjusted as required by each server when multiple hosts share memory, such as pooled memory.</p>
<p>A day later, Director and Technical Leader of SOLAB in Memory Forest x&amp;D Jongryool Kim presented on “Accelerating Data Analytics Using Object Based Computational Storage in an HPC.” By introducing SK hynix’s collaboration with LANL in researching computational storage technologies, Kim proposed object-based computational storage (OCS) as a new computational storage platform for data analytics in HPC. Due to its high scalability and data-aware characteristics, OCS can perform analytics independently without help from compute nodes—highlighting its potential as the future of computational storage in HPC.</p>
<p>Through such efforts, SK hynix will continue to develop solutions that enable the advancement of AI and HPC as a leading global provider of AI memory.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13589 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13.png" alt="" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/11/17054155/SK-hynix_SC23_13-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 9. Technical Leader Jongryool Kim explains how SK hynix participated in collaborative research into computational storage technologies</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/">SK hynix Debuts at SC23 to Showcase Next-Gen AI & HPC Solutions</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
