<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>artificial intelligence - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Mon, 10 Feb 2025 13:36:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>artificial intelligence - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SK hynix Presents Innovative AI &#038; HPC Solutions at SC24</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-innovative-ai-hpc-solutions-at-sc24/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 21 Nov 2024 01:00:07 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[Supercomputing 2024]]></category>
		<category><![CDATA[SC24]]></category>
		<category><![CDATA[OCS]]></category>
		<category><![CDATA[High-performance computing]]></category>
		<category><![CDATA[AI Memory]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[HBM3E]]></category>
		<category><![CDATA[CXL]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[eSSD]]></category>
		<guid isPermaLink="false">https://skhynix-news-global-stg.mock.pe.kr/?p=16873</guid>

					<description><![CDATA[<p>SK hynix is showcasing its advanced memory solutions for AI and high-performance computing (HPC) at Supercomputing 2024 (SC24) in Atlanta, the U.S., held from November 17–22. This annual event, organized by the Association for Computing Machinery and the IEEE Computer Society since 1988, features the latest developments in HPC, networking, storage, and data analysis. SK [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-innovative-ai-hpc-solutions-at-sc24/">SK hynix Presents Innovative AI & HPC Solutions at SC24</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>SK hynix is showcasing its advanced memory solutions for AI and high-performance computing (HPC) at Supercomputing 2024 (SC24) in Atlanta, the U.S., held from November 17–22. This annual event, organized by the Association for Computing Machinery and the IEEE Computer Society since 1988, features the latest developments in HPC, networking, storage, and data analysis.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s booth at SC24" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132429/SK-hynix_SC24_01-1.png " alt="SK hynix’s booth at SC24" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s booth at SC24" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132439/SK-hynix_SC24_02.png" alt="SK hynix’s booth at SC24" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">SK hynix’s booth at SC24</p>
<p>&nbsp;</p>
<p>Returning for its second year, SK hynix is underlining its AI memory leadership through a display of innovative memory products and insightful presentations on AI and HPC technologies. In line with the conference’s “HPC Creates” theme, which underscores the impact of supercomputing across various industries, the company is showing how its memory solutions drive progress in diverse fields.</p>
<h3 class="tit">Showcasing Advanced Memory Solutions for AI &amp; HPC</h3>
<p>At the booth, SK hynix is demonstrating and displaying a range of groundbreaking products tailored for AI and HPC. The products being demonstrated include its CMM (CXL<sup>®1</sup> Memory Module)-DDR5<sup>2</sup>, AiMX<sup>3</sup> accelerator card, and Niagara 2.0, among others.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Compute Express Link<sup>®</sup> (CXL<sup>®</sup>)</strong>: A PCIe-based next-generation interconnect protocol on which high-performance computing systems are based.<br />
<sup>2</sup><strong>CXL Memory Module-DDR5 (CMM-DDR5)</strong>: A next-generation DDR5 memory module utilizing CXL technology to boost bandwidth and performance for AI, cloud, and high-performance computing.<br />
<sup>3</sup><strong>Accelerator-in-Memory Based Accelerator (AiMX)</strong>: SK hynix&#8217;s specialized accelerator card tailored for large language model processing using GDDR6-AiM chips.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Live demonstrations of CMM-DDR5 and AiMX at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132452/SK-hynix_SC24_03.png" alt="Live demonstrations of CMM-DDR5 and AiMX at the booth" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Live demonstrations of CMM-DDR5 and AiMX at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132507/SK-hynix_SC24_04.png" alt="Live demonstrations of CMM-DDR5 and AiMX at the booth" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Live demonstrations of CMM-DDR5 and AiMX at the booth" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132532/SK-hynix_SC24_06.png" alt="Live demonstrations of CMM-DDR5 and AiMX at the booth" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Live demonstrations of CMM-DDR5 and AiMX at the booth</p>
<p>&nbsp;</p>
<p>The live demonstration of CMM-DDR5 with a server platform featuring Intel<sup>®</sup> Xeon<sup>®</sup> 6 processors shows how CXL<sup>®</sup> memory technology accelerates AI workloads under various usage models. Moreover, visitors to the booth can learn about the latest CMM-DDR5 product with EDSFF<sup>4</sup>, which offers improvements in TCO<sup>5</sup> and performance. Another live demonstration features AiMX integrated into an ASRock Rack server to run Meta’s Llama 3 70B, a large language model (LLM) with 70 billion parameters. This demonstration highlights AiMX’s efficiency in processing large datasets while achieving high performance and low power consumption, addressing the computational load challenges posed by attention layers<sup>6</sup> in LLMs.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>4</sup><strong>Enterprise and Data Center Standard Form Factor (EDSFF)</strong>: A collection of SSD form factors specifically used for data center servers.<br />
<sup>5</sup><strong>Total cost of ownership (TCO)</strong>: The complete cost of acquiring, operating, and maintaining an asset, including purchase, energy, and maintenance expenses.<br />
<sup>6</sup><strong>Attention layer</strong>: A mechanism that enables a model to assess the relevance of input data, prioritizing more important information for processing.</p>
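To make footnote 6 concrete, the weighing of inputs by relevance can be sketched as scaled dot-product attention in plain Python. This is an illustrative toy example with invented vectors, not SK hynix or Llama code: each key is scored against the query, the scores become weights via softmax, and the values are averaged using those weights.

```python
import math

def softmax(xs):
    # Turn raw scores into positive weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Score each key by its dot product with the query, scaled by
    # sqrt(dimension), then average the value vectors by the weights.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Toy data: the query matches the first key more closely,
# so the output leans toward the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Because every token attends to every other token this way, the attention layers dominate the computational load of an LLM, which is the bottleneck the AiMX demonstration targets.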
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132207/SK-hynix_SC24_07.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132220/SK-hynix_SC24_08.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132234/SK-hynix_SC24_09.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132521/SK-hynix_SC24_05.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132249/SK-hynix_SC24_10.png" alt="SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">SK hynix’s Niagara 2.0, customized HBM, OCS, and SSD products</p>
<p>&nbsp;</p>
<p>Among the other technologies being demonstrated is Niagara 2.0. The CXL pooled memory solution enables data sharing to minimize GPU memory shortages during AI inference<sup>7</sup>, making it ideal for LLMs. The company is also demonstrating an HBM with near-memory processing (NMP)<sup>8</sup> which accelerates indirect memory access<sup>9</sup>, a frequent occurrence in HPC. Developed with Los Alamos National Laboratory (LANL), the solution highlights the potential of NMP-enabled HBM to advance next-generation technologies.</p>
<p>Another demonstration is showcasing SK hynix’s updated OCS<sup>10</sup> solution, which offers significant improvements in analytical performance for real-world HPC workloads compared to the iteration <a href="https://news.skhynix.com/sk-hynix-debuts-at-sc23-to-showcase-next-gen-ai-and-hpc-solutions/"><span style="text-decoration: underline;">displayed at SC23</span></a>. Co-developed with LANL, OCS addresses performance issues in traditional HPC systems by enabling storage to independently analyze data, reducing unnecessary data movement and improving resource efficiency. Additionally, the company is demonstrating a checkpoint offloading SSD<sup>11</sup> prototype that improves LLM training resource utilization by enhancing performance and scalability.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>7</sup><strong>AI inference</strong>: The process of using a trained AI model to analyze live data for predictions or task completions.<br />
<sup>8</sup><strong>Near-memory processing (NMP)</strong>: A technique that performs computations near data storage, reducing latency and boosting performance in high-bandwidth tasks like AI and HPC.<br />
<sup>9</sup><strong>Indirect memory access</strong>: An addressing method in which an instruction provides the address of a memory location that contains the actual address of the desired data or instruction.<br />
<sup>10</sup><strong>Object-based computational storage (OCS)</strong>: A storage architecture that integrates computation within the storage system, enabling local data processing and minimizing movement to enhance analytical efficiency.<br />
<sup>11</sup><strong>Checkpoint offloading SSD</strong>: A storage solution that stores intermediate data during AI training, improving efficiency and reducing training time.</p>
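The access pattern in footnote 9 can be sketched in a few lines (an illustrative example with made-up data, not the LANL or SK hynix implementation): the position to read comes from memory itself, so each element costs two dependent lookups, producing the irregular "gather" traffic that near-memory processing aims to speed up.

```python
# Direct access reads data[i] at a position known in advance.
# Indirect access must first read the position from an index
# array, then read the data — two dependent memory accesses.
data = [10.0, 20.0, 30.0, 40.0, 50.0]
index = [4, 0, 2]  # positions to visit, themselves stored in memory

gathered = [data[i] for i in index]  # the "gather" pattern common in sparse HPC workloads
```

Because the positions are unpredictable, caches and prefetchers help little here, which is why performing the gather close to the memory itself is attractive.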
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132258/SK-hynix_SC24_11.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132316/SK-hynix_SC24_12.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132333/SK-hynix_SC24_13.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132349/SK-hynix_SC24_14.png" alt="SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">SK hynix presented various data center solutions, including HBM3E, DDR5, and eSSD products</p>
<p>&nbsp;</p>
<p>In addition to running product demonstrations, SK hynix is also displaying a robust lineup of data center solutions, including its industry-leading HBM3E<sup>12</sup>. The fifth-generation HBM provides high-speed data processing, optimal heat dissipation, and high capacity, making it essential for AI applications. Alongside HBM3E are the company’s high-speed DDR5 RDIMM and MCR DIMM products, which are tailored for AI computing in high-performance servers. Enterprise SSDs (eSSDs) including the Gen 5 PS1010 and PEB110 are also on display. Offering ultra-fast read/write speeds, these SSD solutions are vital for accelerating AI training and inference in large-scale environments.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>12</sup><strong>HBM3E</strong>: The fifth-generation High Bandwidth Memory (HBM), a high-value, high-performance product that revolutionizes data processing speeds by connecting multiple DRAM chips with through-silicon via (TSV).</p>
<h3 class="tit">Highlighting the Potential of Memory Through Expert Presentations</h3>
<p>During the conference, Jongryool Kim, research director of AI System Infra, presented on “Memory &amp; Storage: The Power of HPC/AI,” highlighting the memory needs of HPC and AI systems. He focused on two key advancements: near-data processing technology using CXL, HBM, and SSDs to improve performance, and CXL pooled memory for better data sharing across systems.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Research Director Jongryool Kim presenting on advancements in memory and storage for HPC and AI systems" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132400/SK-hynix_SC24_15.png" alt="Research Director Jongryool Kim presenting on advancements in memory and storage for HPC and AI systems" width="1000" height="664" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Technical Leader Jeoungahn Park delivering a presentation on OCS" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10132409/SK-hynix_SC24_16.png" alt="Technical Leader Jeoungahn Park delivering a presentation on OCS" width="1000" height="664" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">(From first image) Research Director Jongryool Kim presenting on advancements in memory and storage for HPC and AI systems; Technical Leader Jeoungahn Park delivering a presentation on OCS</p>
<p>&nbsp;</p>
<p>Technical Leader Jeoungahn Park of the Sustainable Computing team also took to the stage for his talk on “Leveraging Open Standardized OCS to Boost HPC Data Analytics.” Park explained how OCS enables storage to automatically recognize and analyze data, thereby accelerating data analysis in HPC. He added that OCS enhances resource efficiency and integrates seamlessly with existing analytics systems, noting that its analysis performance has been verified in real-world HPC applications.</p>
<p>At SC24, SK hynix is solidifying its status as a pioneer in memory solutions that drive innovation in AI and HPC technologies. Looking ahead, the company will continue to push technological boundaries with support from its partners to shape the future of AI and HPC.</p>
<p>&nbsp;</p>
<p><a href="https://linkedin.com/showcase/skhynix-news-and-stories/" target="_blank" rel="noopener noreferrer"><img loading="lazy" decoding="async" class="size-full wp-image-15776 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10074354/SK-hynix_Newsroom-banner_1.png" alt="" width="800" height="135" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-680x115.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-768x130.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></a></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-innovative-ai-hpc-solutions-at-sc24/">SK hynix Presents Innovative AI & HPC Solutions at SC24</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>[All About AI] The Origins, Evolution &#038; Future of AI</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-the-origins-evolution-future-of-ai/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 14 Oct 2024 06:00:50 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[All About AI]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=15942</guid>

					<description><![CDATA[<p>AI has revolutionized people’s lives. For those who want to gain a deeper understanding of AI and use the technology, the SK hynix Newsroom has created the All About AI series. This first episode covers the historical evolution of AI and explains how it became integrated into today’s world. &#160; AI-powered robots that walk, talk, [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-the-origins-evolution-future-of-ai/">[All About AI] The Origins, Evolution & Future of AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<div style="border: none; background: #D9D9D9; height: auto; padding: 10px 20px; margin-bottom: 10px; color: #000;"><span style="color: #000000; font-size: 18px;">AI has revolutionized people’s lives. For those who want to gain a deeper understanding of AI and use the technology, the SK hynix Newsroom has created the All About AI series. This first episode covers the historical evolution of AI and explains how it became integrated into today’s world.</span></div>
<p>&nbsp;</p>
<p>AI-powered robots that walk, talk, and think like humans have long been a staple of sci-fi comics and movies. However, AI and robotics are no longer merely works of fiction—they have become a reality. Now that AI is here and transforming people’s lives, it is worth looking back at AI’s origins, the milestones that have shaped the technology’s evolution, and what the future might hold.</p>
<h3 class="tit">From the Turing Test to Machine Learning: AI’s Early Beginnings</h3>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15944 size-full" title="An overview of AI’s evolution through the decades from the 1950s to the 2020s" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01.png" alt="An overview of AI’s evolution through the decades from the 1950s to the 2020s" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 1. An overview of AI’s evolution through the decades from the 1950s to the 2020s</p>
<p>&nbsp;</p>
<p>The birth of AI can be traced back to the 1950s. In 1950, British mathematician Alan Turing proposed that machines could “think,” introducing what is now known as the “Turing test” to evaluate this capability. This is widely recognized as the first study to present the concept of AI. In 1956, the Dartmouth Summer Research Project on Artificial Intelligence formally introduced the term “AI” to the wider world. Held in the U.S. state of New Hampshire, the conference fueled further debates on whether machines could learn and evolve like humans.</p>
<p>During the same decade, the development of artificial neural network<sup>1</sup> models marked a significant milestone in computing history. In 1957, U.S. neuropsychologist Frank Rosenblatt introduced the “perceptron” model<sup>2</sup>, empirically demonstrating that computers can learn and recognize patterns. This practical application built on the “neural network theory” developed in 1943 by neurophysiologists Warren McCulloch and Walter Pitts, who conceptualized nerve cell interactions into a simple computational model. Despite these early breakthroughs raising high expectations, research in the field soon stagnated due to limitations in computing power, logical framework, and data availability.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Neural network</strong>: A machine learning program, or model, that makes decisions in a manner similar to the human brain. It creates an adaptive system to make decisions and learn from mistakes.<br />
<sup>2</sup><strong>Perceptron</strong>: The simplest form of a neural network. It is a model of a single neuron that can be used for binary classification problems, enabling it to determine whether an input belongs to one class or another.</p>
<p>Then in the 1980s, “expert systems” emerged, which operated solely on human-defined rules. These systems could make automated decisions to perform tasks such as diagnosis, categorization, and analysis in practical fields such as medicine, law, and retail. However, expert systems were limited by their reliance on rules set by humans and struggled to understand the complexities of the real world.</p>
<p>In the 1990s, AI evolved from following human commands to autonomously learning and discovering new rules by adopting machine learning algorithms. This became possible due to the advent of digital technology and the internet, which provided access to vast amounts of online data. At this point, AI was able to unearth new rules even humans could not discover. This period marked the start of renewed momentum for AI research, based on machine learning.</p>
<h3 class="tit">The Rise of Deep Learning: A Key Technology in AI’s Growth</h3>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15945 size-full" title="Timeline showing advances in artificial neural networks and deep learning" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02.png" alt="Timeline showing advances in artificial neural networks and deep learning" width="1000" height="596" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02-671x400.png 671w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02-768x458.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 2. Timeline showing advances in artificial neural networks and deep learning</p>
<p>&nbsp;</p>
<p>While the 1990s presented opportunities for AI to grow, the field’s evolution has had its share of setbacks. In 1969, early artificial neural network research hit a roadblock when it was discovered that the perceptron model could not solve nonlinear problems<sup>3</sup>, leading to a prolonged downturn in the field. However, computer scientist Geoffrey Hinton, often hailed as the “godfather of deep learning,” breathed new life into artificial neural network research with his groundbreaking ideas.</p>
<p>For example, in 1986, Hinton applied the backpropagation<sup>4</sup> algorithm to a “multilayer perceptron” model, essentially stacked layers of artificial neural networks, proving it could address the limitations of the initial perceptron model. This seemed to spark a revival in artificial neural network research, but as networks grew deeper, issues began to emerge in the learning process and its outcomes.</p>
<p>In 2006, Hinton introduced the “deep belief network (DBN),” which enhanced the performance of a multilayer perceptron, in his paper “A Fast Learning Algorithm for Deep Belief Nets.” By pre-training each layer through unsupervised learning<sup>5</sup> and then fine-tuning the entire network, the DBN significantly improved the speed and efficiency of neural network learning—which had previously been deemed an issue. This progress paved the way for future advancements in deep learning.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup>The initial perceptron model was a single-layer perceptron that could not solve nonlinear problems such as the XOR problem, which involves two input values; it outputs 0 if the two input values ​​are the same and 1 if they are different.<br />
<sup>4</sup><strong>Backpropagation</strong>: An algorithm used in neural networks to minimize errors by adjusting the weights. It works by calculating the difference between the predicted and actual values and then updating the weights in reverse order, starting from the output layer.<br />
<sup>5</sup><strong>Unsupervised Learning</strong>: A type of machine learning where the model is trained on input data without explicit labels or predefined outcomes. The goal is to discover and understand hidden structures and patterns within the data.</p>
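The limitation described in footnotes 2 and 3 is easy to demonstrate with a minimal sketch (illustrative code, not from the original research): a single-layer perceptron trained with the classic error-driven update rule learns the linearly separable AND function, but can never classify all four XOR cases, because no single line separates XOR's classes.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # Single-layer perceptron: weighted sum plus a step activation.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, int(x[0] and x[1])) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

and_model = train_perceptron(AND)
xor_model = train_perceptron(XOR)

and_correct = sum(and_model(*x) == y for x, y in AND)  # 4/4: AND is linearly separable
xor_correct = sum(xor_model(*x) == y for x, y in XOR)  # at most 3/4: no line separates XOR
```

Stacking perceptrons into a multilayer network removes this limitation, but training the stacked weights required the backpropagation algorithm of footnote 4, which is why Hinton's 1986 result revived the field.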
<p>In 2012, deep learning made a historic leap forward when Hinton’s team won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with their deep learning-based model, AlexNet. This triumph demonstrated deep learning’s immense power, with AlexNet recording an error rate of just 16.4%, well below the 25.8% of the previous year’s winner.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15946 size-full" title="An Overview of ILSVRC’s Image Recognition Error Rate by Year (Kien Nguyen, Arun Ross, Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective, IEEE Access, Sept. 2017 p.3)" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03.png" alt="An Overview of ILSVRC’s Image Recognition Error Rate by Year (Kien Nguyen, Arun Ross, Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective, IEEE Access, Sept. 2017 p.3)" width="1000" height="623" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03-642x400.png 642w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03-768x478.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 3. An Overview of ILSVRC’s Image Recognition Error Rate by Year (Kien Nguyen, Arun Ross, <em>Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective</em>, IEEE Access, Sept. 2017 p.3)</p>
<p>&nbsp;</p>
<p>Deep learning, a focal point of AI research, has grown rapidly since the 2010s for two primary reasons. First, advances in computer systems, including graphics processing units (GPUs), have driven AI development. Originally designed for graphics rendering, GPUs can execute large numbers of similar, repetitive operations in parallel, allowing them to process such workloads far faster than central processing units (CPUs). In the 2010s, general-purpose computing on GPUs (GPGPU) emerged, enabling GPUs to take on broader computational tasks beyond graphics and even replace CPUs in some instances. Because deep learning must perform iterative computations to extract features from large datasets, it benefits directly from this parallelism, and the use of GPUs for training artificial neural networks has further accelerated the development of deep learning.</p>
<p>Second, the expansion of data resources has fueled progress in deep learning. Training an artificial neural network requires vast amounts of data. In the past, data was primarily sourced from users manually inputting information into computers. However, the explosion of the internet and search engines in the 1990s exponentially increased the range of data available for processing. In the 2000s, the advent of technologies such as smartphones and the Internet of Things (IoT) contributed to the birth of the Big Data era, where real-time information flows from every corner of the globe. Deep learning algorithms use this large quantity of data for training, growing increasingly sophisticated. This data revolution has therefore set the stage for significant advancements in deep learning technology.</p>
<p style="text-align: center;"><iframe loading="lazy" src="https://www.youtube.com/embed/WXuK6gekU1Y?si=W_lg7iEWjh4bfGDb" width="810" height="455" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p class="source" style="text-align: center;">Figure 4. Google DeepMind’s <em>AlphaGo &#8211; The Movie</em> is a documentary film about the epic battle between AlphaGo and Lee Sedol that began on March 9, 2016</p>
<p>&nbsp;</p>
<p>In 2016, the evolution of AI reached a dramatic turning point with AlphaGo, an advanced AI program created by Google DeepMind to play the board game Go. This extraordinary AI program captivated the world when it defeated Go grandmaster Lee Sedol by an impressive 4-1 score. Combining deep learning with reinforcement learning<sup>6</sup> and Monte Carlo tree search (MCTS)<sup>7</sup> algorithms, AlphaGo learned to mimic human intuition, predict moves, and strategize through tens of thousands of self-played games. AlphaGo’s victory over a legendary human player signaled the beginning of a new AI era.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>6</sup><strong>Reinforcement Learning</strong>: A type of machine learning where an AI agent learns to make decisions by interacting with an environment. The agent receives rewards or penalties based on its actions and aims to maximize cumulative rewards over time by optimizing its strategy.<br />
<sup>7</sup><strong>Monte Carlo tree search (MCTS)</strong>: A search algorithm that combines tree search with repeated random simulations. It structures the possible actions of the current situation into a search tree and uses the outcomes of random playouts to estimate the pros and cons of each action, ultimately determining the optimal course of action.</p>
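<p>The “Monte Carlo” part of MCTS refers to estimating a quantity through repeated random sampling. As a minimal, hypothetical illustration of that principle (not AlphaGo’s implementation), the short Python sketch below estimates &#960; by sampling random points:</p>

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that lands inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))  # approaches 3.14159... as the sample count grows
```

<p>MCTS applies the same idea to games: instead of points in a square, it samples random playouts from each candidate move and uses the resulting win rates to decide which branch of the search tree to explore further.</p>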
<h3 class="tit">ChatGPT: The Catalyst for the Generative AI Boom</h3>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15947 size-full" title="Generative AI explained through key AI subsets" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04.png" alt="Generative AI explained through key AI subsets" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 5. Generative AI explained through key AI subsets</p>
<p>&nbsp;</p>
<p>At the close of 2022, humanity stood on the brink of a transformative leap with AI technology. OpenAI unveiled ChatGPT, powered by a type of LLM<sup>8</sup> known as generative pre-trained transformer (GPT) 3.5, marking the dawn of the generative AI era. Most notably, this leap propelled AI into the creative realm, a domain once considered uniquely human. Now, generative AI can produce high-quality content across diverse formats, moving beyond traditional deep learning, which merely predicts or classifies data. Instead, generative AI, using LLMs or various image generation models such as variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, creates original results tailored to user needs.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>8</sup><strong>Large language model (LLM)</strong>: Deep learning algorithms that perform a variety of natural language processing tasks by leveraging extensive data.</p>
<p>To provide clearer context for the evolution of generative AI, it is essential to examine its origins and key developments. The roots of generative AI trace back to 2014, when American researcher Ian Goodfellow introduced GANs. In a GAN, two neural networks engage in a continuous duel: a generator produces new, synthetic data, while a discriminator compares that data to the original dataset and tries to judge whether it is real. Through this iterative contest, the generator’s outputs become increasingly refined and sophisticated. Over time, researchers have enhanced and expanded upon this model, leading to its widespread use in applications such as image generation and transformation.</p>
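<p>The adversarial duel described above can be sketched in miniature. The hypothetical Python example below (a toy for illustration only, not a production GAN) pits a two-parameter generator against a logistic discriminator on one-dimensional data; over the training loop, the generator’s output drifts toward the real data distribution:</p>

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_toy_gan(steps=2000, batch=64, seed=0):
    """Toy 1-D GAN: the generator g(z) = a*z + b tries to mimic samples
    from N(4, 0.5); the discriminator D(x) = sigmoid(w*x + c) tries to
    tell real samples from generated ones."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0          # generator parameters
    w, c = 0.0, 0.0          # discriminator parameters
    lr_d, lr_g = 0.05, 0.02
    for _ in range(steps):
        real = rng.normal(4.0, 0.5, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b
        # --- discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
        ds_real = sigmoid(w * real + c) - 1.0   # dLoss/ds on real samples
        ds_fake = sigmoid(w * fake + c)         # dLoss/ds on fake samples
        w -= lr_d * (np.mean(ds_real * real) + np.mean(ds_fake * fake))
        c -= lr_d * (np.mean(ds_real) + np.mean(ds_fake))
        # --- generator step: push D(fake) -> 1 (non-saturating loss) ---
        ds = sigmoid(w * fake + c) - 1.0        # dLoss/ds for -log D(fake)
        a -= lr_g * np.mean(ds * w * z)
        b -= lr_g * np.mean(ds * w)
    return a, b

a, b = train_toy_gan()
print(round(b, 2))  # the generator's offset drifts toward the real mean of 4
```

<p>Real GANs replace these tiny linear models with deep neural networks, but the alternating “discriminate, then generate” loop is the same.</p>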
<p>In 2017, the natural language processing (NLP)<sup>9</sup> model architecture known as the “transformer” was introduced. This model treats the relationships between pieces of data as key variables: by directing more attention to certain information, transformers can learn complex patterns and relationships within data, capturing essential details to produce higher-quality results. This advancement transformed NLP tasks such as language comprehension, machine translation, and conversational systems, leading to the development of LLMs such as the aforementioned GPT.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>9</sup><strong>Natural language processing (NLP)</strong>: A subfield of AI that uses algorithms to analyze and process natural language data. By examining syntactic structures, semantic relationships, and contextual patterns, NLP systems can perform tasks such as language translation.</p>
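<p>The attention mechanism at the heart of the transformer can be illustrated in a few lines. The minimal Python sketch below (an illustration of scaled dot-product attention, not any particular production implementation) shows how each token scores its relationship to every other token and blends information accordingly:</p>

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key, the scores are normalized with a
    softmax, and the resulting weights blend the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional representations
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one attention-blended vector per token
```

<p>In a full transformer, Q, K, and V are learned projections of the token embeddings, and many such attention “heads” run in parallel.</p>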
<p>First released in 2018, GPT models have rapidly advanced in performance as their parameters and training data have expanded year after year. In 2022, OpenAI’s chatbot ChatGPT, powered by GPT-3.5, completely changed the paradigm of AI. With its exceptional ability to understand user context, deliver relevant responses, and handle diverse queries, ChatGPT quickly gained traction. <a href="https://www.statista.com/chart/29174/time-to-one-million-users/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">Within a week of its launch, it drew over 1 million users</span></a> and attracted <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">more than 100 million active users within two months</span></a>.</p>
<p>The rapid advancements in AI culminated in a major technological leap forward in 2023 with the launch of GPT-4 by OpenAI. This new model is built on a dataset roughly 500 times larger than that of GPT-3.5. GPT-4, now considered a Large Multimodal Model (LMM)<sup>10</sup>, can simultaneously process diverse formats of input data, including images, audio, and video, expanding far beyond its text-only predecessors. In 2024, OpenAI introduced GPT-4o, an enhanced model offering faster, more efficient processing of text, voice, and images. Capitalizing on the generative AI boom triggered by ChatGPT, companies have rolled out diverse services. For example, Google’s Gemini can simultaneously recognize and understand text, images, and audio; Meta’s SAM accurately identifies and isolates objects in images; and OpenAI’s Sora generates videos from text prompts.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>10</sup><strong>Large Multimodal Model (LMM)</strong>: A deep learning algorithm that can handle many types of data, including images, audio, and more, not just text.</p>
<p>The generative AI market is only beginning to unleash its potential. According to a <a href="https://www.idc.com/getdoc.jsp?containerId=prUS51572023" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">report from the global market research firm International Data Corporation (IDC)</span></a>, the market is set to be worth 40.1 billion USD in 2024—2.7 times larger than the previous year. Looking ahead, the market is expected to continue its growth each year and reach 151.1 billion USD by 2027. As generative AI evolves, its influence will extend beyond software to various formats including hardware and internet services. The world can expect a leap in capabilities and a push towards greater accessibility, making cutting-edge AI technology available to an ever-growing audience.</p>
<h3 class="tit">AI’s Impact on Revolutionizing Today and Redefining Tomorrow</h3>
<p>Just as Google search revolutionized the early 2000s and mobile social media reshaped the 2010s, AI is now driving transformative changes across society. The pace of this technological advancement is unprecedented, and the challenges and concerns of humanity are growing along with it.</p>
<p>So what is the “next generative AI”? The most notable technology around today is perhaps on-device AI. Unlike traditional AI, which relies on large cloud servers that exchange data with edge devices, on-device AI operates directly on electronic devices such as smartphones and PCs through integrated AI chipsets and smaller LLMs (sLLMs). This shift promises to enhance security, conserve resources, and deliver more personalized AI experiences.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15948 size-full" title="Cloud-based AI vs on-device AI structures" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05.png" alt="Cloud-based AI vs on-device AI structures" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 6. Cloud-based AI vs on-device AI structures</p>
<p>&nbsp;</p>
<p>AI will seamlessly integrate into an increasing number of devices, continuously evolving in form and function. Thus, innovations that once seemed like science fiction are becoming reality. For instance, in 2023, U.S. startup Humane launched the AI Pin, a wearable device with a laser-ink display that projects a menu onto the user’s palm. At CES 2024, Rabbit&#8217;s R1 and Brilliant Labs’ Frame showcased their own cutting-edge wearable AI technology. Meanwhile, mixed reality (MR) headsets, like Apple’s Vision Pro and Meta’s Quest, are pushing beyond traditional virtual reality (VR) and metaverse experiences, opening up new markets.</p>
<p>However, as technology races forward, it not only creates new opportunities but also brings about social challenges. The rapid rise of AI has sparked concerns about society’s ability to keep up with these advancements. In particular, AI’s potential for misuse and its impact on real-world issues have heightened these fears. Sophisticated AI-generated content, such as deepfake videos and manipulated images, can fuel fake news and disrupt society. Recently, concerns about fake content have intensified in many countries ahead of major elections, including the 2024 U.S. presidential election.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15949 size-full" title="Social anxiety and disruption due to deepfake technology portrayed by DALL-E, a generative AI platform" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06.png" alt="Social anxiety and disruption due to deepfake technology portrayed by DALL-E, a generative AI platform" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 7. Social anxiety and disruption due to deepfake technology portrayed by DALL-E, a generative AI platform</p>
<p>&nbsp;</p>
<p>There are also risks associated with the development and use of AI. Because generative AI models are trained on publicly available web content that is crawled and merged, concerns about plagiarism have arisen. Moreover, copyright disputes can arise when similar prompts fed to the same generative AI program produce similar content. The potential for AI to shift from enhancing productivity to replacing jobs and disrupting the labor market also presents a troubling prospect for some.</p>
<p>AI has created a world beyond human imagination. As this new world unfolds, it is crucial to prepare for the changes ahead. Addressing this new era calls for thoughtful planning and social discussion, both of which require a deep understanding of AI’s potential and implications. That understanding is what the All About AI series aims to provide.</p>
<p>&nbsp;</p>
<p><a href="https://linkedin.com/showcase/skhynix-news-and-stories/" target="_blank" rel="noopener noreferrer"><img loading="lazy" decoding="async" class="size-full wp-image-15776 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png" alt="" width="800" height="135" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-680x115.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-768x130.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></a></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-the-origins-evolution-future-of-ai/">[All About AI] The Origins, Evolution & Future of AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Presents Upgraded AiMX Solution at AI Hardware &#038; Edge AI Summit 2024</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-upgraded-aimx-solution-at-ai-hw-edge-ai-summit-2024/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Fri, 13 Sep 2024 06:00:11 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[AI Hardware & Edge AI Summit]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=15762</guid>

					<description><![CDATA[<p>A glimpse of SK hynix’s booth at the AI Hardware &#38; Edge AI Summit 2024 &#160; SK hynix unveiled an enhanced Accelerator-in-Memory based Accelerator (AiMX) card at the AI Hardware &#38; Edge AI Summit 2024 held September 9–12 in San Jose, California. Organized annually by Kisaco Research, the summit brings together representatives from the AI [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-upgraded-aimx-solution-at-ai-hw-edge-ai-summit-2024/">SK hynix Presents Upgraded AiMX Solution at AI Hardware & Edge AI Summit 2024</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15763 size-full" title="A glimpse of SK hynix’s booth at the AI Hardware &amp; Edge AI Summit 2024" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01.png" alt="A glimpse of SK hynix’s booth at the AI Hardware &amp; Edge AI Summit 2024" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084255/SK-hynix_AI-HW-Edge-AI-Summit_01-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">A glimpse of SK hynix’s booth at the AI Hardware &amp; Edge AI Summit 2024</p>
<p>&nbsp;</p>
<p>SK hynix unveiled an enhanced Accelerator-in-Memory based Accelerator (AiMX) card at the AI Hardware &amp; Edge AI Summit 2024 held September 9–12 in San Jose, California. Organized annually by Kisaco Research, the summit brings together representatives from the AI and machine learning ecosystem to share industry breakthroughs and developments. This year’s event focused on exploring cost and energy efficiency across the entire technology stack.</p>
<p>Marking its fourth appearance at the summit, SK hynix highlighted how its AiM<sup>1</sup> products can boost AI performance across data centers and edge devices<sup>2</sup>.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Accelerator in Memory (AiM)</strong>: SK hynix’s PIM semiconductor product name, which includes GDDR6-AiM.<br />
<sup>2</sup><strong>Edge device</strong>: Hardware that controls the flow of data at the boundary between two networks. While they fulfill numerous roles, edge devices essentially serve as the entry or exit point to a network.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15764 size-full" title="Attendees gather to learn more about the upgraded AiMX card" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02.png" alt="Attendees gather to learn more about the upgraded AiMX card" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084310/SK-hynix_AI-HW-Edge-AI-Summit_02-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Attendees gather to learn more about the upgraded AiMX card</p>
<p>&nbsp;</p>
<h3 class="tit">Booth Highlights: Meet the Upgraded AiMX</h3>
<p>In the AI era, high-performance memory products are vital for the smooth operation of LLMs<sup>3</sup>. However, as these LLMs are trained on increasingly larger datasets and continue to expand, there is a growing need for more efficient solutions. SK hynix addresses this demand with its PIM<sup>4</sup> product AiMX, an AI accelerator card that combines multiple GDDR6-AiMs to provide high bandwidth and outstanding energy efficiency.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup><strong>Large language model (LLM)</strong>: An advanced AI system trained on extensive datasets to understand and generate human-like language. LLMs enable applications like natural language processing and translation.<br />
<sup>4</sup><strong>Processing-In-Memory (PIM)</strong>: A next-generation technology that embeds processing capabilities within memory, minimizing data transfer between the processor and memory. This boosts efficiency and speed, especially for data-intensive tasks like LLMs, where quick data access and processing are essential.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15765 size-full" title="The 32 GB AiMX prototype card was shown publicly for the first time at the event" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03.png" alt="The 32 GB AiMX prototype card was shown publicly for the first time at the event" width="1000" height="666" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03-601x400.png 601w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03-768x511.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084325/SK-hynix_AI-HW-Edge-AI-Summit_03-900x600.png 900w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">The 32 GB AiMX prototype card was shown publicly for the first time at the event</p>
<p>&nbsp;</p>
<p>At the AI Hardware &amp; Edge AI Summit 2024, SK hynix presented its updated 32 GB AiMX prototype, which offers double the capacity of the original card featured at last year’s event. To highlight the new AiMX’s advanced processing capabilities in a multi-batch<sup>5</sup> environment, SK hynix held a demonstration of the prototype card with the Llama 3<sup>6</sup> 70B model, an open source LLM. In particular, the demonstration underlined AiMX’s ability to serve as a highly effective attention<sup>7</sup> accelerator in data centers.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>Multi-batch</strong>: A computer processing method in which the system groups together multiple tasks (batches) and processes them at once.<br />
<sup>6</sup><strong>Llama 3</strong>: An open source LLM developed by Meta, featuring pretrained and instruction-fine-tuned language models.<br />
<sup>7</sup><strong>Attention</strong>: Mechanisms which give LLMs context about text, lessening the model’s chance of misunderstandings and allowing it to generate more accurate and contextually relevant outputs.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084340/SK-hynix_AI-HW-Edge-AI-Summit_04.png" alt="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" width="1000" height="666" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/12084357/SK-hynix_AI-HW-Edge-AI-Summit_05.png" alt="The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities" width="1000" height="666" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">The upgraded AiMX was demonstrated with the Llama 3 70B model LLM to highlight its processing capabilities</p>
<p>&nbsp;</p>
<p>AiMX addresses the cost, performance, and power consumption challenges associated with LLMs not only in data centers but also in edge devices and on-device AI applications. For example, when applied to mobile on-device AI applications, AiMX improves LLM speed three-fold compared to mobile DRAM while maintaining the same power consumption.</p>
<h3 class="tit">Featured Presentation: Accelerating LLM Services from Data Centers to Edge Devices​</h3>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Euicheol Lim presenting on how the AiMX system accelerates LLM services" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13005139/SK-hynix_AI-HW-Edge-AI-Summit_06.png" alt="Euicheol Lim presenting on how the AiMX system accelerates LLM services" width="1000" height="666" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Euicheol Lim presenting on how the AiMX system accelerates LLM services" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13005151/SK-hynix_AI-HW-Edge-AI-Summit_07.png" alt="Euicheol Lim presenting on how the AiMX system accelerates LLM services" width="1000" height="666" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img loading="lazy" decoding="async" class="aligncenter wp-image-4330 size-full" style="width: 800px;" title="Euicheol Lim presenting on how the AiMX system accelerates LLM services" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13005202/SK-hynix_AI-HW-Edge-AI-Summit_08.png" alt="Euicheol Lim presenting on how the AiMX system accelerates LLM services" width="1000" height="666" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Euicheol Lim presenting on how the AiMX system accelerates LLM services</p>
<p>&nbsp;</p>
<p>On the final day of the summit, SK hynix gave a presentation detailing how AiMX is an optimal solution for accelerating LLM services in data centers and edge devices. Euicheol Lim, research fellow and head of the Solution Advanced Technology team, shared the company’s plans to develop AiM products for on-device AI based on mobile DRAM and revealed the future vision for AiM. In closing, Lim emphasized the importance of close collaboration with companies involved in developing and managing data centers and edge systems to further advance AiMX products.</p>
<h3 class="tit">Looking Ahead: SK hynix’s Vision for AiMX in the AI Era</h3>
<p>The AI Hardware &amp; Edge AI Summit 2024 provided a platform for SK hynix to demonstrate AiMX’s applications in LLMs across data centers and edge devices. As a low-power, high-speed memory solution able to handle large amounts of data, AiMX is set to play a key role in the advancement of LLMs and AI applications.</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-presents-upgraded-aimx-solution-at-ai-hw-edge-ai-summit-2024/">SK hynix Presents Upgraded AiMX Solution at AI Hardware & Edge AI Summit 2024</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card &#8216;AiMX&#8217; for Generative AI</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 18 Sep 2023 00:00:48 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[featured]]></category>
		<category><![CDATA[Generative AI accelerator]]></category>
		<category><![CDATA[AiMX]]></category>
		<category><![CDATA[AI Hardware & Edge AI Summit]]></category>
		<category><![CDATA[GDDR6-AiM]]></category>
		<category><![CDATA[PIM]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=12888</guid>

					<description><![CDATA[<p>SK hynix unveiled and demonstrated a prototype of AiMX1, a generative AI accelerator2 card based on GDDR6-AiM, at the AI Hardware &#38; Edge AI Summit 2023 held September 12–14 at the Santa Clara Marriott, California. 1Accelerator-in-Memory based Accelerator (AiMX): SK hynix&#8217;s accelerator card product that specializes in large language models (AI that learns with large [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/">SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card ‘AiMX’ for Generative AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>SK hynix unveiled and demonstrated a prototype of AiMX<sup>1</sup>, a generative AI accelerator<sup>2</sup> card based on GDDR6-AiM, at the AI Hardware &amp; Edge AI Summit 2023 held September 12–14 at the Santa Clara Marriott, California.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Accelerator-in-Memory based Accelerator (AiMX)</strong>: SK hynix&#8217;s accelerator card product that specializes in large language models (AI that learns with large amounts of text data such as ChatGPT) using GDDR6-AiM chips.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>2</sup><strong>Accelerator</strong>: A special-purpose hardware device that uses a chip designed specifically for processing and computing information.</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094216/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_01.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094227/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_02.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094237/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_03.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094247/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_04.png" alt="SK hynix's exhibition booth at the AI Hardware &amp; Edge AI Summit 2023" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 1. SK hynix&#8217;s exhibition booth at the AI Hardware &amp; Edge AI Summit 2023</p>
<p>&nbsp;</p>
<p>Hosted annually by the UK marketing firm Kisaco Research, the AI Hardware &amp; Edge AI Summit brings together global IT companies and high-profile startups to share their developments in artificial intelligence and machine learning. This is SK hynix’s third time participating in the summit.</p>
<p>At the event, under the slogan of &#8220;Boost Your AI: Discover the Power of PIM<sup>3</sup> with SK hynix&#8217;s AiM<sup>4</sup>,&#8221; the company showcased the GDDR6-AiM itself alongside the prototype of AiMX, an accelerator card that combines multiple <a href="https://news.skhynix.com/sk-hynix-develops-pim-next-generation-ai-accelerator/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">GDDR6-AiMs</span></a> to further enhance performance.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup><strong>Processing-In-Memory (PIM)</strong>: A next-generation technology that adds computational capabilities to semiconductor memories to solve the problem of data movement congestion in AI and big data processing.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>4</sup><strong>Accelerator in Memory (AiM)</strong>: SK hynix&#8217;s PIM semiconductor product name, which includes GDDR6-AiM.</p>
<p class="source" style="text-align: center;"><img loading="lazy" decoding="async" class="size-full wp-image-12913 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08.png" alt="The AiMX card utilizes multiple GDDR6-AiM chips for enhanced performance" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094325/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_08-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 2. The prototype AiMX card utilizes multiple GDDR6-AiM chips for enhanced performance</p>
<p>&nbsp;</p>
<p>As a low-power, high-speed memory solution capable of handling large amounts of data, AiMX is set to play a key role in the advancement of data-intensive generative AI<sup>5</sup> systems. The performance of generative AI improves as it is trained on more data, highlighting the need for high-performance products which can be applied to an array of generative AI systems.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>Generative AI</strong>: AI that learns from large amounts of data to actively generate results based on a user&#8217;s specific needs.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-12910" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05.png" alt="Demonstrating a large AI language model with AiMX that utilizes GDDR6-AiM" width="1000" height="670" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-597x400.png 597w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-768x515.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-900x604.png 900w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094257/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_05-400x269.png 400w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 3. Demonstrating a large AI language model with AiMX that utilizes GDDR6-AiM</p>
<p>&nbsp;</p>
<p>SK hynix also demonstrated Meta&#8217;s generative AI Open Pretrained Transformer (OPT) 13B model on a server system equipped with the AiMX prototype. The AiMX system featuring GDDR6-AiM chips cuts data processing time to less than one-tenth that of GPU-based systems, while consuming one-fifth the power. The demonstration piqued the interest of global companies providing AI services by showing that AiMX can deliver higher performance<sup>6</sup> than the most recent accelerators.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>6</sup>Performance is based on the condition that the AiM Control Hub inside the AiMX card is developed as an application-specific integrated circuit (ASIC).</p>
<p><!-- swiper start --></p>
<div class="swiper-container">
<div class="swiper-wrapper">
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094308/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_06.png" alt="Eui-cheol Lim, vice president of SK hynix’s Solution Development division, delivers a presentation on AiMX" /></p>
</div>
<div class="swiper-slide">
<p class="img_area"><img decoding="async" class="size-full wp-image-4330 aligncenter" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2023/09/14094319/SK-hynix_AI-Hardware-Edge-AI-Summit-2023_07.png" alt="Eui-cheol Lim, vice president of SK hynix’s Solution Development division, delivers a presentation on AiMX" /></p>
</div>
</div>
<div class="swiper-button-next"></div>
<div class="swiper-button-prev"></div>
<div class="swiper-pagination"></div>
</div>
<p class="source" style="text-align: center;">Figure 4. Eui-cheol Lim, vice president of SK hynix’s Solution Development division, delivers a presentation on AiMX</p>
<p>&nbsp;</p>
<p>In addition, the company held a session outlining the benefits of AiMX. In a presentation titled &#8220;Cost-Effective Generative AI Inference Acceleration using AiM,&#8221; Eui-cheol Lim, vice president of the Solution Development division, compared the performance of GPUs and AiMX and discussed the future of next-generation intelligent semiconductor memories.</p>
<p>&#8220;SK hynix&#8217;s AiMX is a solution that delivers higher performance while consuming less power, and costing less than conventional GPUs,&#8221; Lim explained. &#8220;We will continue to develop memory technologies that will lead the way in the era of artificial intelligence.&#8221;</p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/sk-hynix-debuts-first-gddr6-aim-accelerator-card-aimx-for-generative-ai/">SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card ‘AiMX’ for Generative AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Beyond GPS: Exploring Positioning Technology through Artificial Intelligence</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/beyond-gps-exploring-positioning-technology-through-artificial-intelligence/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 25 Feb 2021 08:00:53 +0000</pubDate>
				<category><![CDATA[Opinion]]></category>
		<category><![CDATA[Prof.Moon]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[positioning]]></category>
		<category><![CDATA[inertial navigation]]></category>
		<category><![CDATA[FAMU-FSU]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=6507</guid>

					<description><![CDATA[<p>Positioning technology in our world Navigation has become a quintessential part in our daily lives. Smartphones double as car navigation devices, smartwatches can be a hiking trail guide and so on. But how does a device really know where we are? The most common technology is the GPS, the Global Positioning System. Hundreds of kilometers [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/beyond-gps-exploring-positioning-technology-through-artificial-intelligence/">Beyond GPS: Exploring Positioning Technology through Artificial Intelligence</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<h3 class="tit">Positioning technology in our world</h3>
<p>Navigation has become a quintessential part of our daily lives. Smartphones double as car navigation devices, smartwatches can be a hiking trail guide, and so on. But how does a device really know where we are? The most common technology is GPS, the Global Positioning System.</p>
<p>Roughly 20,000 kilometers above our heads, GPS satellites are orbiting the Earth and radiating electromagnetic (EM) signals. By detecting the minute differences in arrival times of those EM signals, a GPS device can pinpoint where someone is standing on the planet. And while the technology is essentially free and requires no subscription, it does require a device that can read GPS signals.</p>
<p>The core of this technology is the satellites themselves, something external that we cannot control. Without the satellites, GPS technologies are of little value. Line of sight to the satellites (even though we cannot see them with our eyes) is critical to the technology’s functionality, which is why GPS navigation frequently fails in tunnels, parking garages, mountainous regions with lush forestry and tall trees, or in crowded cities with skyscrapers. GPS signals can also be attacked and jammed by a third party. When it works, however, GPS provides a relatively accurate result. Overlaying the current ‘position’ estimated by the GPS on top of a map creates the base for a navigator. The remaining job is to perform the ‘positioning’ frequently and update the display.</p>
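<p>The arrival-time idea above can be sketched numerically. Below is a minimal 2-D illustration, assuming perfectly synchronized clocks and exact range measurements (real GPS works in 3-D with pseudoranges and must also solve for the receiver's clock bias); all anchor positions and distances are invented for the example.</p>

```python
import numpy as np

# Known "satellite" (anchor) positions in a 2-D plane, in meters.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_pos = np.array([30.0, 40.0])

# Measured ranges: the distance each signal travelled (clock bias ignored).
ranges = np.linalg.norm(anchors - true_pos, axis=1)

# Subtracting the first range equation from the others cancels the
# quadratic terms, leaving a linear system A @ p = b in the position p.
x0, y0 = anchors[0]
r0 = ranges[0]
A, b = [], []
for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
    A.append([2 * (xi - x0), 2 * (yi - y0)])
    b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(pos)  # → [30. 40.]
```

<p>With noisy ranges and more anchors, the same least-squares step averages the errors out, which is essentially what a receiver does when more than the minimum number of satellites is visible.</p>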
<p>But does ‘positioning’ generally need a continuous, external aid such as GPS satellites? For humans, the answer is no because we don’t rely on external EM waves to have a morning jog around the neighborhood. Even for a previously unvisited area, if we are armed with a static map – whether in our heads based on previous experiences or a physical paper map – we can position ourselves correctly in that map and navigate to a friend’s new home, for example. Our eyes can perceive how fast we are moving, how far we are from reference points, or how close we are to a decision point such as a turn, landmark or destination. Our body’s ‘positioning’ system is completely integrated and self-sufficient and doesn’t require a continuous aid from an external resource.</p>
<p>Newer electronic devices, such as cars and robot vacuum cleaners, mimic this vision-based approach and even utilize light spectra invisible to human eyes, such as infrared, laser and RF waves, for a better ‘visualization’ of the environment. The downside of a vision system, however, is that we need to collect and interpret the data from vision sensors. Inferring the direction and speed of a motion from vision data is not a trivial task. It is a huge computational load that requires powerful processors, as well as large data storage and memory. It also demands high power and energy consumption. Together, that all adds up to a more expensive system.</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031148/210224_02.jpg" alt="" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031148/210224_02.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<h3 class="tit">Simple approach to location tracking</h3>
<p>Is there a simpler approach for positioning without an extreme amount of computation? In theory, we can use one of the most ubiquitous sensors we have around us – an accelerometer. As a motion-based sensor, an accelerometer requires an almost negligible computational load to determine a position in principle, compared to a vision-based approach. At the same time, an accelerometer is extremely inexpensive. The theoretical operating principle behind it is also intuitive and straightforward.<br />
By definition, acceleration is the change in velocity over a time duration (a=Δv/Δt) and velocity is the change in position over a time duration (v=Δs/Δt). Merging these two relationships and generalizing it for a nonlinear movement, acceleration can be related to the position:</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="size-full wp-image-4330" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24034845/123.jpg" alt="" align="center" /></p>
<p>This simple relationship tells us that the double-time differentiation of the position must be the acceleration. By holding the positional data over time, we can take the double differentiation and accurately determine the acceleration during that trip, obtaining ‘a’ from ‘s.’<br />
Since this is a mathematical formula, we also can determine ‘s’ from ‘a’ by reversing the calculation. In this instance, we would need a double integration:</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="size-full wp-image-4330" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24034844/2345.jpg" alt="" align="center" /></p>
<p>In theory, this indicates that we can perform the double integration to obtain the position if we hold the acceleration data over time, such as the data points reported from the accelerometer. This, however, is where the immense challenge of ‘self-sufficient’ inertial navigation arises. To better understand, let’s go back to a lesson from the early years of college. Differentiation shrinks the expression inside and eliminates constants, while integration grows the expression inside and generates a constant. ‘Double’ differentiation will eliminate up to linear terms, whereas double integration will grow the contents at a tremendously faster rate.</p>
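<p>To make the double integration concrete, here is a minimal numerical sketch: sample the analytic acceleration of a known motion s(t) = sin(t), integrate it twice with a cumulative trapezoidal rule, and confirm the position comes back. Note that the true initial velocity and position must be supplied from outside; integration alone cannot recover them. The trajectory and step size are arbitrary choices for illustration.</p>

```python
import numpy as np

def cumtrapz(y, dt, y0=0.0):
    """Cumulative trapezoidal integral of uniformly sampled y, from y0."""
    out = np.empty_like(y)
    out[0] = y0
    out[1:] = y0 + np.cumsum((y[1:] + y[:-1]) * 0.5 * dt)
    return out

dt = 0.001
t = np.arange(0.0, 10.0, dt)
a = -np.sin(t)                  # acceleration of s(t) = sin(t)

v = cumtrapz(a, dt, y0=1.0)     # v(0) = cos(0) = 1, known initial velocity
s = cumtrapz(v, dt, y0=0.0)     # s(0) = sin(0) = 0, known initial position

err = np.max(np.abs(s - np.sin(t)))
print(err)                      # tiny, because the samples are noise-free
```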
<p>The challenge in this technology stems from this double integration and the unavoidable tiny errors in acceleration data samples. For a hypothetical, slight error in position measurement, differentiation would diminish the effect of the error over time, and even more so for the double differentiation (i.e., from s to a). On the other hand, a slight error in acceleration measurement would grow with an integration, and even bigger and quicker with the double integration (i.e., from a to s). For example, a quantization error, a mechanical bias in an accelerometer, a miscalibration, and even undetectable defects that are within manufacturing tolerances always exist in the captured acceleration data.</p>
<p>If this acceleration goes through double integration for positioning purposes, these tiny errors are all double-integrated, without bound. If we take this approach, a static object on your desk will have a moving trajectory as soon as the double integration starts. If we watch longer – that is, if the integration time gets longer – then the object will continuously accelerate away from you, three-dimensionally. Given enough time, the double integration will report that the object has arrived at the Moon. This ‘drift’ due to the error integrated over time is a nightmarish problem for self-sufficient inertial navigation, known as ‘Dead-Reckoning.’</p>
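<p>The runaway drift is easy to reproduce. The sketch below double-integrates the readings of a hypothetical accelerometer that sits perfectly still but reports a constant bias of 0.05 m/s² (a value invented for illustration): analytically, the spurious position is 0.5 × bias × t², or 90 m after just one minute.</p>

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 60.0, dt)    # one minute of samples
a = np.full_like(t, 0.05)       # true acceleration is exactly zero;
                                # the sensor reports only its bias

v = np.cumsum(a) * dt           # first integration: velocity
s = np.cumsum(v) * dt           # second integration: position

print(s[-1])                    # → about 90 (meters of spurious travel)
```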
<h3 class="tit">How to reduce errors with inertial navigation</h3>
<p>There have been efforts to limit how much error can be produced in each sampling, such as object-dependent physical limitations (e.g., humans cannot move farther than a certain distance per step) and determination of possible motion ranges (e.g., multiple inertial measurement units (IMUs) that include accelerometers, gyroscopes, and magnetometers can be placed at multiple locations on the moving object to detect and rule out impossible motions). These are effective to a certain degree, as the error is at least bounded by the set ‘rules,’ which keep it from accelerating away from you at an astronomical speed. Yet the problem of the snowballing error in integration remains and fundamentally impedes ‘accurate’ positioning (e.g., of a still object).</p>
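<p>One such rule can be sketched directly on the biased-sensor example. A zero-velocity update (one common instance of the motion-limiting rules described above, though not named in this article) resets the velocity whenever the object is known or assumed to be at rest. The once-per-second schedule and the bias value below are arbitrary assumptions.</p>

```python
import numpy as np

dt, bias = 0.01, 0.05
t = np.arange(0.0, 60.0, dt)
a = np.full_like(t, bias)       # still object, biased accelerometer

v = s = 0.0
positions = []
for k, ak in enumerate(a):
    v += ak * dt                # first integration
    s += v * dt                 # second integration
    if k % 100 == 99:           # rule: object declared at rest once a second
        v = 0.0                 # zero-velocity update
    positions.append(s)

print(positions[-1])            # → about 1.5, versus ~90 without the rule
```

<p>The error still grows, but only linearly with the number of resets rather than quadratically with total time, which matches the point above: the rules bound the error without eliminating it.</p>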
<p>In order to completely suppress the error, the traditional approach has been to invest in more hardware, especially non-motion-based sensors such as vision and laser. However, with the involvement of other error-bounding sensors, the benefits of inertial navigation solely with motion-based sensors – low computational complexity, cheaper construction, low power consumption and so on – dissipate. This has limited the usage of inertial navigation systems mostly to spacecraft and aircraft applications that can afford such requirements for a short period of time, keeping the inaccuracy of the positional estimate under certain levels. An inertial navigation system was used, for example, in the Apollo spacecraft and has also been used to supplement flight automation and navigation systems in Boeing 747s and US military aircraft.</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031153/210224_03.jpg" alt="" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031153/210224_03.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p>Under practical applications, the integration of the acceleration data is routinely carried out in a Kalman filter, where extra sensor outputs such as those of a gyroscope or magnetometer can further enhance the performance of the positional estimation. When the ‘prediction,’ or ‘estimation,’ is highly nonlinear in the input, as with our double integration, an Extended Kalman filter (EKF) is used. The “error” or “noise” characteristics are included in the EKF system and treated as a natural input to the system. The noise characteristics are modeled with utmost precision to eliminate (or accurately account for) their effects during the double integration – again, in principle. However, the aforementioned ‘tiny’ noises in measurements – such as a quantization error, a mechanical bias in an accelerometer, miscalibration, and even undetectable defects that are within manufacturing tolerances – can change dynamically during and between sensor operations, rendering the precise modeling of such error sources a nearly impossible task.</p>
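<p>The EKF pipeline described above cannot be reproduced in a few lines, but its linear special case conveys the idea. The sketch below runs a 1-D Kalman filter whose prediction step performs the double integration of a biased, noisy accelerometer, and whose correction step ingests a once-per-second noisy position fix standing in for an error-bounding sensor. Every number (bias, noise covariances, fix schedule) is invented for illustration.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps, bias = 0.01, 6000, 0.05      # one simulated minute, still object

F = np.array([[1.0, dt], [0.0, 1.0]])   # transition of state [pos, vel]
B = np.array([0.5 * dt**2, dt])         # acceleration enters as control input
H = np.array([[1.0, 0.0]])              # the fix observes position only
Q = np.diag([1e-6, 1e-3])               # process noise absorbing IMU error
R = np.array([[4.0]])                   # fix noise, standard deviation 2 m

x, P = np.zeros(2), np.eye(2)
for k in range(steps):
    a = bias + rng.normal(0.0, 0.02)    # biased, noisy accelerometer sample
    x = F @ x + B * a                   # predict: the double integration
    P = F @ P @ F.T + Q
    if k % 100 == 99:                   # once per second: position fix at 0 m
        z = np.array([rng.normal(0.0, 2.0)])
        y = z - H @ x                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
```

<p>With the periodic fixes, the position estimate stays bounded instead of drifting without limit; the nonlinear, multi-sensor EKF of a real system plays the same role with far richer models.</p>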
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031156/210224_04.jpg" alt="" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031156/210224_04.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031201/210224_05.jpg" alt="" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031201/210224_05.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<h3 class="tit">AI-assisted Dead-Reckoning</h3>
<p>With the recent, strong emergence of Artificial Intelligence (AI) technology and deep neural networks, a great opportunity has surfaced for automatically learning the ‘noise parameters’ and relevant customizations that benefit IMU-based self-sufficient inertial navigation. Figure 1 shows the traditional approach with the IMU measurements, noise models, and the EKF, whereas Figure 2 illustrates the trending approach, which drops the manual noise modeling and fully utilizes a machine learning-based engine for automatic noise characterization. Figure 3, excerpted from the “AI-IMU Dead-Reckoning” report<sup>1</sup>, shows a promising result for automotive applications.</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031205/210224_06.jpg" alt="" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031205/210224_06.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p>For verification purposes, the solid black curve, denoted as “GPS,” is provided as the ground truth. The blue curve, denoted as “IMU,” is the result of the direct double integration of the acceleration. As expected, it suffers from the diverging integration error and veers off from the ground truth in the initial stages. The dashed green curve, denoted as “AI Engine,” is the result with the EKF system, aided by the AI Engine producing the adaptive noise parameters. The AI approach is surprisingly effective and accurate, compared to the ground truth using GPS. An interesting aspect of this plot is that the GPS actually malfunctioned during this trip – denoted in the figure as “GPS outage.” The “ground truth,” in fact, was not the real truth, as it could not report an accurate position during the outage section. Meanwhile, the AI-enhanced, IMU-only dead reckoning presented the precise location during the GPS outage. In fact, this AI-based dead reckoning is even comparable in performance to powerful LiDAR- and vision-based approaches, whose physical size, power consumption, and cost are unbearably high compared to the AI-fueled dead-reckoning method.</p>
<p>Please note that Figure 3 is only a 2-D implementation of a potentially full 3-axis travel, made at a vehicular level with the relevant units of km, km/h, and meters (resolution). Human-level and human-scale navigation would pose different requirements for the AI-engine design and sensor capabilities, especially regarding the minimal resolution of the positional estimation. These variations are being actively investigated in academia.</p>
<p><!-- upload image at the specified size --></p>
<p class="img_area"><img decoding="async" class="alignnone size-full wp-image-4330" style="width: 800px;" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031143/210224_07.jpg" alt="" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24031143/210224_07.jpg" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<h3 class="tit">Impact of positioning technology</h3>
<p>The impact and applicability of this technology are immense. Compared to existing navigation technologies, it will enable navigation that is self-sufficient, extremely low-power, tremendously economical, and independent of environmental factors such as weather, electromagnetic interference, trees, buildings and line of sight. Autonomous moving objects, such as vehicles, robots or bikes, can feature an accurate, self-standing navigation technology on top of other navigation aids at a negligible additional cost. Indoor navigation for humans, pets, carts, and other objects will also have endless combinations of use cases. Likewise, it can retrofit easily into existing devices, as the location estimation engine is purely at the software level. Already, we have accelerometers and gyroscopes all around us in abundance – via our smartphones. There might even be a cool (or fun) app soon that utilizes this incredible advancement in technology.<br />
AI software technologies are currently conquering some of the most challenging engineering problems in unexpected ways. Recent advancements in semiconductor technologies have enabled such innovations, providing explosively increasing computational power and memory capacity at lower costs. Hardware manufacturers, including SK hynix, will stay extremely busy keeping up with the never-ending appetite of AI software for critical hardware: large data storage for the massive amounts of training data for deep neural networks, such as SSDs built with NAND Flash; high-speed, high-capacity memories such as DRAM (DDR4, DDR5, HBM2E, GDDR6, etc.); and fast processors.</p>
<p><!-- footnote style --></p>
<div style="border-top: 1px solid #e0e0e0;"></div>
<p>&nbsp;</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup>M. Brossard, A. Barrau and S. Bonnabel, &#8220;AI-IMU Dead-Reckoning,&#8221; in IEEE Transactions on Intelligent Vehicles, vol. 5, no. 4, pp. 585-595, Dec. 2020, doi: 10.1109/TIV.2020.2980758.</p>
<p><!-- //footnote style --></p>
<p><!-- namecard --></p>
<div class="namecard">
<p><img decoding="async" class="alignnone size-full wp-image-3446" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2021/02/24033111/namecard_Jung_Il_Park-1.png" alt="" /></p>
<div class="name">
<p class="tit">By <strong>Jinyeong Moon, Ph.D.</strong></p>
<p><span class="sub">Assistant Professor<br />
Electrical &amp; Computer Engineering<br />
FAMU-FSU College of Engineering<br />
</span></p>
</div>
</div>
<p><!-- //contributed-article style --></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/beyond-gps-exploring-positioning-technology-through-artificial-intelligence/">Beyond GPS: Exploring Positioning Technology through Artificial Intelligence</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
