<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>HBM2 - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/hbm2/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Tue, 05 Dec 2023 12:40:31 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>HBM2 - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Continuing to Make HBM History: The Story of SK hynix’s HBM Development</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/the-story-of-sk-hynixs-hbm-development/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Thu, 08 Sep 2022 00:00:17 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[DRAM]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[HBM]]></category>
		<category><![CDATA[HBM2E]]></category>
		<category><![CDATA[HBM2]]></category>
		<category><![CDATA[HBM3]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=9772</guid>

					<description><![CDATA[<p>When SK hynix became the first in the industry to develop HBM3, its latest HBM (High Bandwidth Memory) product, the company not only took its place in the record books but also firmly maintained its DRAM market leadership. SK hynix announced HBM3’s development in October 2021, with the company beginning to mass produce the product [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/the-story-of-sk-hynixs-hbm-development/">Continuing to Make HBM History: The Story of SK hynix’s HBM Development</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>When SK hynix became the first in the industry to develop HBM3, its latest HBM (High Bandwidth Memory) product, the company not only took its place in the record books but also firmly maintained its DRAM market leadership.</p>
<p><a href="https://news.skhynix.com/sk-hynix-announces-development-of-hbm3-dram/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">SK hynix announced HBM3’s development in October 2021</span></a>, with the company beginning to mass produce the product in June 2022. SK hynix will also <a href="https://news.skhynix.com/sk-hynix-to-supply-industrys-first-hbm3-dram-to-nvidia/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">provide HBM3 for NVIDIA systems that are expected to begin shipping in the third quarter of 2022</span></a>.</p>
<p>How did the company maintain its leadership position in this market, and what lessons did it implement from developing the previous generations of HBM products?</p>
<h3 class="tit"><img loading="lazy" decoding="async" class="size-full wp-image-9774 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/09/06045833/History-of-HBM-developmentThumbnail.png" alt="" width="680" height="400" /></h3>
<h3 class="tit">HBM1 &#8211; First Mover from the Get-go</h3>
<p>When HBM was developed as a memory solution optimized for high-performance computing (HPC), it offered a new paradigm for solving the memory bottleneck, as it aimed to increase capacity and bandwidth simultaneously. SK hynix jointly developed the world’s first TSV (Through Silicon Via) HBM product with AMD in 2014. The two companies <a href="http://www.koreaherald.com/view.php?ud=20131219000838" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">also teamed up to develop high-bandwidth 3-D stacked memory technologies and related products</span></a>.</p>
<p>HBM1’s <a href="https://news.skhynix.com/diversification-of-dram-application-and-memory-hierarchy/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">data transfer rate is around 1,600 Mbps, the VDD (supply voltage) is 1.2V, and the die density is 2Gb (4-hi)</span></a>. The product had a <a href="https://web.archive.org/web/20150424141343/http:/www.setphaserstostun.org/hc26/HC26-11-day1-epub/HC26.11-3-Technology-epub/HC26.11.310-HBM-Bandwidth-Kim-Hynix-Hot%20Chips%20HBM%202014%20v7.pdf" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">higher bandwidth than the DDR4 and GDDR5 products</span></a>, while using less power in a substantially smaller form factor, benefiting bandwidth-hungry processors such as GPUs (graphics processing units).</p>
<p>&nbsp;</p>
<h3 class="tit">HBM2 &#8211; Second Generation Home Improvements</h3>
<p>In the second-generation HBM2, released in 2018, a key improvement was the Pseudo Channel mode. This mode divides a channel into two individual 64-bit I/O sub-channels and provides 128-bit prefetch per memory read and write access for each sub-channel. The mode optimizes memory accesses and lowers latency, resulting in higher effective bandwidth.</p>
<p>Other improvements included lane remapping modes for both hard and soft repairs of lanes, as well as anti-overheating protection. These newer technologies, alongside HBM2’s higher effective bandwidth, give it higher energy efficiency than HBM1 at its data rate.</p>
<p>&nbsp;</p>
<h3 class="tit">HBM2E – Round Three Game Changer</h3>
<p>SK hynix was also the first memory vendor to begin mass producing <a href="https://product.skhynix.com/products/dram/hbm/hbm2e.go" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">HBM2E</span></a>, an extended version of HBM2. The <a href="https://news.skhynix.com/behind-the-scenes-story-ofhbm2e-the-fastest-dram-in-history/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">HBM2E development team’s determination to raise the product’s specifications at the planning stage</span></a> played a critical role in helping SK hynix to maintain its leadership position. The product was released two years after HBM2 in 2020, with technological updates and more applications, as well as a faster speed and higher capacity than HBM2.</p>
<p>The <a href="https://news.skhynix.com/sk-hynix-starts-mass-production-of-high-speed-dram-hbm2e/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">product’s 16Gb die density was double that of HBM2</span></a>, and vertically stacking eight of these dies via TSV technology gave each package a 16GB capacity. At the time of its release, HBM2E was <a href="https://product.skhynix.com/products/dram/hbm/hbm2e.go" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">the industry&#8217;s fastest memory, with a 3.6Gbps I/O speed, processing 460GB of data per second across 1,024 I/Os</span></a>. HBM2E also has 36% better heat dissipation than HBM2.</p>
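<p>These headline figures follow directly from the per-pin speed, the I/O count, and the die stack. A quick sanity-check sketch, assuming the conventional 1 GB = 10^9 bytes used for bandwidth figures:</p>

```python
# Sanity-check the HBM2E headline numbers: 1,024 I/Os at 3.6Gbps per pin,
# and eight 16Gb dies per stack. Assumes 1 GB = 10^9 bytes.

io_count = 1024            # data I/Os per HBM2E stack
pin_speed_gbps = 3.6       # per-pin I/O speed

bandwidth_gb_s = io_count * pin_speed_gbps / 8   # bits -> bytes
print(f"Per-stack bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~460.8 GB/s

dies_per_stack = 8
die_density_gbit = 16
capacity_gb = dies_per_stack * die_density_gbit / 8       # Gbit -> GB
print(f"Per-stack capacity: {capacity_gb:.0f} GB")        # 16 GB
```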
<p>&nbsp;</p>
<h3 class="tit">HBM3 &#8211; Maintaining Leadership into the Fourth Generation</h3>
<p>SK hynix continued maintaining its leadership status with <a href="https://product.skhynix.com/products/dram/hbm/hbm3.go" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">HBM3</span></a>, which was the world’s first of its kind when the company developed it in 2021. HBM3 offers 1.5 times HBM2E’s capacity by stacking 12 DRAM dies within the same total package height, enabling it to power capacity-intensive applications such as AI and HPC.</p>
<p>A significant addition in HBM3 compared to previous generations is a <a href="https://product.skhynix.com/products/dram/hbm/hbm3.go" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">custom-designed on-die ECC (Error Correcting Code)</span></a>, which uses pre-allocated parity bits to check and correct errors in the data received. The code also allows DRAM to self-correct errors within cells, enhancing device reliability.</p>
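<p>To illustrate how pre-allocated parity bits can both detect and correct a flipped bit, here is a toy single-error-correcting Hamming(7,4) code. It is purely illustrative: SK hynix&#8217;s on-die ECC is custom-designed and its internals are not published, so this sketch only demonstrates the general principle of parity-based correction.</p>

```python
# Toy Hamming(7,4) single-error-correcting code: three parity bits, each
# computed over a subset of the four data bits, let the decoder locate and
# flip a single corrupted bit. Illustrative only; not SK hynix's actual ECC.

def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def decode(c):
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit; 0 = clean
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]   # recover the data bits

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                      # simulate a single flipped cell
assert decode(codeword) == data       # the parity bits locate and fix it
```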
<p>HBM3’s <a href="https://news.skhynix.com/sk-hynix-at-nvidia-gtc-2022-demonstrating-the-worlds-fastest-dram-hbm3/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">16-channel architecture runs at 6.4Gbps</span></a>, nearly double HBM2E’s speed and currently the fastest in the world. This makes HBM3 and other HBM products an indispensable component of digital life; for example, HBM is becoming a prerequisite for Levels 4 and 5 of driving automation in autonomous vehicles.</p>
<p>SK hynix’s HBM development is also an important component of the company’s ESG efforts, with each generation of the product designed to consume less power than the previous one. For example, HBM3 runs at lower temperatures than HBM2E at the same operating voltage, enhancing the stability of the server system environment and easing its cooling burden.</p>
<p>&nbsp;</p>
<p>After celebrating its HBM3 achievements, the development team has already moved on to the next step, cooperating with clients and receiving feedback on the product. Predictions are also already out for HBM4, which could be more widely used in areas such as high-performance data centers, supercomputers, and artificial intelligence.</p>
<p>The HBM market also continues its steady growth, with the volume of data transmission increasing rapidly in the 5G era and a 2021 report by Omdia <a href="https://omdia.tech.informa.com/-/media/tech/omdia/brochures/ai/dram-for-graphics-ai-report---2021.aspx?rev=9025f25d809d48aca5fbec79b67d6850&amp;hash=9928D22EEF52761FBB7272C7899187A6" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">predicting that the market will generate $2.5 billion in revenue by 2025</span></a>. SK hynix is looking to secure its leadership in the market by continuing to take its HBM products to the next level and maintaining its position as not only a “first mover”, but also a “solution provider”.</p>
<p>&nbsp;</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-9775 aligncenter" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/09/06045919/History-of-HBM-development.png" alt="" width="1000" height="1913" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/09/06045919/History-of-HBM-development.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/09/06045919/History-of-HBM-development-209x400.png 209w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/09/06045919/History-of-HBM-development-768x1469.png 768w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2022/09/06045919/History-of-HBM-development-535x1024.png 535w" sizes="(max-width: 1000px) 100vw, 1000px" /></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/the-story-of-sk-hynixs-hbm-development/">Continuing to Make HBM History: The Story of SK hynix’s HBM Development</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Creating New Values in DRAM Using Through-Silicon-Via Technology for Continued Scaling in Memory System Performance and Capacity</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/creating-new-values-in-dram-using-through-silicon-via-technology-for-continued-scaling-in-memory-system-performance-and-capacity/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 20 Nov 2019 01:25:22 +0000</pubDate>
				<category><![CDATA[Opinion]]></category>
		<category><![CDATA[HBM]]></category>
		<category><![CDATA[HBM2E]]></category>
		<category><![CDATA[HBM2]]></category>
		<category><![CDATA[TSV]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=3954</guid>

					<description><![CDATA[<p>With the recent rapid growth and vast expansion of artificial intelligence (AI), machine learning, high- performance computing, graphics, and network applications, demand for memory for higher performance has been growing more than ever before. However, traditional main memory DRAM alone has not been sufficient to satisfy such system requirements. The demand for higher capacity, on [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/creating-new-values-in-dram-using-through-silicon-via-technology-for-continued-scaling-in-memory-system-performance-and-capacity/">Creating New Values in DRAM Using Through-Silicon-Via Technology for Continued Scaling in Memory System Performance and Capacity</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>With the recent rapid growth and vast expansion of artificial intelligence (AI), machine learning, high-performance computing, graphics, and network applications, demand for higher-performance memory has been growing more than ever before. However, traditional main memory DRAM alone has not been sufficient to satisfy such system requirements. The demand for higher capacity, on the other hand, is primarily being driven by server applications in data centers. The capacity of the memory subsystem has traditionally been scaled out by increasing the number of memory channels per socket and adopting higher-density DRAM Dual-Inline-Memory-Modules (DIMMs). However, even with the state-of-the-art 16Gb (Gigabit) DDR4 DRAMs, system memory capacity can become insufficient for certain applications such as in-memory databases.</p>
<p>Through-Silicon-Via (TSV) in memories has emerged as an efficient foundational technology for capacity expansion and bandwidth extension. It is a technology where vias are perforated through the entire silicon wafer thickness in order to form thousands of vertical interconnections from the front to the back side of the die and vice versa. In its earlier days, TSV was regarded merely as a packaging technology, simply replacing wire bonding. But over the years, it has become an essential tool to scale the performance and density of DRAMs. Today, there are two main use cases in the DRAM industry where TSVs have been successfully productized to overcome capacity and bandwidth scaling limitations: 3D-TSV DRAM and High-Bandwidth-Memory (HBM).</p>
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3984" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085950/3D_TSV_DRAM_and_HBM.png" alt="" width="800" height="504" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085950/3D_TSV_DRAM_and_HBM.png 800w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085950/3D_TSV_DRAM_and_HBM-635x400.png 635w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085950/3D_TSV_DRAM_and_HBM-768x484.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085950/3D_TSV_DRAM_and_HBM.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p>High-density memories such as 128 and 256GB DIMMs (16Gb-based 2-rank DIMMs with 2-high and 4-high x4 DRAMs) are also adopting 3D-TSV DRAMs in addition to traditional Dual-Die-Packages (DDP) with wire-bonded die stacks. In 3D-TSV DRAMs, 2 or 4 DRAM dies are stacked on top of each other, and only the bottommost die is connected externally to the memory controller. The remaining dies are interconnected internally through many TSVs, providing Input/Output (I/O) load isolation. Compared to DDP structures, this architecture allows higher pin speeds through decoupling of I/O loadings, and lower power consumption by eliminating unnecessary duplication of circuit components across the stacked dies.</p>
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3983" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085944/3D_TSV_DRAM_and_Dual-Die_Package.png" alt="" width="800" height="504" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085944/3D_TSV_DRAM_and_Dual-Die_Package.png 800w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085944/3D_TSV_DRAM_and_Dual-Die_Package-635x400.png 635w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085944/3D_TSV_DRAM_and_Dual-Die_Package-768x484.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085944/3D_TSV_DRAM_and_Dual-Die_Package.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p>On the other hand, HBM has been primarily created to bridge the bandwidth gap between the high bandwidth demands of a system-on-a-chip (SoC) and the maximum bandwidth that main memories can supply. For example, in AI applications, bandwidth demands per SoC, particularly in training, can exceed a few TB/s, which cannot be satisfied with conventional main memories. A single main memory channel with 3200Mbps DDR4 DIMMs can provide only 25.6GB/s of bandwidth. Even the most advanced CPU platforms with 8 memory channels can provide only 204.8GB/s. On the other hand, 4 HBM2 stacks around a single SoC can provide &gt;1TB/s of bandwidth, bridging the gap. Depending on the application, HBM can be used either stand-alone, as a cache, or as the first tier in a two-tier memory.</p>
<p>HBM is an in-package memory, integrated with an SoC through a silicon interposer inside the same package. This allows it to overcome the limitation on the maximum number of data I/O package pins that would otherwise exist in conventional off-chip packages. HBM2, which has already been deployed in actual products, consists of 4- or 8-high stacks of 8Gb dies and 1024 data pins running at 1.6~2.4Gbps each. This results in 4 or 8GB of density and 204~307GB/s of bandwidth per HBM stack.</p>
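<p>The bandwidth comparison above reduces to pin counts multiplied by per-pin data rates. A quick sketch of the arithmetic, assuming a standard 64-bit-wide DDR4 channel and 1 GB = 10^9 bytes:</p>

```python
# Reproduce the bandwidth figures quoted above.
ddr4_pin_rate_gbps = 3.2     # 3200 Mbps per pin
channel_width_bits = 64      # one standard DDR4 channel

channel_bw = ddr4_pin_rate_gbps * channel_width_bits / 8     # GB/s
print(f"One DDR4-3200 channel:  {channel_bw:.1f} GB/s")      # 25.6
print(f"8-channel CPU platform: {8 * channel_bw:.1f} GB/s")  # 204.8

hbm2_pins = 1024
for rate in (1.6, 2.4):      # per-pin speed range of HBM2
    print(f"HBM2 stack @ {rate} Gbps/pin: {hbm2_pins * rate / 8:.1f} GB/s")
# 204.8 ~ 307.2 GB/s per stack; four stacks at the top speed:
print(f"4 HBM2 stacks: {4 * hbm2_pins * 2.4 / 8 / 1000:.2f} TB/s")  # >1 TB/s
```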
<p class="img_area"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3982" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085938/SoC-HBM_System-in-Package_SiP.png" alt="" width="800" height="504" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085938/SoC-HBM_System-in-Package_SiP.png 800w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085938/SoC-HBM_System-in-Package_SiP-635x400.png 635w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085938/SoC-HBM_System-in-Package_SiP-768x484.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></p>
<p class="download_img"><a class="-as-download -as-ga" href="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/11/20085938/SoC-HBM_System-in-Package_SiP.png" target="_blank" rel="noopener noreferrer" download="" data-ga-category="sk-hynix-newsroom" data-ga-action="download" data-ga-label="download_image">Image Download</a></p>
<p>SK hynix has always been committed to maintaining industry leadership in both HBM and high-density 3D-TSV DRAM products. Recently, SK hynix announced the successful development of the HBM2E device, an extended version of HBM2, which reaches a higher density of up to 16GB and a bandwidth of 460GB/s per stack. This was made possible by increasing the DRAM die density to 16Gb and achieving a 3.6Gbps per-pin speed over 1024 data I/Os at a 1.2V supply voltage. SK hynix is also in the process of expanding its line-up of 128~256GB 3D-TSV DIMMs to satisfy the needs of its customers for higher-density DIMMs.</p>
<p>TSV technology has now matured to the point of enabling state-of-the-art products such as HBM2E with thousands of TSVs. In the future, however, decreasing the TSV pitch<sup>1</sup>, diameter, and aspect ratio<sup>2</sup>, as well as the die thickness, while still maintaining high assembly yields will become more challenging, and essential for continued device performance and capacity scaling. Such improvements will allow decreased TSV loading, a reduced share of die area taken up by TSVs, and stacks extended beyond 12-high while still maintaining the same total physical stack height. SK hynix will continue to be dedicated to positioning itself at the forefront of memory technology leadership through endless innovations in TSV products and technology.</p>
<p>&nbsp;</p>
<div style="border-top: 1px solid #e0e0e0;"></div>
<p>&nbsp;</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup>The distance between two vias<br />
<sup>2</sup>The ratio of the height to the diameter of the TSV</p>
<p>&nbsp;</p>
<div class="namecard">
<p><img decoding="async" class="alignnone size-full wp-image-3446" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/20085209/namecard_011.png" alt="" /></p>
<div class="name">
<p class="tit">By <strong>Uksong Kang</strong></p>
<p><span class="sub">Vice President and Head of DRAM Product Planning at SK hynix</span></p>
</div>
</div><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/creating-new-values-in-dram-using-through-silicon-via-technology-for-continued-scaling-in-memory-system-performance-and-capacity/">Creating New Values in DRAM Using Through-Silicon-Via Technology for Continued Scaling in Memory System Performance and Capacity</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
