HBM Expansion Race

An HBM3 chip produced by SK hynix

Since December of last year, the global frenzy over ChatGPT and generative AI has rapidly increased demand for High Bandwidth Memory (HBM), a high-performance DRAM capable of processing large-scale data.

According to semiconductor industry sources on July 31, domestic memory chipmakers Samsung Electronics and SK hynix are pushing to expand their dedicated HBM lines. HBM, often called "high-performance DRAM," offers more than ten times the data capacity and speed of conventional DRAM. Memory makers complete an HBM product by vertically stacking DRAM dies and connecting them electrically through an advanced packaging process that drills fine holes, known as through-silicon vias (TSVs), through the chips.

The two companies plan to invest more than 2 trillion won by the end of next year to more than double the production capacity of their HBM lines. SK hynix plans to use spare space at its Cheongju plant in addition to its existing HBM production base in Icheon. Samsung Electronics is considering expanding its core HBM line in Cheonan, South Chungcheong Province, where the Advanced Packaging team under the Device Solutions Division is located.

Even though the DRAM market has yet to recover, Samsung and SK hynix are pursuing these trillion-won investments because HBM demand is rising so quickly. According to market research firm TrendForce, HBM demand measured in gigabytes (GB) is projected to surge 60%, from 181 million GB in 2022 to 290 million GB in 2023, and to grow a further 30% in 2024.
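
For reference, a quick back-of-the-envelope check of those TrendForce figures (the implied 2024 total is an extrapolation from the reported growth rates, not a number from the report):

```python
# Back-of-the-envelope check of the TrendForce HBM demand figures cited above.
demand_2022_gb = 181_000_000  # 181 million GB shipped in 2022
demand_2023_gb = 290_000_000  # 290 million GB projected for 2023

growth_2023 = demand_2023_gb / demand_2022_gb - 1
print(f"2022 -> 2023 growth: {growth_2023:.0%}")  # ~60%, matching the report

# A further 30% rise in 2024 would imply roughly:
demand_2024_gb = demand_2023_gb * 1.30
print(f"Implied 2024 demand: {demand_2024_gb / 1e6:.0f} million GB")  # ~377 million GB
```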

Demand is climbing because the generative AI market that ChatGPT represents is growing. The core of generative AI is improving the accuracy of answers by training on large-scale data, and graphics processing units (GPUs) have become the chips of choice for processing that data in parallel at high speed. DRAM, which stores the data the GPU processes, is just as essential. The latest HBM product, HBM3, is known to offer about 12 times the total capacity and about 13 times the bandwidth of GDDR6, the latest conventional graphics DRAM. It is akin to replacing a single-lane road (conventional DRAM) with a 13-lane highway (HBM), letting traffic, in this case data, flow far more smoothly.
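
The "about 13 times" bandwidth figure follows largely from interface width: an HBM3 stack exposes a 1,024-bit bus, versus 32 bits for a single GDDR6 chip. A minimal sketch, assuming commonly published per-pin data rates (6.4 Gb/s for HBM3, 16 Gb/s for GDDR6; these speeds are assumptions, not figures from the article):

```python
# Rough sketch of where the "about 13x" bandwidth comparison comes from.
def bandwidth_gb_s(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gb/s) / 8."""
    return bus_width_bits * pin_speed_gbps / 8

hbm3_per_stack = bandwidth_gb_s(bus_width_bits=1024, pin_speed_gbps=6.4)  # ~819 GB/s
gddr6_per_chip = bandwidth_gb_s(bus_width_bits=32, pin_speed_gbps=16.0)   # ~64 GB/s

print(f"HBM3 stack: {hbm3_per_stack:.0f} GB/s")
print(f"GDDR6 chip: {gddr6_per_chip:.0f} GB/s")
print(f"Ratio: {hbm3_per_stack / gddr6_per_chip:.1f}x")  # ~12.8x, i.e. "about 13 times"
```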

Customers and HBM manufacturers typically spend more than a year discussing product development and then produce customized parts. The HBM2E and HBM3 that Nvidia recently received from SK hynix were likewise developed jointly one to two years ago. "Major companies developing their own generative AI are also asking Samsung and SK hynix to produce HBM with them," an industry official in the semiconductor sector said.

HBM is thawing the frozen memory semiconductor market. Although Samsung and SK hynix have not disclosed exact prices, the latest fourth-generation product, HBM3, is known to sell for about five to six times the price of the latest conventional DRAM. That is why HBM accounts for 11% of DRAM sales this year even though it makes up only 1.7% of shipment volume.
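
A toy calculation shows how such a small volume share can translate into a much larger sales share (pricing conventional DRAM at a baseline of 1 is a simplifying assumption):

```python
# Sanity check on the volume-vs-revenue gap: if HBM is 1.7% of units shipped
# but sells at 5-6x the price of conventional DRAM, what revenue share follows?
volume_share = 0.017

for price_multiple in (5, 6):
    hbm_revenue = volume_share * price_multiple
    dram_revenue = 1 - volume_share  # conventional DRAM at a baseline price of 1
    revenue_share = hbm_revenue / (hbm_revenue + dram_revenue)
    print(f"{price_multiple}x price -> {revenue_share:.1%} of revenue")
# Prints roughly 8-9%, the same ballpark as the reported 11% sales ratio;
# the remaining gap presumably reflects product mix beyond this toy model.
```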

HBM is also lifting memory makers' earnings. At its second-quarter earnings briefing on July 26, SK hynix said the average selling price (ASP) of its DRAM rose by a high single-digit percentage from the previous quarter, attributing the increase to strong sales of high-value-added products.

However, industry watchers point out that for HBM to become a true "cash cow," manufacturers must raise yields further and cut production costs. "The more DRAM dies you stack, the lower the yield and the higher the cost," said a professor of semiconductor studies at a university in the Seoul metropolitan area, adding that "they must also overcome the limitation of generative AI servers being virtually the only source of demand."
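
The yield concern compounds with stack height: if each added die and its bonding step succeed independently with some probability, overall stack yield falls geometrically. A simple illustrative model (the 99% per-layer yield below is a hypothetical figure, not industry data):

```python
# Illustrative model of the professor's point: if each stacked die (including
# its TSV bonding step) survives with probability y, an n-high stack yields y**n.
per_layer_yield = 0.99  # hypothetical per-layer success rate, for illustration

for layers in (4, 8, 12):
    stack_yield = per_layer_yield ** layers
    print(f"{layers}-high stack: {stack_yield:.1%} yield")
# 4-high: ~96.1%, 8-high: ~92.3%, 12-high: ~88.6% -- yield (and thus cost per
# good unit) worsens as more DRAM dies are stacked.
```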

Copyright © BusinessKorea. Unauthorized reproduction and redistribution prohibited.