Ultra-high-performance DRAM to be used for AI

Samples of the newest 5th generation High Bandwidth Memory modules from SK hynix.

SK hynix has successfully developed the world’s first 5th generation High Bandwidth Memory (HBM), setting a new benchmark for high-performance DRAM.

On Aug. 21, SK hynix announced, “We have successfully developed ultra-high-performance DRAM for Artificial Intelligence (AI), dubbed the 5th generation HBM (HBM3E).” The company added, “We have started supplying samples to clients for validation.” This makes HBM3E the industry’s first 5th generation HBM to be supplied for client verification.

SK hynix’s HBM3E is capable of processing over 1.15 terabytes (TB) of data per second, which equates to processing data from more than 230 full-HD movies in a single second. Heat dissipation performance has improved by approximately 10% compared to its predecessor.
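As a rough sanity check on that comparison, here is a minimal back-of-envelope sketch; the roughly 5 GB size assumed for a full-HD movie file is our assumption, not a figure from the article.

```python
# Back-of-envelope check of the claim above: 1.15 TB/s vs. full-HD movies.
# Assumption (not from the article): one full-HD movie file is ~5 GB.

BANDWIDTH_TB_PER_S = 1.15      # HBM3E bandwidth cited in the article
MOVIE_SIZE_GB = 5.0            # assumed full-HD movie file size

bandwidth_gb_per_s = BANDWIDTH_TB_PER_S * 1000  # decimal units: 1 TB = 1000 GB
movies_per_second = bandwidth_gb_per_s / MOVIE_SIZE_GB

print(f"{movies_per_second:.0f} full-HD movies per second")  # prints: 230
```

At that assumed file size, the arithmetic lands on the article’s figure of 230 movies per second.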

Following its 4th generation HBM3, SK hynix has again beaten Samsung Electronics to the title of “world’s first,” this time in the 5th generation development race. Analysts attribute the result to nearly a decade of consistent investment in research and development (R&D) since the company first developed HBM in 2013.

SK hynix emphasized, “Based on our exclusive experience of mass-producing HBM3, we successfully developed the extended version with the world’s best performance, HBM3E. Given our extensive HBM supply experience and production maturity, we aim to consolidate our unparalleled position in the AI memory market by initiating mass production of HBM3E in the first half of next year.”

The 5th generation HBM3E, unveiled by SK hynix on Aug. 21, is touted as a crucial component for the latest AI accelerators. AI accelerators are semiconductor packages specialized for the large-scale data training and inference that generative AI requires. They maximize data processing performance through advanced packaging that places multiple vertically stacked HBMs alongside Graphics Processing Units (GPUs) and Central Processing Units (CPUs). Notably, NVIDIA’s H100 AI accelerator, used in OpenAI’s ChatGPT service, incorporates SK hynix’s HBM3 DRAM.

HBM3E is likely to be featured in NVIDIA’s next-generation GH200 AI accelerator, set to launch in the second half of next year. SK hynix is known to have already sent HBM3E samples to NVIDIA for performance testing. At an event on Aug. 8, NVIDIA CEO Jensen Huang said the company would “begin mass production of the ‘GH200 Grace Hopper Superchip’ equipped with HBM3E next year.”

SK hynix achieved the roughly 10% improvement in heat dissipation by applying its Advanced MR-MUF (Mass Reflow Molded Underfill) technology to HBM3E. Because an HBM stacks several DRAM chips vertically, packaging technology that dissipates heat efficiently is vital. The MR-MUF process injects a liquid protective material between the chips and solidifies it, giving it a cooling advantage over the traditional method of laying a film material on each chip. Additionally, HBM3E is backward compatible, so it can be adopted without any design or structural changes in systems built around HBM3.

Samsung Electronics, feeling the competitive pressure, plans to unveil its own 5th generation HBM product, “HBM3P,” in the latter half of this year to rival SK hynix. The product is likely to be named “Snowbolt.” Samsung also plans to produce a 6th generation HBM next year.

Copyright © BusinessKorea. Unauthorized reproduction and redistribution prohibited.