Welcome Revitalization

A Samsung combination high-bandwidth memory (HBM) and processing-in-memory (PIM) chip

The semiconductor industry’s race to develop next-generation memory solutions, including high-bandwidth memory (HBM), processing-in-memory (PIM), and compute express link (CXL), has two front runners: Samsung Electronics and SK hynix. In particular, as demand for HBM3 has grown faster than expected on the back of generative artificial intelligence (AI), the stagnant memory semiconductor market has been revitalized.

According to industry sources on Aug. 13, Samsung Electronics unveiled a series of high-performance and high-capacity memory solution technologies including HBM, HBM-PIM, CXL DRAM, and CXL-processing near memory (PNM) at international memory semiconductor events such as Memcon and Flash Memory Summit this year.

Samsung Electronics, which led the second and third generations of HBM, failed to anticipate the rapid growth of the AI market and lost the race to become the world’s first mass-producer of HBM3 (the fourth generation) to SK hynix. In response, Samsung Electronics has aggressively entered a race to develop and mass-produce next-generation DRAM to regain its leadership in AI semiconductors.

In May, Samsung Electronics developed the industry’s first 128 gigabyte (GB) DRAM supporting CXL 2.0 and plans to roll it out within the year. This comes a year after the company developed the world’s first CXL 1.1-based DRAM in May last year. Unlike conventional architectures, which limit the amount of DRAM available per central processing unit (CPU), CXL consolidates multiple interfaces into one and enables direct communication between devices. This makes it possible to expand the server memory capacity handled through DRAM to tens of terabytes (TB).
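The capacity arithmetic behind that expansion can be sketched with a toy model. All of the numbers below except the 128 GB module size from the article are illustrative assumptions, not vendor specifications:

```python
# Toy capacity model for CXL memory expansion.
# Assumption: a conventional server is capped by DIMM slots per CPU,
# while CXL attaches additional DRAM modules over the interface.
DIMM_SLOTS_PER_CPU = 8        # hypothetical slot count
DIMM_CAPACITY_GB = 128        # hypothetical per-slot module size
CXL_EXPANDERS = 16            # hypothetical number of CXL modules
CXL_MODULE_CAPACITY_GB = 128  # the 128 GB CXL 2.0 DRAM in the article

direct_gb = DIMM_SLOTS_PER_CPU * DIMM_CAPACITY_GB
pooled_gb = direct_gb + CXL_EXPANDERS * CXL_MODULE_CAPACITY_GB
print(f"direct-attached: {direct_gb} GB, with CXL: {pooled_gb} GB")
```

With these assumed numbers the addressable DRAM triples; scaling the expander count is how such systems reach the tens-of-terabytes range the article describes.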

In October 2022, SK hynix succeeded in developing the industry’s first Computational Memory Solution (CMS) that integrates computational functions into CXL memory.

A PIM race is also heating up. Today, memory stores data while system semiconductors such as central processing units (CPUs) and graphics processing units (GPUs) perform computation; PIM technology moves that computation inside the memory itself. Using HBM-PIM technology, AI model generation performance improves by 3.4 times compared to GPU accelerators equipped with conventional HBM. Samsung’s HBM-PIM was featured in AMD’s MI-100 GPU in October 2022.
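The data-movement saving PIM targets can be illustrated with a toy reduction in Python. The bank count and vector sizes are hypothetical, and real HBM-PIM performs the per-bank partial sums in hardware inside the DRAM banks rather than in software:

```python
import random

random.seed(0)
x = [random.random() for _ in range(1024)]
w = [random.random() for _ in range(1024)]

# Conventional path: every operand crosses the memory bus so the
# host (GPU/CPU) can multiply-accumulate them all.
host_result = sum(a * b for a, b in zip(x, w))

# PIM-style path: each memory bank reduces its own shard in place,
# so only B partial sums cross the bus instead of 2*len(x) operands.
B = 8  # hypothetical number of PIM-capable banks
shard = len(x) // B
partials = [sum(x[i] * w[i] for i in range(k * shard, (k + 1) * shard))
            for k in range(B)]
pim_result = sum(partials)

assert abs(host_result - pim_result) < 1e-9  # same answer, far less traffic
```

The result is identical either way; the win is that bus traffic drops from thousands of operands to a handful of partial results, which is where the bandwidth-bound speedups cited above come from.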

CXL-based PNM reduces data bottlenecks by placing system semiconductors with computational capabilities next to memory. Compared to conventional GPU accelerators, it increases DRAM capacity four times and doubles the loading speed of AI models. While still in the early stages of development, the memory industry expects to see the wide adoption of CXL-based PNM technology as demand for high-capacity AI model processing expands.

“The rapid growth of generative AI has significantly changed the paradigm of memory development,” said an industry insider. “Micron is one step behind in next-generation memory technology, so a Samsung-SK hynix race will continue for some time.”

Copyright © BusinessKorea. Prohibited from unauthorized reproduction and redistribution