Nvidia is at the forefront of developing chips for artificial intelligence applications.

American fabless semiconductor company NVIDIA has unveiled its latest Artificial Intelligence (AI) accelerator, the “H200.” The H200 stands out for its performance, offering roughly double the data processing speed and capacity of its predecessor. The extensive use of fifth-generation High Bandwidth Memory (HBM) in the H200 is expected to be a particular boon for memory semiconductor companies such as Samsung Electronics and SK hynix.

NVIDIA announced the new AI accelerator, the H200, on Nov. 13, local time. AI accelerators are semiconductors specialized for training and inference on large-scale data. They are manufactured by packaging Graphics Processing Units (GPUs), Central Processing Units (CPUs), HBM, and other components together.

The H200 is an upgraded version of the H100, which companies worldwide have competed to purchase for applications such as training OpenAI’s latest large language model (LLM), GPT-4. A single H100 chip is currently estimated to cost between US$25,000 and US$40,000, and thousands of chips are reportedly needed to operate an LLM. NVIDIA has not disclosed the price of the H200. Servers equipped with the H200 are expected to officially launch in the second quarter of next year.

What NVIDIA emphasized in releasing the H200 is its memory. The H200 uses HBM3E, the latest, fifth-generation HBM. HBM is a memory semiconductor that maximizes data processing capacity and speed by stacking DRAM dies vertically.

With HBM3E, NVIDIA has increased the data processing speed to 4.8 terabytes (TB) per second and the memory capacity to 141 gigabytes (GB). An NVIDIA representative explained, “Compared to the A100, a model from two generations ago, it offers almost double the capacity and 2.4 times more bandwidth.”
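Those ratios can be roughly sanity-checked against the publicly listed specifications of the A100 80GB, which offers 80 GB of HBM2e memory and on the order of 2 TB/s of bandwidth; these baseline figures are not stated in the article and are assumed here purely for illustration:

\[
\frac{4.8\ \text{TB/s}}{\approx 2.0\ \text{TB/s}} \approx 2.4, \qquad \frac{141\ \text{GB}}{80\ \text{GB}} \approx 1.8
\]

which is consistent with the claimed “2.4 times more bandwidth” and “almost double the capacity.”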

The release of NVIDIA’s H200 is seen as a direct challenge to its competitor, AMD. In June, AMD introduced its latest MI300X, with the official launch slated for next month. When unveiling the MI300X, AMD emphasized that it has “2.4 times the memory density and 1.6 times the bandwidth compared to NVIDIA’s H100.”

Global AI companies such as Amazon Web Services, Microsoft, and Google are expected to use the H200 to enhance their services.

As competition intensifies over the release of high-capacity AI accelerators, memory semiconductor companies that manufacture HBM, such as Samsung Electronics and SK hynix, are expected to reap particular benefits.

Copyright © BusinessKorea. Unauthorized reproduction and redistribution prohibited.