Nvidia unveils GH200 Superchips for 'most complex AI workloads'
Nvidia, the world's leading supplier of chips for artificial intelligence (AI) applications, has unveiled its next generation of superchips, created to handle the "most complex generative AI workloads," the company said in a press release. Dubbed the GH200 Grace Hopper, the platform is also the world's first processor to feature HBM3e memory.
The new superchip combines Nvidia's Hopper platform, which houses the graphics processing unit (GPU), with the Grace platform, which provides the central processing unit (CPU). Both platforms are named in honor of Grace Hopper, a pioneer of computer programming, and their combination into a single superchip continues Nvidia's homage to her.
Traditionally, GPUs have been associated with high-end graphics processing in computers and gaming consoles. However, their strength in parallel computation has seen them repurposed for applications such as cryptocurrency mining and, more recently, training AI models.
The power of large computers
Interesting Engineering has previously reported how service providers like Microsoft's Azure put together large computing systems using Nvidia's chips to cater to the needs of OpenAI.
Microsoft spent millions sourcing Nvidia's A100 chips and also built the infrastructure needed to distribute the workload of training on large datasets. OpenAI used this to develop the GPT models that power the hugely popular ChatGPT.
As the manufacturer of these chips, Nvidia now wants to build such large data-processing systems itself. It has also launched the Nvidia MGX platform, which businesses can use to train and run their own AI models in-house.
Since then, Nvidia's GPU offering has moved to the more powerful H100 Hopper chips, and the company is now strengthening that offering by combining them with CPU cores.
Grace Hopper GH200
Nvidia says it built the superchip using its proprietary NVLink chip-to-chip (C2C) interconnect technology. The interconnect gives the GPU full access to the CPU's memory, delivering 1.2 TB of fast memory in this configuration.

The GH200 is also the first to feature HBM3e, which is 50 percent faster than the HBM3 memory used for such computations today. The press release added that a single server packs 144 Arm Neoverse cores and can deliver eight petaflops of AI performance. With a combined bandwidth of 10 TB/s, the GH200 platform can process AI models that are 3.5 times larger, and do so 3x faster, than Nvidia's previous platforms.
The company, which briefly joined the $1 trillion valuation club earlier this year, is already the market leader, holding more than 90 percent of the market for AI and related chips.
GPUs are needed not just to train AI models but also to run them afterward. Demand for these chips is only expected to increase as AI becomes more mainstream, and it is hardly a surprise that not just chipmakers such as AMD but even tech giants like Google and Amazon are developing their own offerings in this space.