Nvidia launches new 'Grace Hopper' super chips, with CPU and GPU
Nvidia, the chipmaker behind the surge of artificial intelligence (AI) models, has unveiled its new combined CPU+GPU chip, Grace Hopper, which it says will power the next generation of AI models and chatbots.
Graphics processing units (GPUs), originally designed to accelerate graphics rendering in computer games, offer far greater raw computing throughput than central processing units (CPUs). Because a GPU can perform many calculations in parallel, while a CPU works through them largely sequentially, tech companies began using GPUs to train the AI models they were developing.
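The distinction above is the data-parallel pattern GPUs excel at: applying the same operation to many values at once rather than one at a time. The toy sketch below (not Nvidia code; a thread pool stands in for a GPU's parallel lanes) illustrates the idea.

```python
from multiprocessing.dummy import Pool  # thread pool as a stand-in for parallel GPU lanes

def serial_scale(values, factor):
    # CPU-style: one multiplication at a time, in sequence
    return [v * factor for v in values]

def parallel_scale(values, factor, lanes=4):
    # GPU-style in spirit: the same operation applied across many lanes at once
    with Pool(lanes) as pool:
        return pool.map(lambda v: v * factor, values)

# Both produce the same result; the parallel version spreads the work out
assert serial_scale([1, 2, 3], 10) == parallel_scale([1, 2, 3], 10) == [10, 20, 30]
```

On a real GPU the "lanes" number in the thousands, which is why the same training workload runs far faster than on a CPU.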
Back in 2020, Nvidia unveiled the A100 GPU, which companies used to train the first iterations of their conversational chatbots and image generators. In just a couple of years, its far more powerful H100 Hopper chips have become crucial components of the data centers powering popular chatbots like ChatGPT. Now the company has unveiled a monster chip with both a CPU and a GPU onboard.
What are Grace Hopper chips from Nvidia?
According to a press release, Nvidia has created its new chip by combining its Hopper GPU platform with the Grace CPU platform (both named after Grace Hopper, a pioneer of computer programming). The two chips have been connected using Nvidia's NVLink chip-to-chip (C2C) interconnect technology.
Dubbed the GH200, the superchip has 528 GPU tensor cores and is paired with 480 GB of CPU memory and 96 GB of GPU memory. The GPU memory bandwidth on the GH200 is 4 TB per second, twice that of the A100 chips.

The superchip also boasts a 900 GB/s coherent memory interface, seven times faster than the latest-generation PCIe, which only became available this year. The GH200 runs all Nvidia software, including the HPC SDK, Nvidia AI, and Omniverse, and delivers 30 times the aggregate memory bandwidth of the A100 chips.
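The multipliers quoted in the specs above can be sanity-checked with quick arithmetic. The PCIe figure below is an assumption (roughly 128 GB/s bidirectional for a PCIe Gen5 x16 link); the rest are the rounded values from the article.

```python
TB = 1000  # GB per TB, decimal units as used in marketing specs

gh200_gpu_bw_gb_s = 4 * TB   # GH200 GPU memory bandwidth: 4 TB/s
a100_gpu_bw_gb_s = 2 * TB    # A100 bandwidth: roughly 2 TB/s
print(gh200_gpu_bw_gb_s / a100_gpu_bw_gb_s)      # → 2.0, "twice the A100"

nvlink_c2c_gb_s = 900        # NVLink-C2C coherent interface
pcie5_x16_gb_s = 128         # assumed: PCIe Gen5 x16, ~128 GB/s bidirectional
print(round(nvlink_c2c_gb_s / pcie5_x16_gb_s))   # → 7, "seven times PCIe"
```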
What will Grace Hopper chips be used for?
Nvidia, well on its way to becoming a trillion-dollar company, expects the GH200 chips to be used for giant-scale AI and high-performance computing (HPC) applications. For now, one can only imagine the faster and more accurate AI models and chatbots that will be built on this technology.
The company also plans to use the chips to build a new exascale supercomputer capable of performing one exaflop, or 10^18 floating-point operations per second (FLOPS). A total of 256 GH200 chips will be linked to function as one large GPU with 144 TB of shared memory, about 500 times that of the A100.
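A back-of-the-envelope check shows how those system numbers combine. The per-chip AI performance and the A100 baseline memory below are assumptions for illustration (roughly 4 petaFLOPS of low-precision AI compute per GH200, and 320 GB of total GPU memory in a DGX A100 system), not figures from this article.

```python
# Exascale claim: does 256 chips reach 10**18 operations per second?
chips = 256
exaflop = 1e18                 # 1 exaFLOP = 10**18 FLOPS
per_chip_flops = 4e15          # assumed: ~4 petaFLOPS of AI compute per GH200
total_flops = chips * per_chip_flops
print(total_flops >= exaflop)  # → True: 256 chips clear the exaflop bar

# Shared-memory claim: 144 TB versus an A100-based system
shared_tb = 144
a100_system_gb = 320           # assumed: DGX A100 total GPU memory, 320 GB
print(round(shared_tb * 1000 / a100_system_gb))  # → 450, i.e. "about 500 times"
```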
“Generative AI is rapidly transforming businesses, unlocking new opportunities, and accelerating discovery in healthcare, finance, business services, and many more industries,” said Ian Buck, vice president of accelerated computing at NVIDIA, in a press release. “With Grace Hopper Superchips in full production, manufacturers worldwide will soon provide the accelerated infrastructure enterprises need to build and deploy generative AI applications that leverage their unique proprietary data.”
Global hyperscalers and supercomputing centers in the U.S. and Europe will get access to the GH200-powered systems later this year, the release added.