November 28, 2024

Nvidia unveils new chip that will bring down the cost of running ChatGPT and other AI models

Running generative AI models like ChatGPT and Google Bard requires advanced, fast GPUs, and Nvidia is at the forefront of that market. The company has announced the next-generation Nvidia GH200 Grace Hopper platform, based on a new Grace Hopper Superchip with the world's first HBM3e processor, built for generative AI.

The dual configuration — which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering — comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance and 282GB of the latest HBM3e memory technology, said the company in a press release.
The Grace Hopper chip is also claimed to be more energy efficient than its predecessors, which could lower operating costs for data centres that use it. This matters because large language models (LLMs) are becoming increasingly popular, and the cost of running them is a major barrier to adoption.

How it may help lower costs
ChatGPT is a large language model developed by OpenAI that is capable of generating human-quality text. It is used in a variety of applications, including customer service, content creation, and research. The Grace Hopper chip could make it more affordable for businesses to use ChatGPT and other LLMs, which could lead to wider adoption of these technologies.

“To meet surging demand for generative AI, data centres require accelerated computing platforms with specialized needs,” said Jensen Huang, founder and CEO of NVIDIA. “The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center.”
Leading system manufacturers are expected to deliver systems based on the platform in Q2 of calendar year 2024, Nvidia said.
