Nvidia announces GB200 Blackwell AI chip, launching later this year



Nvidia CEO Jensen Huang delivers a keynote address during the Nvidia GTC Artificial Intelligence Conference at SAP Center on March 18, 2024 in San Jose, California.

Justin Sullivan | Getty Images

Nvidia on Monday announced a new generation of artificial intelligence chips and software for running artificial intelligence models. The announcement, made during Nvidia’s developer conference in San Jose, comes as the chipmaker seeks to solidify its position as the go-to supplier for AI companies.

Nvidia’s share price is up fivefold and total sales have more than tripled since OpenAI’s ChatGPT kicked off the AI boom in late 2022. Nvidia’s high-end server GPUs are essential for training and deploying large AI models, and companies like Microsoft and Meta have spent billions of dollars buying the chips.

The new generation of AI graphics processors is named Blackwell. The first Blackwell chip is called the GB200 and will ship later this year. Nvidia is enticing its customers with more powerful chips to spur new orders: companies and software makers are still scrambling to get their hands on the current generation of “Hopper” H100s and similar chips.

“Hopper is fantastic, but we need bigger GPUs,” Nvidia CEO Jensen Huang said Monday at the company’s developer conference in California.

Nvidia shares fell more than 1% in extended trading on Monday.

The company also introduced revenue-generating software called NIM that will make it easier to deploy AI, giving customers one more reason to stick with Nvidia chips over a rising field of competitors.

Nvidia executives say the company is becoming less of a mercenary chip provider and more of a platform provider, like Microsoft or Apple, on which other companies can build software.

“Blackwell’s not a chip, it’s the name of a platform,” Huang said.

“The sellable commercial product was the GPU and the software was all to help people use the GPU in different ways,” Nvidia enterprise VP Manuvir Das said in an interview. “Of course, we still do that. But what’s really changed is, we really have a commercial software business now.”

Das said Nvidia’s new software will make it easier to run programs on any of Nvidia’s GPUs, even older ones that may be better suited to deploying AI than to building it.

“If you’re a developer, you’ve got an interesting model you want people to adopt, if you put it in a NIM, we’ll make sure that it’s runnable on all our GPUs, so you reach a lot of people,” Das said.

Meet Blackwell, the successor to Hopper

Nvidia’s GB200 Grace Blackwell Superchip, with two B200 graphics processors and one Arm-based central processor.

Nvidia CEO Jensen Huang compares the size of the new “Blackwell” chip with that of the current “Hopper” H100 chip at the company’s developer conference in San Jose, California.

Nvidia

Amazon, Google, Microsoft, and Oracle will sell access to the GB200 through cloud services. The GB200 pairs two B200 Blackwell GPUs with one Arm-based Grace CPU. Nvidia said Amazon Web Services would build a server cluster with 20,000 GB200 chips.

Nvidia said the system can deploy a 27-trillion-parameter model. That is much larger than even the biggest current models, such as GPT-4, which reportedly has 1.7 trillion parameters. Many artificial intelligence researchers believe bigger models with more parameters and data could unlock new capabilities.

Nvidia did not provide a price for the new GB200 or the systems it is used in. Nvidia’s Hopper-based H100 costs between $25,000 and $40,000 per chip, with whole systems costing as much as $200,000, according to analyst estimates.

Nvidia will also sell B200 graphics processors as part of a complete system that takes up an entire server rack.

Nvidia inference microservice

Nvidia also announced that it is adding a new product named NIM, which stands for Nvidia Inference Microservice, to its Nvidia enterprise software subscription.

NIM makes it easier to use older Nvidia GPUs for inference, the process of running AI software, and will allow companies to keep using the hundreds of millions of Nvidia GPUs they already own. Inference requires less computational power than the initial training of a new AI model. NIM is aimed at companies that want to run their own AI models instead of buying access to AI results as a service from companies like OpenAI.

The strategy is to get customers who buy Nvidia-based servers to sign up for Nvidia enterprise, which costs $4,500 per GPU per year for a license.

Nvidia will work with AI companies like Microsoft or Hugging Face to ensure their AI models are tuned to run on all compatible Nvidia chips. Then, using a NIM, developers can efficiently run the model on their own servers or on cloud-based Nvidia servers without a lengthy configuration process.

“In my code, where I was calling into OpenAI, I will replace one line of code to point it to this NIM that I got from Nvidia instead,” Das said.
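In practice, that swap can be as small as re-pointing an existing OpenAI client at the NIM’s address. The minimal Python sketch below assumes the NIM container exposes an OpenAI-compatible chat endpoint on a local server; the URL, API key, and model name are illustrative placeholders, not details from Nvidia’s announcement.

```python
# A sketch of the one-line change Das describes, assuming the NIM
# serves an OpenAI-compatible API. URL, key, and model name below
# are hypothetical placeholders.
from openai import OpenAI

# Before: client = OpenAI()  # requests go to OpenAI's hosted service
# After: point the same client at a self-hosted NIM instead.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-needed-locally",         # a local server may ignore this
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # whichever model the NIM serves
    messages=[{"role": "user", "content": "Hello from a self-hosted model."}],
)
print(response.choices[0].message.content)
```

The rest of the application code stays unchanged, which is the point of the pitch: inference moves from a third-party service onto the company’s own Nvidia GPUs.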

Nvidia says the software will also help AI run on GPU-equipped laptops, instead of on servers in the cloud.



