March 22, 2022

Nvidia Launches Hopper H100 GPU, New DGXs and Grace Superchips

The battle for datacenter dominance keeps getting hotter. Today, Nvidia kicked off its spring GTC event with new silicon, new software and a new supercomputer. Speaking from a virtual environment in the Nvidia Omniverse 3D collaboration and simulation platform, CEO Jensen Huang introduced the new Hopper GPU architecture and the H100 GPU, which will power datacenter-scale systems for HPC and AI workloads.

Nvidia’s first Hopper-based product, the H100 GPU, is manufactured on TSMC’s 4N process and packs a whopping 80 billion transistors, up from the 54 billion of the prior-generation 7nm A100 GPU. The H100 is the first GPU to support PCIe Gen5 and the first to use HBM3, enabling 3 TB/s of memory bandwidth.

Named after computer scientist and U.S. Navy Rear Admiral Grace Hopper, the new GPU (in its SXM mezzanine form factor) provides 30 teraflops of standard IEEE FP64 performance, 60 teraflops of FP64 tensor core performance, and 60 teraflops of FP32 performance. A new numerical format introduced in Hopper, FP8, delivers up to 4,000 teraflops of tensor core AI performance, according to Nvidia. Spec details and gen-to-gen comparisons appear in the full story.
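For a rough sense of what FP8 trades away, consider the E4M3 variant Hopper supports: one sign bit, four exponent bits and three mantissa bits (a second variant, E5M2, trades mantissa for range). The numpy sketch below rounds values to an E4M3-like grid for intuition only; it ignores denormals and exact rounding behavior, and it is not how the hardware implements the format.

```python
import numpy as np

# Illustrative approximation of the E4M3 FP8 grid: keep 1 implicit +
# 3 explicit significand bits and clip to the format's 448 max value.
# A sketch for intuition, not Nvidia's hardware implementation.
E4M3_MAX = 448.0

def quantize_e4m3(x: np.ndarray) -> np.ndarray:
    x = np.clip(x, -E4M3_MAX, E4M3_MAX)
    mantissa, exponent = np.frexp(x)             # x = mantissa * 2**exponent
    mantissa = np.round(mantissa * 16.0) / 16.0  # 4 significand bits total
    return np.ldexp(mantissa, exponent)

print(quantize_e4m3(np.array([0.1234, 1.57, 300.0, 1000.0])))
# -> [0.125, 1.625, 288.0, 448.0]  (coarse steps; 1000 clips to 448)
```

The coarse steps and narrow range are why halving the bits of FP16 only works when precision is managed carefully, which is the job of the transformer engine described next.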

Hopper introduces built-in hardware acceleration for transformer models, which are widely used for natural language processing. The Hopper transformer engine dynamically chooses between 8-bit and 16-bit calculations, intelligently managing precision in the layers of the transformer network to deliver speedups without loss of accuracy, according to Nvidia.
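Nvidia has not published the engine’s selection heuristics, but the idea can be sketched: per layer, check whether the spread of a tensor’s magnitudes fits what FP8 can usefully represent after per-tensor scaling, and fall back to 16-bit otherwise. Everything below, including the range-budget threshold, is an illustrative assumption rather than Nvidia’s method.

```python
import numpy as np

# Hypothetical per-layer precision chooser (an assumption, not Nvidia's
# proprietary logic): run a layer in FP8 only if a single scale factor
# could map the tensor's magnitudes into the format's usable range.
E4M3_MAX = 448.0          # largest finite E4M3 magnitude
RANGE_BUDGET = 2.0 ** 14  # illustrative headroom threshold (assumption)

def choose_precision(activations: np.ndarray) -> str:
    nonzero = np.abs(activations[activations != 0])
    if nonzero.size == 0:
        return "fp8"      # an all-zero tensor is trivially representable
    spread = nonzero.max() / nonzero.min()
    return "fp8" if spread < RANGE_BUDGET else "fp16"

print(choose_precision(np.array([0.02, -0.7, 3.1, 240.0])))  # fp8
print(choose_precision(np.array([1e-6, -0.4, 5.0])))         # fp16
```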

“Hopper H100 is the biggest generational leap ever — 9x at-scale training performance over A100 and 30x large-language-model inference throughput,” Huang said in his keynote.

You can read the rest of the story at HPCwire.