Blackwell (microarchitecture)

Blackwell is a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to the Hopper and Ada Lovelace microarchitectures.

Named after statistician and mathematician David Blackwell, the architecture's name was leaked in 2022; the B40 and B100 accelerators were confirmed in October 2023 via an official Nvidia roadmap shown during an investor presentation. The architecture was officially announced at Nvidia's GTC 2024 keynote on March 18, 2024.

History
In March 2022, Nvidia announced the Hopper architecture for data-center AI accelerators. Demand for Hopper products remained high throughout the AI boom of 2023. The lead time from order to delivery of H100-based servers was between 36 and 52 weeks due to shortages and high demand. Nvidia reportedly sold 500,000 Hopper-based H100 accelerators in Q3 2023 alone. Nvidia's AI dominance with Hopper products increased the company's market capitalization to over $2 trillion, behind only Microsoft and Apple.

The Blackwell architecture is named after American mathematician David Blackwell, who was known for his contributions to the mathematical fields of game theory, probability theory, information theory, and statistics. These fields have influenced, or are directly applied in, the design of transformer-based generative AI models and their training algorithms. Blackwell was the first African American scholar to be inducted into the National Academy of Sciences.

In Nvidia's October 2023 investor presentation, its data-center roadmap was updated to include the B100 and B40 accelerators and the Blackwell architecture. Previously, the successor to Hopper had appeared on roadmaps simply as "Hopper-Next". The updated roadmap also emphasized Nvidia's move from a two-year release cadence for data-center products to yearly releases targeting both x86 and ARM systems.

At the GPU Technology Conference (GTC) on March 18, 2024, Nvidia officially announced the Blackwell architecture, with focus placed on its B100 and B200 data-center accelerators and associated products, such as the eight-GPU HGX B200 board and the 72-GPU NVL72 rack-scale system. Based on published power and performance figures, the B100 and B200 appear to be the same silicon, with the B100 operating at roughly 75% of the B200's clock rate. Nvidia CEO Jensen Huang said that with Blackwell, "we created a processor for the generative AI era", and emphasized the overall Blackwell platform, which combines Blackwell accelerators with Nvidia's ARM-based Grace CPU. Nvidia touted endorsements of Blackwell from the CEOs of Google, Meta, Microsoft, OpenAI and Oracle. The keynote did not mention gaming.

Architecture
Blackwell is designed both for data-center compute applications and for gaming and workstation applications, with dedicated dies for each purpose. Purported leaks indicate that the laptop dies will be code-named GN22-Xx and that the corresponding GeForce RTX Mobile GPU cards will be code-named GB20x. Following a similar notation, GB200 and GB100 are the brand names of Nvidia's Grace Blackwell data-center superchips, modules that combine two Blackwell GPUs with one Arm-based Grace processor.

Process node
Blackwell is fabricated on TSMC's custom 4NP node, an enhancement of the 4N node used for the Hopper and Ada Lovelace architectures. The Nvidia-specific 4NP process likely adds metal layers to standard TSMC N4P technology. Each of the two compute dies in the data-center B100/B200 packages contains 104 billion transistors, a 30% increase over the 80 billion transistors of the previous-generation Hopper die. Because Blackwell cannot reap the benefits of a major process-node advance, it must achieve its power-efficiency and performance gains through underlying architectural changes.

The compute die in the data-center accelerators is at the reticle limit of semiconductor fabrication, the maximum physical die size that lithography machines can pattern. Nvidia had previously come close to TSMC's reticle limit with the GH100's 814 mm² die. To avoid being constrained by die size, Nvidia's B100 accelerator uses two GB100 dies in a single package, connected by a 10 TB/s link that Nvidia calls the NV-High Bandwidth Interface (NV-HBI). NV-HBI is based on the NVLink 5.0 protocol. Nvidia CEO Jensen Huang claimed in an interview with CNBC that Nvidia had spent around $10 billion on research and development for Blackwell's NV-HBI die interconnect. Veteran semiconductor engineer Jim Keller, who had worked on AMD's K7, K12 and Zen architectures, criticized this figure and claimed that the same outcome could be achieved for $1 billion by using Ultra Ethernet rather than the proprietary NVLink system. The two connected compute dies can behave like a single large monolithic piece of silicon, with full cache coherency between the dies. The dual-die package totals 208 billion transistors. The two dies are placed on top of a silicon interposer produced using TSMC's CoWoS-L 2.5D packaging technique.

CUDA cores
Blackwell adds CUDA Compute Capability 10.0.
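A program can detect a Blackwell-class device at run time by querying the compute capability through the CUDA runtime API. The following is a minimal sketch, assuming a CUDA 12.x toolkit whose headers and driver recognize compute capability 10.0 devices:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // Data-center Blackwell parts report compute capability 10.x.
            printf("Device %d: %s, compute capability %d.%d\n",
                   i, prop.name, prop.major, prop.minor);
            if (prop.major == 10)
                printf("  Blackwell-class GPU detected\n");
        }
        return 0;
    }

When compiling ahead of time, the corresponding architecture flag would be passed to the compiler (for example, nvcc -arch=sm_100), assuming a toolkit version recent enough to support that target.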

Tensor cores
The Blackwell architecture introduces fifth-generation Tensor Cores for AI compute and floating-point calculations. In the data center, Blackwell adds support for the FP4 and FP6 data types. The previous Hopper architecture introduced the Transformer Engine, software that facilitates the quantization of higher-precision models (e.g., FP32) to lower-precision formats, at which Hopper has greater throughput. Blackwell's second-generation Transformer Engine adds support for the newer, less precise FP4 and FP6 types. Using 4-bit data allows greater efficiency and throughput for inference on generative AI models. Nvidia claims 20 petaflops (excluding the 2x gain the company claims for sparsity) of FP4 compute for the dual-GPU GB200 superchip.
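Nvidia has not detailed the FP4 encoding here, but a common 4-bit floating-point layout is E2M1 (one sign bit, two exponent bits, one mantissa bit), whose representable magnitudes are 0, 0.5, 1, 1.5, 2, 3, 4 and 6. The sketch below illustrates the general idea behind Transformer Engine-style quantization under that assumed encoding: choose a per-tensor scale so the data fits the FP4 range, then round each element to the nearest representable value. It is a simplified host-side illustration, not Nvidia's implementation; the E2M1 grid and the single per-tensor scale are assumptions made for demonstration.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Magnitudes representable in an assumed E2M1 FP4 format
    // (the sign bit supplies the negative half of the grid).
    static const float kFp4Grid[] = {0.0f, 0.5f, 1.0f, 1.5f,
                                     2.0f, 3.0f, 4.0f, 6.0f};

    // Round a scaled value to the nearest representable FP4 magnitude,
    // preserving its sign.
    static float RoundToFp4(float x) {
        float best = kFp4Grid[0];
        for (float g : kFp4Grid)
            if (std::fabs(std::fabs(x) - g) < std::fabs(std::fabs(x) - best))
                best = g;
        return std::copysign(best, x);
    }

    int main() {
        // Toy FP32 "tensor" standing in for a block of model weights.
        std::vector<float> w = {0.013f, -0.42f, 0.9f, -1.7f, 2.2f};

        // Per-tensor scale: map the largest magnitude onto FP4's maximum (6.0).
        float amax = 0.0f;
        for (float v : w) amax = std::fmax(amax, std::fabs(v));
        float scale = 6.0f / amax;

        for (float v : w) {
            float q   = RoundToFp4(v * scale);  // quantize to the FP4 grid
            float deq = q / scale;              // dequantize to compare error
            printf("%+.3f -> fp4 %+.1f -> dequantized %+.3f\n", v, q, deq);
        }
        return 0;
    }

The storage and arithmetic savings come from the 4-bit representation itself; the quantization error depends on how the scale factors are chosen, which in practice is done per tensor or per block of a tensor rather than with the single global scale used above.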