Sapphire Rapids

Sapphire Rapids is the codename for Intel's server (fourth-generation Xeon Scalable) and workstation (Xeon W-2400 and Xeon W-3400) processors, based on the Golden Cove microarchitecture and fabricated on the Intel 7 process. It features up to 60 cores and an array of on-die accelerators, and it is the first generation of Intel server and workstation processors to use a chiplet design.

Sapphire Rapids is part of the Eagle Stream server platform. It also powers Aurora, an exascale supercomputer at Argonne National Laboratory in the United States.

History
Sapphire Rapids, like Alder Lake, was a long-running Intel project, in development for over five years and subject to repeated delays. It was first announced at Intel's Investor Meeting in May 2019, with the intention that Sapphire Rapids would succeed Ice Lake and Cooper Lake in 2021. Intel gave further details in its August 2021 Architecture Day presentation, with no mention of a launch date.

Intel CEO Pat Gelsinger tacitly blamed the previous Intel leadership for Sapphire Rapids' many delays. One industry analyst firm claimed that Intel was struggling with yields on its Intel 7 node, achieving only 50–60% on higher-core-count silicon. Sapphire Rapids was originally scheduled to launch in the first half of 2022, was then rescheduled for Q4 2022, and was delayed again to early 2023. Intel did not reveal the specific announcement date of January 10, 2023 until November 2022.

The server processor lineup was released on January 10, 2023, and the workstation processor lineup was announced on February 15, 2023, with those processors shipping from March 14 of that year. Intel shipped its millionth Sapphire Rapids Xeon processor in 2023.

CPU

 * Up to 60 Golden Cove CPU cores per package
 * Up to 15 cores per tile; the 60-core Xeon Platinum 8490H uses four dies with 15 active cores each
 * AVX512-FP16
 * TSX Suspend Load Address Tracking
 * Advanced Matrix Extensions (AMX)
 * Trust Domain Extensions (TDX), a collection of technologies to help deploy hardware-isolated virtual machines (VMs) called trust domains (TDs)

Accelerators

 * In-Field Scan (IFS), a technology that allows testing the processor for potential hardware faults without taking it completely offline
 * Data Streaming Accelerator (DSA), which speeds up data copy and transformation between different kinds of storage
 * QuickAssist Technology (QAT), which improves the performance of compression and encryption tasks
 * Dynamic Load Balancer (DLB), which offloads load balancing, packet prioritization and queue management
 * In-Memory Analytics Accelerator (IAA), which accelerates in-memory databases and big data analytics

Not all accelerators are available in all processor models. Some accelerators are available under the Intel On Demand program, also known as Software Defined Silicon (SDSi), where a license is required to activate a given accelerator that is physically present in the processor. The license can be obtained as a one-time purchase or as a paid subscription. Activating the license requires support in the operating system. A driver with the necessary support was added in Linux kernel version 6.2.

I/O

 * PCI Express 5.0
 * Direct Media Interface 4.0
 * 8-channel DDR5 ECC memory support, up to DDR5-4800 and up to 2 DIMMs per channel
 * On-package High Bandwidth Memory 2.0e memory as L4 cache on Xeon Max models
 * Compute Express Link 1.1

Die configurations
Sapphire Rapids comes in two varieties: the lower-core-count variety (MCC) uses a single die, and the high-core-count variety (XCC) uses multiple dies on a single package.

XCC multi-die configuration

 * Multi-chiplet chip with four tiles linked by 2.5D Embedded Multi-die Interconnect Bridges. Each tile is a 400 mm² system on a chip, providing both compute cores and I/O.
 * Each tile contains 15 Golden Cove cores, and a single UPI link
 * Each tile's memory controller provides two channels of DDR5 ECC memory, supporting 4 DIMMs (2 per channel) and 1 TB of memory; across four tiles this yields a maximum of 8 channels, 16 DIMMs, and 4 TB of memory
 * A tile provides up to 32 PCIe 5.0 lanes, but one of the CPU's eight PCIe controllers is usually reserved for DMI, resulting in a maximum of 112 non-chipset lanes. Only the W-3400 series processors reach this maximum; the server processors expose 80 lanes (20 per tile).

Sapphire Rapids-HBM (High Bandwidth Memory/Xeon Max Series)
Xeon Max processors contain 64 GB of High Bandwidth Memory.

Sapphire Rapids-SP (Scalable Performance)
With its maximum of 60 cores, Sapphire Rapids-SP competes with AMD's Epyc 9004 series: Genoa with up to 96 cores and Bergamo with up to 128 cores. Sapphire Rapids Xeon server processors scale from single-socket up to eight-socket configurations.

Suffix letters denote:

 * +: Includes 1 of each of the four accelerators: DSA, IAA, QAT, DLB
 * H: Database and analytics workloads, supports 4S (Xeon Gold) and/or 8S (Xeon Platinum) configurations and includes all of the accelerators
 * M: Media transcode workloads
 * N: Network/5G/Edge workloads (High TPT/Low Latency), some are uniprocessor
 * P: Cloud and infrastructure as a service (IaaS) workloads
 * Q: Liquid cooling
 * S: Storage & Hyper-converged infrastructure (HCI) workloads
 * T: Long-life use/High thermal case
 * U: Uniprocessor (some workload-specific SKUs may also be uniprocessor)
 * V: Optimized for cloud and software as a service (SaaS) workloads, some are uniprocessor
 * Y: Speed Select Technology-Performance Profile (SST-PP) enabled (some workload-specific SKUs may also support SST-PP)
 * Y+: Speed Select Technology-Performance Profile (SST-PP) enabled and includes 1 of each of the accelerators.

Sapphire Rapids-WS (Workstation)
With its maximum of 56 cores, Sapphire Rapids-WS competes with AMD's Threadripper PRO 5000WX Chagall with up to 64 cores. Like Intel's Core product segmentation into i3, i5, i7 and i9, Sapphire Rapids-WS is labeled Xeon w3, w5, w7 and w9. Sapphire Rapids-WS was unveiled in February 2023 and was made available to OEMs in March. CPUs with the "X" suffix have their multipliers unlocked for overclocking.


 * No suffix letter: Locked clock multiplier
 * X: Unlocked clock multiplier (adjustable with no ratio limit)

Xeon W-2400 uses a monolithic design and supports up to 64 PCI Express 5.0 lanes, while Xeon W-3400 uses a chiplet design and supports up to 112 lanes. Both support 8 DMI 4.0 lanes.