The H200 NVL links up to four cards, double the number supported by its predecessor, the H100 NVL. It also offers the option of liquid cooling, which the H100 NVL did not. Rather than communicating over PCIe, the H200 NVL uses an NVLink interconnect bridge, which delivers 900 GB/s of bidirectional throughput per GPU, roughly seven times that of PCIe Gen 5.
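The "seven times" figure can be sanity-checked from published link rates. The snippet below is a rough back-of-the-envelope sketch, assuming a PCIe 5.0 x16 link (32 GT/s per lane, 128b/130b encoding) as the comparison point; the exact overhead accounting may differ from NVIDIA's own calculation.

```python
# Back-of-the-envelope check of the "~7x PCIe Gen 5" claim.
# Assumption: the PCIe comparison point is a 5.0 x16 link at 32 GT/s per lane
# with 128b/130b encoding, counted bidirectionally.
pcie5_x16_bidirectional_gbps = 2 * 16 * 32 * (128 / 130) / 8  # ~126 GB/s
nvlink_bidirectional_gbps = 900  # per-GPU figure quoted for H200 NVL

ratio = nvlink_bidirectional_gbps / pcie5_x16_bidirectional_gbps
print(f"{ratio:.1f}x")  # ~7.1x
```

The result lands close to 7x, consistent with the quoted comparison.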
The H200 NVL is aimed at enterprises accelerating AI and HPC applications while improving energy efficiency through reduced power consumption. It offers 1.5x the memory and 1.2x the bandwidth of the H100 NVL. For HPC workloads, performance is boosted up to 1.3x over the H100 NVL and 2.5x over the NVIDIA Ampere architecture generation.
Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are all expected to deliver a wide range of configurations supporting H200 NVL. Additionally, H200 NVL will be available in platforms from Aivres, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, MSI, Pegatron, QCT, Wistron and Wiwynn.