At the GTC 2022 conference at the end of March, NVIDIA officially announced its new flagship compute GPU, the GH100, and the H100 compute card based on the Hopper architecture. A month and a half later, we finally got a look at the card itself. It retains the traditional SXM form factor, but the overall board layout has changed considerably compared with the previous-generation Ampere-based A100: the GH100 die sits in the middle, surrounded by six stacks of HBM3 memory with a total capacity of 80GB.
The GH100 die is built on TSMC's 4nm process and uses CoWoS 2.5D packaging, integrating 80 billion transistors in a die area of 814 mm².
The full chip has 18,432 CUDA cores, 576 Tensor cores, and 60MB of L2 cache, with a 6144-bit memory bus driving six HBM3/HBM2e stacks, plus support for PCIe 5.0 and the fourth-generation NVLink bus.
The H100 compute card comes in two versions, SXM and PCIe 5.0. The SXM version has 16,896 CUDA cores and 528 Tensor cores, while the PCIe 5.0 version has 14,592 CUDA cores and 456 Tensor cores; power consumption is rated at up to 700W.
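Since the shipping H100 boards enable fewer SMs than the full GH100 die described above, a quick way to see what a particular card actually exposes is to query its device properties through the CUDA runtime. The sketch below is an illustrative example (not from NVIDIA's materials) and assumes a CUDA toolkit recent enough to recognize Hopper, e.g. CUDA 11.8 or later.

```cpp
// Minimal sketch: report the properties each installed GPU exposes at runtime.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);
        // Hopper-generation GPUs report compute capability 9.0
        std::printf("Device %d: %s (compute capability %d.%d)\n",
                    dev, prop.name, prop.major, prop.minor);
        // GH100 has 128 FP32 CUDA cores per SM, so cores = SMs * 128
        std::printf("  SMs: %d\n", prop.multiProcessorCount);
        std::printf("  Memory: %.1f GB, bus width %d-bit\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.memoryBusWidth);
        std::printf("  L2 cache: %.1f MB\n",
                    prop.l2CacheSize / (1024.0 * 1024.0));
    }
    return 0;
}
```

Compiled with nvcc and run on the SXM version, the multiprocessor count should correspond to its 16,896 CUDA cores divided by 128 cores per SM, i.e. 132 SMs, versus 144 SMs on the full GH100 die.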
The official launch date remains uncertain, but pre-orders for the PCIe version have recently opened in Japan at a price as high as 4,745,950 yen, roughly 242,000 yuan.
The SXM version may be more expensive.