GPU comparison

GB200 NVL72 vs H100

Head-to-head AI inference benchmark comparison of the GB200 NVL72 (NVIDIA Blackwell) and the H100 (NVIDIA Hopper): latency, throughput, and cost across LLM workloads.

Values are interpolated from measured benchmark data, compared at three target interactivity operating points.
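One plausible way to interpolate between discrete benchmark points is linear interpolation in log-log space, since throughput-vs-interactivity curves tend to be roughly power-law shaped. This is a sketch under that assumption (the actual interpolation method is not specified in the source), and the sample data points are hypothetical, not real benchmark results:

```python
import math

def interpolate_throughput(points, target_interactivity):
    """Estimate per-GPU throughput (tok/s) at a target interactivity
    (tok/s/user) by linear interpolation between measured benchmark
    points in log-log space; clamps outside the measured range.

    points: list of (interactivity, throughput) pairs, sorted ascending
            by interactivity.
    """
    xs = [math.log(p[0]) for p in points]
    ys = [math.log(p[1]) for p in points]
    x = math.log(target_interactivity)
    if x <= xs[0]:
        return points[0][1]
    if x >= xs[-1]:
        return points[-1][1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return math.exp(ys[i] + t * (ys[i + 1] - ys[i]))

# Hypothetical measured points: higher interactivity per user costs
# total per-GPU throughput.
points = [(10, 6000.0), (50, 3000.0), (200, 800.0)]
print(interpolate_throughput(points, 100))  # -> ~1549.2
```

At 100 tok/s/user, halfway between 50 and 200 in log space, the estimate is the geometric mean of the two neighboring throughputs.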
Each pair of columns below corresponds to one target interactivity (tok/s/user) operating point; the specific targets are user-set in the interactive chart.

                         Target 1                Target 2                Target 3
Metric                   GB200 NVL72  H100       GB200 NVL72  H100       GB200 NVL72  H100
Throughput (tok/s/gpu)   5382.7       269.4      4419.4       77.4       2387.3       23.3
Cost ($/M tok)           $0.114       $1.351     $0.139       $4.655     $0.257       $15.632
Efficiency (tok/s/MW)    2,563,185    155,721    2,104,473    44,718     1,136,786    13,494
Concurrency              ~3266        ~584       ~1006        ~57        ~612         ~8
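Cost per million tokens follows directly from per-GPU throughput and an assumed GPU hourly price: dollars per hour divided by tokens produced per hour. The hourly prices below are not stated in the source; they are backed out from one (cost, throughput) pair in the data above and treated as an assumption:

```python
def cost_per_million_tokens(hourly_price_usd, throughput_tok_s):
    """$/M tokens = GPU hourly price / tokens produced per hour, x 1e6."""
    tokens_per_hour = throughput_tok_s * 3600
    return hourly_price_usd / tokens_per_hour * 1e6

# Implied H100 hourly price from the first operating point above
# ($1.351/M tok at 269.4 tok/s/gpu):
implied_h100 = 1.351 * 269.4 * 3600 / 1e6
print(round(implied_h100, 2))  # -> 1.31 ($/hr, an inferred assumption)

# Applying that price at the second point (77.4 tok/s/gpu) gives ~$4.70/M tok,
# close to the table's $4.655, so the pricing assumption is roughly consistent.
print(round(cost_per_million_tokens(implied_h100, 77.4), 2))
```

The same back-calculation on the GB200 NVL72 columns implies an hourly price of roughly $2.2 per GPU; the large cost gap in the table comes from throughput, not price.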

Inference Performance

Inference performance metrics across different models, hardware configurations, and serving parameters.