GB300 NVL72 vs H100
A head-to-head AI inference benchmark comparison of the GB300 NVL72 (NVIDIA Blackwell) and the H100 (NVIDIA Hopper), covering latency, throughput, and cost across LLM workloads at varying models, sequence lengths, and precisions.
Figures are interpolated from measured benchmark data, so the two systems can be compared at different target interactivity operating points.
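The interpolation behind these figures can be sketched roughly as follows. This is a hypothetical reconstruction, not the page's actual method: it linearly interpolates log-throughput against log-interactivity between the two nearest measured points, which matches the typical shape of throughput-vs-interactivity curves. The `points` data is illustrative, not from the table below.

```python
import math

def interp_throughput(points, target_interactivity):
    """Estimate per-GPU throughput (tok/s/GPU) at a target interactivity
    (tok/s/user) by log-log linear interpolation between measured points.

    `points` is a list of (interactivity, throughput) pairs sorted by
    interactivity ascending. Raises ValueError outside the measured range.
    """
    xs = [math.log(x) for x, _ in points]
    ys = [math.log(y) for _, y in points]
    t = math.log(target_interactivity)
    for i in range(len(xs) - 1):
        if xs[i] <= t <= xs[i + 1]:
            w = (t - xs[i]) / (xs[i + 1] - xs[i])
            return math.exp(ys[i] + w * (ys[i + 1] - ys[i]))
    raise ValueError("target interactivity outside measured range")

# Hypothetical measured points: (interactivity tok/s/user, throughput tok/s/GPU)
measured = [(10, 8000.0), (50, 3000.0), (100, 1000.0)]
print(interp_throughput(measured, 30))  # estimate between the first two points
```

Log-log interpolation is a modeling assumption; piecewise-linear interpolation in linear space would also be defensible and gives somewhat higher estimates between points.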
| Metric | Interactivity point 1 (tok/s/user) | Interactivity point 2 (tok/s/user) | Interactivity point 3 (tok/s/user) |
|---|---|---|---|
| Throughput (tok/s/GPU) | GB300 NVL72: 7719.7 / H100: 272.0 | GB300 NVL72: 5403.3 / H100: 77.4 | GB300 NVL72: 2481.4 / H100: 23.3 |
| Cost ($/M tok) | GB300 NVL72: $0.095 / H100: $1.339 | GB300 NVL72: $0.136 / H100: $4.655 | GB300 NVL72: $0.297 / H100: $15.632 |
| Throughput per MW (tok/s/MW) | GB300 NVL72: 3676036 / H100: 157238 | GB300 NVL72: 2573015 / H100: 44718 | GB300 NVL72: 1181606 / H100: 13494 |
| Concurrency (users) | GB300 NVL72: ~2869 / H100: ~605 | GB300 NVL72: ~959 / H100: ~57 | GB300 NVL72: ~571 / H100: ~8 |
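The cost metric in the table follows directly from throughput and hardware pricing: dollars per million tokens is the hourly cost of a GPU divided by the tokens it produces per hour. A minimal sketch, where the `gpu_hourly_usd` prices are hypothetical placeholders (actual cloud and on-prem pricing varies widely and is not given in the table):

```python
def cost_per_million_tokens(gpu_hourly_usd, tok_per_s_per_gpu):
    """$ per 1M tokens for one GPU held at a sustained per-GPU throughput."""
    tokens_per_hour = tok_per_s_per_gpu * 3600
    return gpu_hourly_usd / tokens_per_hour * 1e6

# Illustrative: same (assumed) $2.50/hr price at the table's point-1 throughputs.
print(cost_per_million_tokens(2.50, 7719.7))  # GB300 NVL72-class throughput
print(cost_per_million_tokens(2.50, 272.0))   # H100-class throughput
```

Because cost scales inversely with throughput, the roughly 28x per-GPU throughput gap at point 1 translates into a comparable per-token cost gap whenever hourly prices are similar; the table's larger cost ratios imply the underlying model also prices the two systems differently.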
Inference Performance
Inference performance metrics across different models, hardware configurations, and serving parameters.