GPU comparison

B200 vs H200

Head-to-head AI inference benchmark comparison of the NVIDIA B200 (Blackwell) and the NVIDIA H200 (Hopper), covering latency, throughput, and cost across LLM workloads.

Values are interpolated from real benchmark data at three target interactivity operating points.
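To make the interpolation step concrete, here is a minimal piecewise-linear sketch of how a metric such as throughput could be estimated between measured operating points. The data points and the target value below are hypothetical, not taken from the benchmark:

```python
def interp(x, points):
    """Linearly interpolate y at x from (x, y) points sorted by x,
    clamping to the endpoints outside the measured range."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical measured points: (interactivity tok/s/user, throughput tok/s/gpu)
measured = [(50, 2000.0), (100, 900.0), (200, 300.0)]
print(interp(150, measured))  # midpoint between the last two points: 600.0
```

Real serving curves are not linear in interactivity, so a production tool would likely interpolate in log space or fit a curve; the clamping and point-pair scan shown here are the core idea.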
Each B200/H200 column pair corresponds to one target interactivity (tok/s/user) operating point.

| Metric | B200 | H200 | B200 | H200 | B200 | H200 |
|---|---:|---:|---:|---:|---:|---:|
| Throughput (tok/s/gpu) | 2700.6 | 539.5 | 536.9 | 138.0 | 308.5 | 39.6 |
| Cost ($/M tok) | $0.200 | $0.751 | $1.027 | $2.877 | $1.759 | $9.940 |
| tok/s/MW | 1,244,494 | 311,843 | 247,402 | 79,785 | 142,178 | 22,869 |
| Concurrency | ~1030 | ~390 | ~215 | ~67 | ~8 | ~14 |
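As a quick check on the numbers above, the B200-over-H200 advantage at each operating point can be computed directly from the table. This is a minimal sketch; the three positions correspond to the table's column pairs, left to right:

```python
# (B200, H200) values from the table, one pair per operating point.
throughput = [(2700.6, 539.5), (536.9, 138.0), (308.5, 39.6)]       # tok/s/gpu
cost       = [(0.200, 0.751), (1.027, 2.877), (1.759, 9.940)]       # $/M tok
efficiency = [(1244494, 311843), (247402, 79785), (142178, 22869)]  # tok/s/MW

def ratios(pairs, lower_is_better=False):
    """B200-over-H200 ratio per operating point; for cost, lower is
    better, so the ratio is inverted to keep >1 meaning B200 wins."""
    return [round(h / b if lower_is_better else b / h, 2) for b, h in pairs]

print(ratios(throughput))                      # [5.01, 3.89, 7.79]
print(ratios(cost, lower_is_better=True))      # B200 cost advantage
print(ratios(efficiency))                      # [3.99, 3.1, 6.22]
```

Note that the advantage is not uniform: it is smallest at the middle operating point and largest at the rightmost one, so a single headline speedup number would hide the dependence on the interactivity target.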

Inference Performance

Inference performance metrics across different models, hardware configurations, and serving parameters.