GPU comparison

B200 vs H100

Head-to-head AI inference benchmark comparison of the NVIDIA B200 (Blackwell) and H100 (Hopper): latency, throughput, and cost across LLM workloads. Use the chart controls below to switch models, sequence lengths, precisions, and metrics; the interactions are the same as in the main inference chart.

Values are interpolated from real benchmark data. Edit the target interactivity values below to compare the GPUs at different operating points.
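The page does not state how it interpolates between measured benchmark points, so the following is only a minimal sketch of one plausible scheme: linear interpolation in log-log space between measured (interactivity, throughput) pairs. The sample data points and the `throughput_at` helper are illustrative assumptions, not values from the benchmark.

```python
import numpy as np

# Hypothetical measured operating points for one GPU/model/precision combo.
# Throughput per GPU falls as the per-user interactivity target rises.
measured_interactivity = np.array([10.0, 30.0, 100.0])    # tok/s/user
measured_throughput = np.array([5000.0, 1000.0, 500.0])   # tok/s/gpu

def throughput_at(target_tok_s_user: float) -> float:
    """Estimate tok/s/gpu at a target interactivity by interpolating
    in log-log space, where throughput/interactivity trade-off curves
    tend to be closer to straight lines than in linear space."""
    return float(np.exp(np.interp(np.log(target_tok_s_user),
                                  np.log(measured_interactivity),
                                  np.log(measured_throughput))))
```

At a measured point the interpolation returns the measured value exactly (e.g. `throughput_at(30.0)` is 1000.0); between points it follows the log-log line.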
Each metric is shown at three interactivity (tok/s/user) operating points; the target values themselves are set via the editable controls and are not captured in this snapshot.

Metric                  B200      H100      B200      H100      B200      H100
Throughput (tok/s/gpu)  5078.3    274.3     1029.8    81.8      528.1     23.3
Cost ($/M tok)          $0.108    $1.328    $0.532    $4.420    $1.043    $15.632
tok/s/MW                2340217   158541    474582    47281     243386    13494
Concurrency             ~1746     ~624      ~62       ~62       ~201      ~8
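The derived metrics in the table follow from per-GPU throughput in a standard way, though the page does not publish the prices or power figures it uses. A sketch under stated assumptions: `gpu_price_per_hour` and `system_watts_per_gpu` below are hypothetical inputs, and the formulas are the common definitions, not necessarily the page's exact methodology.

```python
def cost_per_million_tokens(throughput_tok_s: float,
                            gpu_price_per_hour: float) -> float:
    """$/M tok: hourly GPU price divided by tokens produced per hour,
    scaled to one million tokens."""
    tokens_per_hour = throughput_tok_s * 3600.0
    return gpu_price_per_hour / tokens_per_hour * 1_000_000

def tokens_per_second_per_mw(throughput_tok_s: float,
                             system_watts_per_gpu: float) -> float:
    """tok/s/MW: per-GPU throughput divided by the per-GPU share of
    full-system power (GPU plus host/networking overhead), in megawatts."""
    return throughput_tok_s / (system_watts_per_gpu / 1_000_000)
```

For example, a GPU sustaining 1000 tok/s at an assumed $3.60/hour costs $1.00 per million tokens, since it produces 3.6M tokens per hour.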

Inference Performance

Inference performance metrics across different models, hardware configurations, and serving parameters.