# GPU Comparison: B300 vs H200
Head-to-head AI inference benchmark comparison of the NVIDIA B300 (Blackwell) and the NVIDIA H200 (Hopper): latency, throughput, and cost across LLM workloads.
Figures are interpolated from measured benchmark data; each column compares the two GPUs at a different target interactivity (tok/s/user) operating point.
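The page does not show its interpolation method, but comparing GPUs "at an operating point" typically means interpolating each GPU's measured (interactivity, throughput) curve at a common target. A minimal sketch of that idea, assuming simple linear interpolation and with illustrative, made-up measurement points (the real benchmark values are not reproduced here):

```python
import numpy as np

# Hypothetical measured operating points for one GPU/model/precision combo:
# (interactivity in tok/s/user, per-GPU throughput in tok/s).
# These numbers are illustrative only, not the benchmark's data.
measured_interactivity = np.array([10.0, 20.0, 50.0, 100.0])
measured_throughput = np.array([8000.0, 5900.0, 2300.0, 1200.0])

def throughput_at(target_tok_s_user: float) -> float:
    """Linearly interpolate per-GPU throughput at a target interactivity.

    np.interp requires the x-coordinates (interactivity) to be increasing,
    which they are here; throughput falls as per-user speed rises.
    """
    return float(np.interp(target_tok_s_user,
                           measured_interactivity,
                           measured_throughput))

# Midway between the 20 and 50 tok/s/user samples:
print(throughput_at(35.0))  # → 4100.0
```

Interpolating both GPUs at the same target interactivity is what makes the per-column comparisons in the table below apples-to-apples.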
| Metric | Interactivity (tok/s/user) | Interactivity (tok/s/user) | Interactivity (tok/s/user) |
|---|---|---|---|
| Throughput (tok/s/GPU) | B300: 5900.6 / H200: 1652.8 | B300: 2308.2 / H200: 635.1 | B300: 1191.8 / H200: 292.1 |
| Cost ($/M tok) | B300: $0.111 / H200: $0.236 | B300: $0.275 / H200: $0.593 | B300: $0.537 / H200: $1.329 |
| tok/s/MW | B300: 2,719,181 / H200: 955,367 | B300: 1,063,708 / H200: 367,094 | B300: 549,226 / H200: 168,837 |
| Concurrency | B300: ~301 / H200: ~82 | B300: ~55 / H200: ~23 | B300: ~32 / H200: ~21 |
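The table's secondary rows follow arithmetically from per-GPU throughput once you fix a GPU hourly price, a power draw, and the target interactivity: cost per million tokens spreads the hourly price over tokens generated per hour, tok/s/MW divides throughput by power in megawatts, and concurrency is roughly throughput divided by per-user speed. A hedged sketch of those relationships; the price, power, and interactivity inputs below are hypothetical (the page does not state its assumptions), chosen only so the outputs land near the B300 first-column figures for illustration:

```python
def derived_metrics(throughput_tok_s: float,
                    gpu_hourly_usd: float,
                    gpu_power_w: float,
                    interactivity_tok_s_user: float) -> dict:
    """Derive the table's secondary metrics from per-GPU throughput."""
    tokens_per_hour = throughput_tok_s * 3600.0
    return {
        # $/M tok: hourly GPU cost spread over the tokens it emits per hour
        "cost_per_m_tok": gpu_hourly_usd / tokens_per_hour * 1e6,
        # energy efficiency: tokens per second per megawatt of draw
        "tok_s_per_mw": throughput_tok_s / (gpu_power_w / 1e6),
        # concurrency: users servable if each needs the target tok/s
        "concurrency": throughput_tok_s / interactivity_tok_s_user,
    }

# Hypothetical inputs: $2.35/hr, 2170 W, ~19.6 tok/s/user target.
m = derived_metrics(5900.6, 2.35, 2170.0, 19.6)
print(round(m["cost_per_m_tok"], 3))  # ≈ 0.111
print(round(m["concurrency"]))        # ≈ 301
```

The same formulas applied to the H200 column explain why its cost and efficiency trail the B300's roughly in proportion to the throughput gap.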
## Inference Performance
Inference performance metrics across different models, hardware configurations, and serving parameters.