GPU comparison

H100 vs H200

Head-to-head AI inference benchmark comparison of the NVIDIA H100 and H200 (both Hopper-architecture GPUs): latency, throughput, and cost across LLM workloads. Use the chart controls below to switch models, sequence lengths, precisions, and metrics — the same interactions as the main inference chart.

Values are interpolated from real benchmark data. Edit the target interactivity values below to compare the two GPUs at different operating points.
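The interpolation step can be sketched as follows — a minimal example assuming the page linearly interpolates between measured operating points; the sample (interactivity, throughput) pairs here are illustrative, not the site's actual benchmark data.

```python
# Sketch: linearly interpolate per-GPU throughput at a target interactivity.
# Assumption: piecewise-linear interpolation between measured points;
# the data points below are made up for illustration.
import numpy as np

# Measured benchmark points for one GPU/model/precision combination:
# interactivity (tok/s/user) must be in ascending order for np.interp.
interactivity = np.array([10.0, 25.0, 50.0, 100.0])
throughput = np.array([1800.0, 900.0, 560.0, 150.0])  # tok/s/gpu


def throughput_at(target: float) -> float:
    """Per-GPU throughput estimated at a target interactivity level."""
    return float(np.interp(target, interactivity, throughput))


print(throughput_at(37.5))  # halfway between the 25 and 50 tok/s/user points: 730.0
```

Outside the measured range, `np.interp` clamps to the endpoint values rather than extrapolating, which matches the usual convention for interactive benchmark charts.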
Metrics at three interactivity targets (tok/s/user), set via the controls above:

Throughput (tok/s/gpu)
  Target 1: H100 739.5     H200 1778.8
  Target 2: H100 280.6     H200 889.8
  Target 3: H100 154.3     H200 565.9

Cost ($/M tok)
  Target 1: H100 $0.557    H200 $0.220
  Target 2: H100 $1.366    H200 $0.440
  Target 3: H100 $2.453    H200 $0.669

Power efficiency (tok/s/MW)
  Target 1: H100 427,454   H200 1,028,222
  Target 2: H100 162,224   H200 514,327
  Target 3: H100 89,204    H200 327,139

Concurrency
  Target 1: H100 ~132      H200 ~115
  Target 2: H100 ~41       H200 ~98
  Target 3: H100 ~17       H200 ~9
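The cost metric above follows directly from throughput once a GPU rental price is fixed. A minimal sketch of that arithmetic — assuming the chart divides an hourly GPU price by tokens produced per hour; the $/hr figure below is hypothetical, not the site's pricing:

```python
# Sketch: convert per-GPU throughput to $ per million output tokens.
# Assumption: cost = hourly GPU price / tokens generated per hour.
def cost_per_million_tokens(gpu_price_per_hour: float, tok_per_s_per_gpu: float) -> float:
    """Dollars per million tokens at a given sustained throughput."""
    tokens_per_hour = tok_per_s_per_gpu * 3600.0
    return gpu_price_per_hour / tokens_per_hour * 1e6


# e.g. a hypothetical GPU at $2.00/hr sustaining 1000 tok/s:
print(round(cost_per_million_tokens(2.00, 1000.0), 3))  # 0.556
```

This is why cost rises as the interactivity target increases: serving fewer concurrent users drives per-GPU throughput down, so each token carries a larger share of the fixed hourly price.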

Inference Performance

Inference performance metrics across different models, hardware configurations, and serving parameters.