H200 vs MI300X

Head-to-head AI inference benchmark comparison of the NVIDIA H200 (Hopper) and the AMD MI300X (CDNA 3): latency, throughput, and cost across LLM workloads. Use the chart controls below to switch models, sequence lengths, precisions, and metrics; the interactions are the same as on the main inference chart.

Values are interpolated from real benchmark data. Edit the target interactivity values below to compare the two GPUs at different operating points.
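The page's exact interpolation method isn't shown; a minimal sketch of the idea, assuming piecewise-linear interpolation between measured operating points (the sample interactivity/throughput pairs here are hypothetical, not the page's data):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of y at x; clamps outside the measured range."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical measured points: interactivity (tok/s/user) -> throughput (tok/s/GPU)
pts_x = [10, 25, 50, 100]
pts_y = [2300.0, 2000.0, 1400.0, 700.0]
print(interp(40, pts_x, pts_y))  # throughput at a 40 tok/s/user target -> 1640.0
```

Throughput typically falls as the per-user interactivity target rises (fewer requests can be batched), which is why interpolating between measured points gives a reasonable estimate at unmeasured targets.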
Each metric is reported at three target interactivity (tok/s/user) operating points; the target values are set via the controls and are not captured here.

| Metric | H200 (target 1) | MI300X (target 1) | H200 (target 2) | MI300X (target 2) | H200 (target 3) | MI300X (target 3) |
|---|---|---|---|---|---|---|
| Throughput (tok/s/GPU) | 2315.7 | 176.4 | 2315.7 | 137.3 | 1992.5 | 97.2 |
| Cost ($/M tok) | $0.169 | $1.752 | $0.169 | $2.217 | $0.194 | $3.201 |
| Power efficiency (tok/s/MW) | 1,338,561 | 98,566 | 1,338,561 | 76,686 | 1,151,755 | 54,314 |
| Concurrency | ~1024 | ~30 | ~1024 | ~18 | ~1024 | ~11 |
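The page does not state its pricing assumptions, but cost per million tokens generally follows from a GPU rental rate and sustained throughput. A sketch, with a hypothetical hourly rate (the $1.40/hr figure is an assumption, not from the benchmark):

```python
def cost_per_million_tokens(gpu_hourly_usd: float, tok_per_s_per_gpu: float) -> float:
    """$ per 1M tokens, given a GPU rental price (assumed) and sustained per-GPU throughput."""
    tokens_per_hour = tok_per_s_per_gpu * 3600
    return gpu_hourly_usd / tokens_per_hour * 1e6

# With an assumed $1.40/hr rate and the table's 2315.7 tok/s/GPU H200 throughput:
print(round(cost_per_million_tokens(1.40, 2315.7), 3))  # ~0.168, close to the table's $0.169
```

This also shows why the cost gap tracks the throughput gap: at the same hourly price, a GPU delivering ~10x fewer tok/s costs ~10x more per token.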

Inference Performance

Inference performance metrics across different models, hardware configurations, and serving parameters.