H100 vs MI325X

Head-to-head AI inference benchmark comparison of the NVIDIA H100 (Hopper) and the AMD MI325X (CDNA 3), covering latency, throughput, and cost across LLM workloads. Use the chart controls below to switch models, sequence lengths, precisions, and metrics; the interactions are the same as on the main inference chart.

Values are interpolated from measured benchmark data. Edit the target interactivity values below to compare the two GPUs at different operating points.
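Interpolation between measured operating points can be sketched with plain piecewise-linear interpolation. The interactivity targets below (10, 25, 50 tok/s/user) are hypothetical placeholders, since the page leaves the targets user-editable; the throughput values are the H100 column from the table.

```python
def interpolate(x, xs, ys):
    """Piecewise-linear interpolation of y at x.

    xs must be sorted ascending; values outside the measured
    range are clamped to the nearest endpoint.
    """
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical interactivity targets (tok/s/user) paired with
# the H100 throughput values (tok/s/gpu) from the table.
targets = [10, 25, 50]
h100_throughput = [293.2, 284.5, 277.9]

print(interpolate(17.5, targets, h100_throughput))  # midway between 293.2 and 284.5
```

This is only a sketch of the interpolation idea, not the site's actual fitting method, which may use a different curve through the measured points.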
| Metric | Interactivity (tok/s/user) | Interactivity (tok/s/user) | Interactivity (tok/s/user) |
|---|---|---|---|
| Throughput (tok/s/gpu) | H100: 293.2 / MI325X: 258.8 | H100: 284.5 / MI325X: 167.1 | H100: 277.9 / MI325X: 93.9 |
| Cost ($/M tok) | H100: $1.242 / MI325X: $1.338 | H100: $1.282 / MI325X: $2.127 | H100: $1.312 / MI325X: $3.664 |
| tok/s/MW | H100: 169,463 / MI325X: 118,710 | H100: 164,434 / MI325X: 76,630 | H100: 160,636 / MI325X: 43,092 |
| Concurrency | H100: ~781 / MI325X: ~41 | H100: ~708 / MI325X: ~22 | H100: ~654 / MI325X: ~11 |
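The cost and efficiency metrics above follow directly from throughput once a GPU-hour price and system power draw are fixed. A minimal sketch of both conversions is below; the hourly price (~$1.31 for H100) and per-GPU system power (~1,730 W for H100) are back-derived from the table's own numbers for illustration, not published pricing or TDP figures.

```python
def cost_per_million_tokens(gpu_hourly_price, throughput_tok_s):
    """$/M tok: hourly GPU cost divided by tokens produced per hour."""
    tokens_per_hour = throughput_tok_s * 3600
    return gpu_hourly_price * 1e6 / tokens_per_hour

def tokens_per_sec_per_mw(throughput_tok_s, system_power_w):
    """tok/s/MW: per-GPU throughput scaled to a megawatt of system power."""
    return throughput_tok_s / (system_power_w / 1e6)

# Assumed values back-derived from the H100 column of the table:
# ~$1.31/GPU-hour and ~1,730 W of system power per GPU.
print(cost_per_million_tokens(1.31, 293.2))   # close to the table's $1.242/M tok
print(tokens_per_sec_per_mw(293.2, 1730))     # close to the table's 169,463 tok/s/MW
```

Applying the same back-derivation to the MI325X column implies roughly $1.25/GPU-hour and ~2,180 W per GPU, which is why its cost per million tokens rises so steeply as throughput falls at higher interactivity targets.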

Inference Performance

Inference performance metrics across different models, hardware configurations, and serving parameters.