# H100 SXM vs Instinct MI300X
Side-by-side comparison of the NVIDIA H100 SXM and the AMD Instinct MI300X for AI inference workloads.
## Specifications
| Spec | H100 SXM | Instinct MI300X |
|---|---|---|
| Generation | Hopper | CDNA 3 |
| Memory Type | HBM3 | HBM3 |
| VRAM | 80 GB | 192 GB |
| Memory Bandwidth | 3,350 GB/s | 5,300 GB/s |
| BF16 TFLOPS | 990 | 1,307 |
| FP16 TFLOPS | 990 | 1,307 |
| FP8 TFLOPS | 1,979 | 2,614 |
| INT8 TOPS | 1,979 | 2,614 |
| TDP | 700 W | 750 W |
| Interconnect | NVLink | Infinity Fabric |
| NVLink Bandwidth | 900 GB/s | N/A |
| Max GPUs per Node | 8 | 8 |
| PCIe Gen | Gen 5 | Gen 5 |
| CUDA Compute Capability | 9.0 | N/A |
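Memory bandwidth is the headline number for inference because single-batch LLM decoding is typically bandwidth-bound: every generated token streams the full weight set from HBM at least once. A minimal sketch of that ceiling, using the bandwidth figures from the table (the 8B-parameter example model and the weights-only simplification are illustrative assumptions, not part of the source):

```python
# Rough upper bound on single-batch decode throughput for a
# memory-bandwidth-bound LLM: tokens/s <= bandwidth / weight bytes.
def decode_tokens_per_sec(mem_bw_gbs: float, params_b: float,
                          bytes_per_param: int = 2) -> float:
    weight_gb = params_b * bytes_per_param  # FP16/BF16 = 2 bytes/param
    return mem_bw_gbs / weight_gb

# Illustrative: an 8B-parameter model in FP16 (16 GB of weights).
# Real throughput is lower (KV cache traffic, kernel efficiency).
print(f"H100 SXM ceiling:  ~{decode_tokens_per_sec(3350, 8):.0f} tok/s")
print(f"MI300X ceiling:    ~{decode_tokens_per_sec(5300, 8):.0f} tok/s")
```

On this simplified model, the MI300X's bandwidth advantage translates directly into a proportionally higher decode ceiling.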
## Pricing
### H100 SXM
| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| RunPod | $4.18/hr | - | $3.29/hr |
| Lambda | $2.49/hr | $1.89/hr | - |
| CoreWeave | $3.79/hr | $2.57/hr | - |
| AWS | $5.12/hr | $3.59/hr | - |
| GCP | $4.85/hr | $3.40/hr | - |
| Azure | $4.98/hr | $3.49/hr | - |
| Vast.ai | $3.40/hr | - | $2.50/hr |
| TensorDock | $3.29/hr | - | $2.49/hr |
| FluidStack | $2.85/hr | - | $2.10/hr |
### Instinct MI300X
| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| RunPod | $3.49/hr | - | $2.59/hr |
| Lambda | $2.49/hr | - | - |
| CoreWeave | $3.39/hr | $2.49/hr | - |
| Vast.ai | $2.79/hr | - | $1.99/hr |
| TensorDock | $2.69/hr | - | $1.99/hr |
| FluidStack | $2.39/hr | - | $1.79/hr |
Cheapest available rate: $1.89/hr for the H100 SXM vs $1.79/hr for the Instinct MI300X, making the Instinct MI300X about 5% cheaper.
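The cheapest-rate comparison can be reproduced directly from the pricing tables above, taking the minimum over every listed tier (rates copied from the tables; "-" entries omitted):

```python
# All listed $/hr rates per GPU, across providers and tiers
# (on-demand, reserved, spot), with '-' entries omitted.
h100_rates = [4.18, 3.29, 2.49, 1.89, 3.79, 2.57, 5.12, 3.59,
              4.85, 3.40, 4.98, 3.49, 3.40, 2.50, 3.29, 2.49,
              2.85, 2.10]
mi300x_rates = [3.49, 2.59, 2.49, 3.39, 2.49, 2.79, 1.99,
                2.69, 1.99, 2.39, 1.79]

h100_min, mi300x_min = min(h100_rates), min(mi300x_rates)
savings_pct = (h100_min - mi300x_min) / h100_min * 100
print(f"H100 SXM cheapest:  ${h100_min:.2f}/hr")
print(f"MI300X cheapest:    ${mi300x_min:.2f}/hr ({savings_pct:.0f}% cheaper)")
```

Note that the H100's floor comes from a reserved rate (Lambda) while the MI300X's comes from a spot rate (FluidStack), so the two minimums carry different availability guarantees.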
## Efficiency Metrics

| Metric | H100 SXM | Instinct MI300X |
|---|---|---|
| TFLOPS / Watt (BF16) | 1.4 | 1.7 |
| VRAM / Dollar (GB/$/hr) | 42.3 | 107.3 |
| Bandwidth / Watt (GB/s/W) | 4.8 | 7.1 |
| Runnable Models (FP16, 1 GPU) | 182 | 221 |
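These efficiency figures follow directly from the spec and pricing tables; VRAM / Dollar uses each GPU's cheapest listed rate ($1.89/hr and $1.79/hr):

```python
# Recompute the efficiency metrics from the spec sheet values.
specs = {
    "H100 SXM":        {"bf16_tflops": 990,  "vram_gb": 80,
                        "bw_gbs": 3350, "tdp_w": 700, "rate": 1.89},
    "Instinct MI300X": {"bf16_tflops": 1307, "vram_gb": 192,
                        "bw_gbs": 5300, "tdp_w": 750, "rate": 1.79},
}
for name, s in specs.items():
    print(f"{name}:")
    print(f"  TFLOPS/W (BF16):  {s['bf16_tflops'] / s['tdp_w']:.1f}")
    print(f"  VRAM/$ (GB/$/hr): {s['vram_gb'] / s['rate']:.1f}")
    print(f"  BW/W (GB/s/W):    {s['bw_gbs'] / s['tdp_w']:.1f}")
```

The MI300X leads on all three ratios, with the VRAM-per-dollar gap (roughly 2.5x) being the largest.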
## Model Compatibility (FP16, Single GPU)
### Only on H100 SXM (0)

None

### Both (182)
- Yi 1.5 34B
- Yi 1.5 9B
- Yi Coder 9B
- GTE Qwen2 7B
- Marco O1
- Qwen 1.5 MoE A2.7B
- Qwen 2 Audio 7B
- Qwen 2.5 14B
- Qwen 2.5 32B
- Qwen 2.5 3B
- Qwen 2.5 Coder 32B
- OLMo 2 13B
- OLMo 2 7B
- Amazon Nova Lite
- OpenELM 3B
- BGE Large EN v1.5
- BGE M3
- Baichuan 2 13B
- OctoCoder 15B
- StarCoder2 15B
- +162 more
### Only on Instinct MI300X (39)
- Jamba 1.5 Mini
- Amazon Nova Pro
- Code Llama 70B
- Dolphin 2.9 72B
- DeepSeek R1 Distill 70B
- Falcon 40B
- Llama 3 70B 1M Context
- Llama 2 70B
- Llama 3 70B
- Llama 3.1 70B
- Llama 3.3 70B
- WizardMath 70B
- Mixtral 8x7B
- Hermes 3 70B
- HelpSteer2 Llama 3.1 70B
- Llama 3.1 Nemotron 51B
- Llama 3.1 Nemotron 70B Instruct
- Llama 3.1 Nemotron 70B Reward
- Nemotron 70B
- VILA 1.5 40B
- +19 more
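The pattern in the MI300X-only column is no accident: it is dominated by 70B-class models whose FP16 weights alone exceed 80 GB but fit comfortably in 192 GB. A minimal weight-memory check (weights-only at 2 bytes per parameter; real deployments also need headroom for KV cache and activations):

```python
# FP16 weight footprint vs available VRAM. Weights-only check:
# 2 bytes per parameter; KV cache and activations need extra room.
def fits_fp16(params_b: float, vram_gb: float) -> bool:
    return params_b * 2 <= vram_gb

print(fits_fp16(34, 80))    # 34B model, 68 GB of weights: fits the H100
print(fits_fp16(70, 80))    # 70B model, 140 GB of weights: exceeds 80 GB
print(fits_fp16(70, 192))   # ...but fits in the MI300X's 192 GB
```

This is why every Llama/Nemotron/Hermes 70B variant lands in the MI300X-only list, while 34B-and-under models appear under "Both".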
## Summary
The H100 SXM (Hopper generation) offers 80 GB of HBM3 with 990 BF16 TFLOPS and 3,350 GB/s of memory bandwidth at a 700 W TDP.
The Instinct MI300X (CDNA 3 generation) offers 192 GB of HBM3 with 1,307 BF16 TFLOPS and 5,300 GB/s of memory bandwidth at a 750 W TDP.
The Instinct MI300X has 140% more VRAM (192 GB vs 80 GB), allowing it to run larger models without multi-GPU setups.
From a cost perspective, the Instinct MI300X is also more affordable at its cheapest listed rate: $1.79/hr vs $1.89/hr for the H100 SXM.