H100 SXM vs Instinct MI300X

Side-by-side comparison of the NVIDIA H100 SXM and the AMD Instinct MI300X for AI inference workloads.

Specifications

| Spec | H100 SXM | Instinct MI300X |
| --- | --- | --- |
| Generation | Hopper | CDNA 3 |
| Memory Type | HBM3 | HBM3 |
| VRAM | 80 GB | 192 GB |
| Memory Bandwidth | 3,350 GB/s | 5,300 GB/s |
| BF16 TFLOPS | 990 | 1,307 |
| FP16 TFLOPS | 990 | 1,307 |
| FP8 TFLOPS | 1,979 | 2,614 |
| INT8 TOPS | 1,979 | 2,614 |
| TDP | 700 W | 750 W |
| Interconnect | NVLink | Infinity Fabric |
| NVLink Bandwidth | 900 GB/s | N/A |
| Max GPUs per Node | 8 | 8 |
| PCIe Gen | Gen 5 | Gen 5 |
| CUDA Compute Capability | 9.0 | N/A |
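
For LLM decoding, the memory-bandwidth figures above tend to matter more than peak TFLOPS: generating each token streams the full weight set through memory, so bandwidth sets a hard ceiling on batch-1 throughput. A minimal back-of-the-envelope sketch (the 34B model size and FP16 precision are illustrative assumptions, not benchmarks):

```python
# Bandwidth-bound ceiling on batch-1 decode speed: every generated token
# must read all weights once, so tokens/s <= bandwidth / model_bytes.
# Real throughput also depends on kernels, KV-cache reads, and batching.

BANDWIDTH_GBPS = {"H100 SXM": 3350, "Instinct MI300X": 5300}

def decode_ceiling(params_b: float, bytes_per_param: float, bw_gbps: float) -> float:
    model_gb = params_b * bytes_per_param  # billions of params -> GB
    return bw_gbps / model_gb              # tokens per second

for gpu, bw in BANDWIDTH_GBPS.items():
    # Hypothetical 34B model at FP16 (2 bytes/param) -> 68 GB of weights.
    print(f"{gpu}: ~{decode_ceiling(34, 2, bw):.0f} tokens/s upper bound")
```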

Pricing

H100 SXM

| Provider | On-Demand | Reserved | Spot |
| --- | --- | --- | --- |
| RunPod | $4.18/hr | - | $3.29/hr |
| Lambda | $2.49/hr | $1.89/hr | - |
| CoreWeave | $3.79/hr | $2.57/hr | - |
| AWS | $5.12/hr | $3.59/hr | - |
| GCP | $4.85/hr | $3.40/hr | - |
| Azure | $4.98/hr | $3.49/hr | - |
| Vast.ai | $3.40/hr | - | $2.50/hr |
| TensorDock | $3.29/hr | - | $2.49/hr |
| FluidStack | $2.85/hr | - | $2.10/hr |

Instinct MI300X

| Provider | On-Demand | Reserved | Spot |
| --- | --- | --- | --- |
| RunPod | $3.49/hr | - | $2.59/hr |
| Lambda | $2.49/hr | - | - |
| CoreWeave | $3.39/hr | $2.49/hr | - |
| Vast.ai | $2.79/hr | - | $1.99/hr |
| TensorDock | $2.69/hr | - | $1.99/hr |
| FluidStack | $2.39/hr | - | $1.79/hr |

Cheapest available rate: $1.89/hr for the H100 SXM vs $1.79/hr for the Instinct MI300X, making the Instinct MI300X about 5% cheaper.
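
A quick sketch of how that headline number falls out of the tables above (rates as listed; `None` marks tiers a provider doesn't offer):

```python
# Lowest listed rate per GPU across (on-demand, reserved, spot) tiers,
# copied from the pricing tables above.
H100_SXM = {
    "RunPod": (4.18, None, 3.29), "Lambda": (2.49, 1.89, None),
    "CoreWeave": (3.79, 2.57, None), "AWS": (5.12, 3.59, None),
    "GCP": (4.85, 3.40, None), "Azure": (4.98, 3.49, None),
    "Vast.ai": (3.40, None, 2.50), "TensorDock": (3.29, None, 2.49),
    "FluidStack": (2.85, None, 2.10),
}
MI300X = {
    "RunPod": (3.49, None, 2.59), "Lambda": (2.49, None, None),
    "CoreWeave": (3.39, 2.49, None), "Vast.ai": (2.79, None, 1.99),
    "TensorDock": (2.69, None, 1.99), "FluidStack": (2.39, None, 1.79),
}

def cheapest(pricing: dict) -> float:
    return min(r for tiers in pricing.values() for r in tiers if r is not None)

h100, mi300x = cheapest(H100_SXM), cheapest(MI300X)
print(f"H100 SXM ${h100:.2f}/hr vs MI300X ${mi300x:.2f}/hr: "
      f"MI300X is {(h100 - mi300x) / h100:.1%} cheaper")
```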

Efficiency Metrics

| Metric | H100 SXM | Instinct MI300X |
| --- | --- | --- |
| TFLOPS per Watt (BF16) | 1.4 | 1.7 |
| VRAM per Dollar (GB per $/hr) | 42.3 | 107.3 |
| Bandwidth per Watt (GB/s per W) | 4.8 | 7.1 |
| Models supported (FP16, 1 GPU) | 182 | 221 |
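
These ratios follow directly from the spec and pricing numbers above; a short sketch reproducing them (using each GPU's cheapest listed rate):

```python
# Reproduce the efficiency metrics from the raw spec and pricing figures.
SPECS = {
    # gpu: (BF16 TFLOPS, VRAM GB, bandwidth GB/s, TDP W, cheapest $/hr)
    "H100 SXM":        (990,  80,  3350, 700, 1.89),
    "Instinct MI300X": (1307, 192, 5300, 750, 1.79),
}

for gpu, (tflops, vram, bw, tdp, price) in SPECS.items():
    print(f"{gpu}: {tflops / tdp:.1f} TFLOPS/W, "
          f"{vram / price:.1f} GB per $/hr, "
          f"{bw / tdp:.1f} GB/s per W")
```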

Model Compatibility (FP16, Single GPU)

Only on H100 SXM (0)

None

Both (182)

  • Yi 1.5 34B
  • Yi 1.5 9B
  • Yi Coder 9B
  • GTE Qwen2 7B
  • Marco O1
  • Qwen 1.5 MoE A2.7B
  • Qwen 2 Audio 7B
  • Qwen 2.5 14B
  • Qwen 2.5 32B
  • Qwen 2.5 3B
  • Qwen 2.5 Coder 32B
  • OLMo 2 13B
  • OLMo 2 7B
  • Amazon Nova Lite
  • OpenELM 3B
  • BGE Large EN v1.5
  • BGE M3
  • Baichuan 2 13B
  • OctoCoder 15B
  • StarCoder2 15B
  • +162 more

Only on Instinct MI300X (39)

  • Jamba 1.5 Mini
  • Amazon Nova Pro
  • Code Llama 70B
  • Dolphin 2.9 72B
  • DeepSeek R1 Distill 70B
  • Falcon 40B
  • Llama 3 70B 1M Context
  • Llama 2 70B
  • Llama 3 70B
  • Llama 3.1 70B
  • Llama 3.3 70B
  • WizardMath 70B
  • Mixtral 8x7B
  • Hermes 3 70B
  • HelpSteer2 Llama 3.1 70B
  • Llama 3.1 Nemotron 51B
  • Llama 3.1 Nemotron 70B Instruct
  • Llama 3.1 Nemotron 70B Reward
  • Nemotron 70B
  • VILA 1.5 40B
  • +19 more

Summary

The H100 SXM (Hopper generation) offers 80 GB of HBM3 with 990 BF16 TFLOPS and 3,350 GB/s of memory bandwidth at a 700 W TDP.

The Instinct MI300X (CDNA 3 generation) offers 192 GB of HBM3 with 1,307 BF16 TFLOPS and 5,300 GB/s of memory bandwidth at a 750 W TDP.

The Instinct MI300X has 140% more VRAM (192 GB vs 80 GB), allowing it to run larger models without multi-GPU setups.
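
At FP16 (2 bytes per parameter), a 70B-parameter model needs roughly 140 GB for weights alone, which is why the Llama 70B family shows up only in the MI300X column above. A minimal fit check; the 10% headroom factor for KV cache and activations is an illustrative assumption:

```python
def fits_fp16(params_b: float, vram_gb: int, overhead: float = 1.10) -> bool:
    """Rough single-GPU fit test: 2 bytes/param at FP16, plus assumed
    headroom for KV cache and activations."""
    return params_b * 2 * overhead <= vram_gb

for params_b in (34, 70):
    print(f"{params_b}B: H100 SXM (80 GB) -> {fits_fp16(params_b, 80)}, "
          f"MI300X (192 GB) -> {fits_fp16(params_b, 192)}")
```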

From a cost perspective, the Instinct MI300X is the more affordable option at its cheapest available rate: $1.79/hr vs $1.89/hr for the H100 SXM.
