A100 80GB SXM vs Instinct MI300X

Side-by-side comparison of the NVIDIA A100 80GB SXM and the AMD Instinct MI300X for AI inference workloads.

Specifications

| Spec | A100 80GB SXM | Instinct MI300X |
|---|---|---|
| Generation | Ampere | CDNA 3 |
| Memory Type | HBM2e | HBM3 |
| VRAM | 80 GB | 192 GB |
| Memory Bandwidth | 2,039 GB/s | 5,300 GB/s |
| BF16 TFLOPS | 312 | 1,307 |
| FP16 TFLOPS | 312 | 1,307 |
| FP8 TFLOPS | 312 | 2,614 |
| INT8 TOPS | 624 | 2,614 |
| TDP | 400 W | 750 W |
| Interconnect | NVLink | Infinity Fabric |
| NVLink Bandwidth | 600 GB/s | N/A |
| Max GPUs per Node | 8 | 8 |
| PCIe Gen | Gen 4 | Gen 5 |
| CUDA Compute Capability | 8.0 | N/A |
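For inference, memory bandwidth often matters more than raw TFLOPS: single-stream decoding of a dense model is roughly bandwidth-bound, since every generated token streams all weights from HBM once. A back-of-the-envelope sketch using the table above (this simple bound ignores KV-cache traffic, batching, and kernel efficiency, so real throughput will be lower):

```python
# Rough upper bound on single-GPU, batch-1 decode throughput for a dense
# FP16 model: tokens/s <= memory bandwidth / model size in bytes.

def decode_tokens_per_sec(bandwidth_gbs: float, params_b: float,
                          bytes_per_param: int = 2) -> float:
    model_gb = params_b * bytes_per_param  # FP16 = 2 bytes per parameter
    return bandwidth_gbs / model_gb

# Bandwidth figures from the spec table, for a hypothetical 13B model
print(round(decode_tokens_per_sec(2039, 13), 1))  # A100 80GB SXM -> 78.4
print(round(decode_tokens_per_sec(5300, 13), 1))  # Instinct MI300X -> 203.8
```

By this bound, the MI300X's 2.6x bandwidth advantage translates directly into a ~2.6x ceiling on decode throughput for the same model.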

Pricing

A100 80GB SXM

| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| RunPod | $2.72/hr | - | $2.09/hr |
| Lambda | $1.99/hr | $1.49/hr | - |
| CoreWeave | $2.21/hr | $1.62/hr | - |
| AWS | $3.67/hr | $2.39/hr | - |
| GCP | $3.67/hr | $2.48/hr | - |
| Azure | $3.67/hr | $2.45/hr | - |
| Vast.ai | $1.80/hr | - | $1.30/hr |
| TensorDock | $1.79/hr | - | $1.29/hr |
| FluidStack | $1.69/hr | - | $1.19/hr |

Instinct MI300X

| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| RunPod | $3.49/hr | - | $2.59/hr |
| Lambda | $2.49/hr | - | - |
| CoreWeave | $3.39/hr | $2.49/hr | - |
| Vast.ai | $2.79/hr | - | $1.99/hr |
| TensorDock | $2.69/hr | - | $1.99/hr |
| FluidStack | $2.39/hr | - | $1.79/hr |

Cheapest available rate: A100 80GB SXM at $1.19/hr vs Instinct MI300X at $1.79/hr. The A100 80GB SXM is about 34% cheaper (equivalently, the MI300X costs about 50% more per hour).
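The two percentages above are not interchangeable: the percent difference depends on which price is the baseline. A quick check with the cheapest rates from the tables:

```python
# Cheapest rates from the pricing tables above
a100, mi300x = 1.19, 1.79

# Same $0.60/hr gap, two different baselines:
savings = (mi300x - a100) / mi300x * 100  # how much cheaper the A100 is
premium = (mi300x - a100) / a100 * 100    # how much more the MI300X costs

print(round(savings, 1))  # -> 33.5 (A100 is ~34% cheaper)
print(round(premium, 1))  # -> 50.4 (MI300X costs ~50% more)
```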

Efficiency Metrics

| Metric | A100 80GB SXM | Instinct MI300X |
|---|---|---|
| TFLOPS / Watt (BF16) | 0.8 | 1.7 |
| VRAM / Dollar (GB per $/hr) | 67.2 | 107.3 |
| Bandwidth / Watt (GB/s per W) | 5.1 | 7.1 |
| Models supported (FP16, single GPU) | 182 | 221 |
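These ratios follow directly from the spec and pricing tables above; a short sketch recomputing them (dollar figures use each card's cheapest listed rate):

```python
# Derived efficiency metrics from the spec table and cheapest hourly rates
specs = {
    "A100 80GB SXM":   {"bf16_tflops": 312,  "tdp_w": 400,
                        "vram_gb": 80,  "bw_gbs": 2039, "cheapest_hr": 1.19},
    "Instinct MI300X": {"bf16_tflops": 1307, "tdp_w": 750,
                        "vram_gb": 192, "bw_gbs": 5300, "cheapest_hr": 1.79},
}

for name, s in specs.items():
    tflops_per_watt = round(s["bf16_tflops"] / s["tdp_w"], 1)
    gb_per_dollar   = round(s["vram_gb"] / s["cheapest_hr"], 1)
    bw_per_watt     = round(s["bw_gbs"] / s["tdp_w"], 1)
    print(name, tflops_per_watt, gb_per_dollar, bw_per_watt)
```

Despite its much higher 750 W TDP, the MI300X comes out ahead on every per-watt and per-dollar ratio because its raw specs scale faster than its power draw and price.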

Model Compatibility (FP16, Single GPU)

Only on A100 80GB SXM (0)

None

Both (182)

  • Yi 1.5 34B
  • Yi 1.5 9B
  • Yi Coder 9B
  • GTE Qwen2 7B
  • Marco O1
  • Qwen 1.5 MoE A2.7B
  • Qwen 2 Audio 7B
  • Qwen 2.5 14B
  • Qwen 2.5 32B
  • Qwen 2.5 3B
  • Qwen 2.5 Coder 32B
  • OLMo 2 13B
  • OLMo 2 7B
  • Amazon Nova Lite
  • OpenELM 3B
  • BGE Large EN v1.5
  • BGE M3
  • Baichuan 2 13B
  • OctoCoder 15B
  • StarCoder2 15B
  • +162 more

Only on Instinct MI300X (39)

  • Jamba 1.5 Mini
  • Amazon Nova Pro
  • Code Llama 70B
  • Dolphin 2.9 72B
  • DeepSeek R1 Distill 70B
  • Falcon 40B
  • Llama 3 70B 1M Context
  • Llama 2 70B
  • Llama 3 70B
  • Llama 3.1 70B
  • Llama 3.3 70B
  • WizardMath 70B
  • Mixtral 8x7B
  • Hermes 3 70B
  • HelpSteer2 Llama 3.1 70B
  • Llama 3.1 Nemotron 51B
  • Llama 3.1 Nemotron 70B Instruct
  • Llama 3.1 Nemotron 70B Reward
  • Nemotron 70B
  • VILA 1.5 40B
  • +19 more
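The split between the two lists follows from VRAM: at FP16 a dense model needs roughly 2 bytes per parameter, so 70B-class models (~140 GB of weights) exceed the A100's 80 GB but fit within the MI300X's 192 GB. A rough fit check, using an assumed ~10% headroom factor for KV cache and activations (this threshold is an illustration, not the methodology behind the lists above):

```python
def fits_fp16(params_b: float, vram_gb: float, overhead: float = 1.1) -> bool:
    """Rough single-GPU fit check: 2 bytes/param at FP16, plus ~10%
    headroom for KV cache and activations (the factor is an assumption)."""
    return params_b * 2 * overhead <= vram_gb

print(fits_fp16(70, 80))   # Llama 3 70B on A100 80GB  -> False
print(fits_fp16(70, 192))  # Llama 3 70B on MI300X     -> True
print(fits_fp16(34, 80))   # Yi 1.5 34B on A100 80GB   -> True
```

This is why every MI300X-only entry above is a 40B-to-72B-class model: they need more than 80 GB at FP16 but less than 192 GB.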

Summary

The A100 80GB SXM (Ampere generation) offers 80 GB of HBM2e with 312 BF16 TFLOPS and 2,039 GB/s of memory bandwidth at a 400 W TDP.

The Instinct MI300X (CDNA 3 generation) offers 192 GB of HBM3 with 1,307 BF16 TFLOPS and 5,300 GB/s of memory bandwidth at a 750 W TDP.

The Instinct MI300X has 140% more VRAM (192 GB vs 80 GB), allowing it to run larger models, including 70B-class models at FP16, without a multi-GPU setup.

From a cost perspective, the A100 80GB SXM is more affordable, with a cheapest available rate of $1.19/hr vs $1.79/hr for the Instinct MI300X.
