
H200 SXM vs A100 80GB SXM

Side-by-side comparison of the NVIDIA H200 SXM and the NVIDIA A100 80GB SXM for AI inference workloads.

Specifications

| Spec | H200 SXM | A100 80GB SXM |
|---|---|---|
| Generation | Hopper | Ampere |
| Memory Type | HBM3e | HBM2e |
| VRAM | 141 GB | 80 GB |
| Memory Bandwidth | 4,800 GB/s | 2,039 GB/s |
| BF16 TFLOPS | 990 | 312 |
| FP16 TFLOPS | 990 | 312 |
| FP8 TFLOPS | 1,979 | — (no FP8 support) |
| INT8 TOPS | 1,979 | 624 |
| TDP | 700 W | 400 W |
| Interconnect | NVLink | NVLink |
| NVLink Bandwidth | 900 GB/s | 600 GB/s |
| Max GPUs per Node | 8 | 8 |
| PCIe Gen | Gen 5 | Gen 4 |
| CUDA Compute Capability | 9.0 | 8.0 |
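A quick way to apply these numbers: FP16 weights take 2 bytes per parameter, so a model's weight footprint is roughly 2 GB per billion parameters. A minimal sketch of a fit check (the function name and the 20% margin for KV cache and activations are illustrative assumptions, not vendor figures):

```python
def fits_in_vram(params_billion: float, vram_gb: float,
                 bytes_per_param: float = 2.0, overhead: float = 0.2) -> bool:
    """Rough check: do FP16 weights plus a margin for KV cache/activations fit?"""
    weights_gb = params_billion * bytes_per_param  # 2 GB per billion params at FP16
    return weights_gb * (1 + overhead) <= vram_gb

# A 40B model carries ~80 GB of FP16 weights before any runtime overhead
print(fits_in_vram(40, 141))  # H200 SXM: fits with margin to spare
print(fits_in_vram(40, 80))   # A100 80GB SXM: weights alone saturate VRAM
```

This matches the compatibility lists further down, where Falcon 40B appears only under the H200 SXM.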

Pricing

H200 SXM

| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| Lambda | $3.49/hr | $2.69/hr | - |
| CoreWeave | $4.25/hr | $3.19/hr | - |
| RunPod | $4.69/hr | - | - |
| TensorDock | $3.80/hr | - | $2.90/hr |

A100 80GB SXM

| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| RunPod | $2.72/hr | - | $2.09/hr |
| Lambda | $1.99/hr | $1.49/hr | - |
| CoreWeave | $2.21/hr | $1.62/hr | - |
| AWS | $3.67/hr | $2.39/hr | - |
| GCP | $3.67/hr | $2.48/hr | - |
| Azure | $3.67/hr | $2.45/hr | - |
| Vast.ai | $1.80/hr | - | $1.30/hr |
| TensorDock | $1.79/hr | - | $1.29/hr |
| FluidStack | $1.69/hr | - | $1.19/hr |

Cheapest available rate: H200 SXM at $2.69/hr vs A100 80GB SXM at $1.19/hr. The A100 80GB SXM is about 56% cheaper (the H200 SXM costs roughly 126% more per hour).
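Percentage price comparisons are asymmetric because the base changes depending on direction. A quick sanity check from the cheapest rates above (variable names are illustrative):

```python
h200 = 2.69  # cheapest H200 SXM rate ($/hr, Lambda reserved)
a100 = 1.19  # cheapest A100 80GB SXM rate ($/hr, FluidStack spot)

premium = (h200 / a100 - 1) * 100   # how much more the H200 costs, vs A100 base
discount = (1 - a100 / h200) * 100  # how much cheaper the A100 is, vs H200 base
print(f"H200 premium: {premium:.0f}%, A100 discount: {discount:.0f}%")
```

The premium (~126%) and the discount (~56%) describe the same gap; a price can never be more than 100% cheaper.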

Efficiency Metrics

| Metric | H200 SXM | A100 80GB SXM |
|---|---|---|
| TFLOPS / Watt (BF16) | 1.4 | 0.8 |
| VRAM / Dollar (GB per $/hr) | 52.4 | 67.2 |
| Bandwidth / Watt (GB/s per W) | 6.9 | 5.1 |
| Models supported (FP16, 1 GPU) | 193 | 182 |
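These ratios follow directly from the spec and pricing figures above; a short sketch that reproduces them (the dictionary layout is illustrative):

```python
specs = {
    "H200 SXM":      {"bf16_tflops": 990, "vram_gb": 141, "bw_gbs": 4800,
                      "tdp_w": 700, "price_hr": 2.69},
    "A100 80GB SXM": {"bf16_tflops": 312, "vram_gb": 80, "bw_gbs": 2039,
                      "tdp_w": 400, "price_hr": 1.19},
}

for name, s in specs.items():
    tflops_per_w = round(s["bf16_tflops"] / s["tdp_w"], 1)  # BF16 TFLOPS per watt
    gb_per_dollar = round(s["vram_gb"] / s["price_hr"], 1)  # VRAM GB per $/hr
    bw_per_w = round(s["bw_gbs"] / s["tdp_w"], 1)           # GB/s per watt
    print(name, tflops_per_w, gb_per_dollar, bw_per_w)
```

Note that VRAM/Dollar uses each card's cheapest rate, which is why the A100 wins that metric despite having less memory.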

Model Compatibility (FP16, Single GPU)

Only on H200 SXM (11)

  • Jamba 1.5 Mini
  • Amazon Nova Pro
  • Falcon 40B
  • Mixtral 8x7B
  • Llama 3.1 Nemotron 51B
  • VILA 1.5 40B
  • Gemini 2.0 Flash
  • Gemini 1.5 Flash
  • Jamba Instruct
  • Phi 3.5 MoE
  • Mixtral 8x7B Instruct

Both (182)

  • Yi 1.5 34B
  • Yi 1.5 9B
  • Yi Coder 9B
  • GTE Qwen2 7B
  • Marco O1
  • Qwen 1.5 MoE A2.7B
  • Qwen 2 Audio 7B
  • Qwen 2.5 14B
  • Qwen 2.5 32B
  • Qwen 2.5 3B
  • Qwen 2.5 Coder 32B
  • OLMo 2 13B
  • OLMo 2 7B
  • Amazon Nova Lite
  • OpenELM 3B
  • BGE Large EN v1.5
  • BGE M3
  • Baichuan 2 13B
  • OctoCoder 15B
  • StarCoder2 15B
  • +162 more

Only on A100 80GB SXM (0)

None

Summary

The H200 SXM (Hopper generation) offers 141 GB of HBM3e with 990 BF16 TFLOPS and 4,800 GB/s memory bandwidth at 700 W TDP.

The A100 80GB SXM (Ampere generation) offers 80 GB of HBM2e with 312 BF16 TFLOPS and 2,039 GB/s memory bandwidth at 400 W TDP.

The H200 SXM has 76% more VRAM (141 GB vs 80 GB), allowing it to run larger models without multi-GPU setups.

From a cost perspective, the A100 80GB SXM is more affordable at $1.19/hr (spot) vs $2.69/hr (reserved) for the H200 SXM.
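For inference, memory bandwidth often matters more than raw TFLOPS: single-stream decode is typically bandwidth-bound, since every generated token streams the full set of weights. A rough throughput ceiling under that assumption (function name is illustrative; real throughput is lower due to KV-cache reads and kernel overheads):

```python
def max_decode_tokens_per_sec(params_billion: float, bw_gbs: float,
                              bytes_per_param: float = 2.0) -> float:
    """Bandwidth-bound ceiling: assumes each token reads all FP16 weights once."""
    model_gb = params_billion * bytes_per_param
    return bw_gbs / model_gb

# A 13B model in FP16 (~26 GB of weights):
print(round(max_decode_tokens_per_sec(13, 4800)))  # H200 SXM ceiling: ~185 tok/s
print(round(max_decode_tokens_per_sec(13, 2039)))  # A100 80GB SXM ceiling: ~78 tok/s
```

By this estimate the H200's 2.35x bandwidth advantage translates directly into a 2.35x higher single-stream decode ceiling.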
