
H100 SXM vs A100 80GB SXM

Side-by-side comparison of the NVIDIA H100 SXM and the NVIDIA A100 80GB SXM for AI inference workloads.

Specifications

Spec                     | H100 SXM    | A100 80GB SXM
Generation               | Hopper      | Ampere
Memory Type              | HBM3        | HBM2e
VRAM                     | 80 GB       | 80 GB
Memory Bandwidth         | 3,350 GB/s  | 2,039 GB/s
BF16 TFLOPS              | 990         | 312
FP16 TFLOPS              | 990         | 312
FP8 TFLOPS               | 1,979       | 312
INT8 TOPS                | 1,979       | 624
TDP                      | 700 W       | 400 W
Interconnect             | NVLink      | NVLink
NVLink Bandwidth         | 900 GB/s    | 600 GB/s
Max GPUs per Node        | 8           | 8
PCIe Gen                 | Gen 5       | Gen 4
CUDA Compute Capability  | 9.0         | 8.0
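
For autoregressive decoding, single-stream throughput is usually bound by memory bandwidth rather than TFLOPS, since every generated token streams the full weight set from HBM. A back-of-the-envelope sketch (plain Python; the 13B-parameter model is a hypothetical example, and KV-cache traffic, kernel overheads, and batching are ignored):

```python
# Rough upper bound on single-stream decode speed:
#   tokens/sec <= memory_bandwidth / model_bytes
# because each generated token reads all weights from HBM.

PARAMS = 13e9          # hypothetical 13B-parameter model (assumption)
BYTES_PER_PARAM = 2    # FP16
model_bytes = PARAMS * BYTES_PER_PARAM

gpus = {
    "H100 SXM": 3350e9,        # memory bandwidth from the spec table, in bytes/s
    "A100 80GB SXM": 2039e9,
}

for name, bw in gpus.items():
    print(f"{name}: <= {bw / model_bytes:.0f} tokens/s (bandwidth-bound)")
```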

Pricing

H100 SXM

Provider    | On-Demand | Reserved  | Spot
RunPod      | $4.18/hr  | -         | $3.29/hr
Lambda      | $2.49/hr  | $1.89/hr  | -
CoreWeave   | $3.79/hr  | $2.57/hr  | -
AWS         | $5.12/hr  | $3.59/hr  | -
GCP         | $4.85/hr  | $3.40/hr  | -
Azure       | $4.98/hr  | $3.49/hr  | -
Vast.ai     | $3.40/hr  | -         | $2.50/hr
TensorDock  | $3.29/hr  | -         | $2.49/hr
FluidStack  | $2.85/hr  | -         | $2.10/hr

A100 80GB SXM

Provider    | On-Demand | Reserved  | Spot
RunPod      | $2.72/hr  | -         | $2.09/hr
Lambda      | $1.99/hr  | $1.49/hr  | -
CoreWeave   | $2.21/hr  | $1.62/hr  | -
AWS         | $3.67/hr  | $2.39/hr  | -
GCP         | $3.67/hr  | $2.48/hr  | -
Azure       | $3.67/hr  | $2.45/hr  | -
Vast.ai     | $1.80/hr  | -         | $1.30/hr
TensorDock  | $1.79/hr  | -         | $1.29/hr
FluidStack  | $1.69/hr  | -         | $1.19/hr

Cheapest available rate: H100 SXM at $1.89/hr vs A100 80GB SXM at $1.19/hr. The A100 80GB SXM is about 37% cheaper (equivalently, the H100 SXM costs about 59% more per hour).
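
The percentage depends on which price is taken as the baseline; both framings, computed from the rates above:

```python
h100, a100 = 1.89, 1.19  # cheapest listed $/hr for each GPU

print(f"A100 is {(1 - a100 / h100) * 100:.0f}% cheaper than H100")   # ~37%
print(f"H100 is {(h100 / a100 - 1) * 100:.0f}% more expensive")      # ~59%
```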

Efficiency Metrics

Metric                          | H100 SXM | A100 80GB SXM
TFLOPS / Watt (BF16)            | 1.4      | 0.8
VRAM / Dollar (GB per $/hr)     | 42.3     | 67.2
Bandwidth / Watt (GB/s per W)   | 4.8      | 5.1
Models supported (FP16, 1 GPU)  | 182      | 182
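
These ratios follow directly from the spec and pricing figures above. A minimal sketch (plain Python; it assumes the cheapest listed hourly rates, $1.89/hr for the H100 SXM and $1.19/hr for the A100 80GB SXM) that reproduces them:

```python
# Derive the efficiency metrics from the raw spec and pricing numbers.
specs = {
    "H100 SXM":      {"bf16_tflops": 990, "bw_gbs": 3350, "tdp_w": 700,
                      "vram_gb": 80, "price_hr": 1.89},
    "A100 80GB SXM": {"bf16_tflops": 312, "bw_gbs": 2039, "tdp_w": 400,
                      "vram_gb": 80, "price_hr": 1.19},
}

for name, s in specs.items():
    print(name)
    print(f"  TFLOPS/W (BF16): {s['bf16_tflops'] / s['tdp_w']:.1f}")
    print(f"  VRAM per $/hr:   {s['vram_gb'] / s['price_hr']:.1f} GB")
    print(f"  Bandwidth/W:     {s['bw_gbs'] / s['tdp_w']:.1f} GB/s")
```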

Model Compatibility (FP16, Single GPU)

Only on H100 SXM (0)

None

Both (182)

  • Yi 1.5 34B
  • Yi 1.5 9B
  • Yi Coder 9B
  • GTE Qwen2 7B
  • Marco O1
  • Qwen 1.5 MoE A2.7B
  • Qwen 2 Audio 7B
  • Qwen 2.5 14B
  • Qwen 2.5 32B
  • Qwen 2.5 3B
  • Qwen 2.5 Coder 32B
  • OLMo 2 13B
  • OLMo 2 7B
  • Amazon Nova Lite
  • OpenELM 3B
  • BGE Large EN v1.5
  • BGE M3
  • Baichuan 2 13B
  • OctoCoder 15B
  • StarCoder2 15B
  • +162 more

Only on A100 80GB SXM (0)

None
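
Because both cards have 80 GB of VRAM, any FP16 model that fits on one fits on the other, which is why the exclusive lists are empty. A rough single-GPU fit check (a sketch only; the 10% runtime/KV-cache reserve and the Llama 70B entry are illustrative assumptions, not taken from the list above):

```python
VRAM_GB = 80  # identical on the H100 SXM and A100 80GB SXM

def fits_fp16(params_billions: float, reserve_frac: float = 0.10) -> bool:
    """Rough single-GPU fit check: 2 bytes per parameter at FP16,
    with an assumed fraction of VRAM reserved for KV cache / runtime."""
    weight_gb = params_billions * 2  # 1e9 params * 2 bytes ~ 2 GB
    return weight_gb <= VRAM_GB * (1 - reserve_frac)

for model, size_b in [("Yi 1.5 34B", 34), ("Qwen 2.5 32B", 32), ("Llama 70B", 70)]:
    print(f"{model}: {'fits' if fits_fp16(size_b) else 'does not fit'} at FP16 on 80 GB")
```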

Summary

The H100 SXM (Hopper generation) offers 80 GB of HBM3 with 990 BF16 TFLOPS and 3,350 GB/s of memory bandwidth at a 700 W TDP.

The A100 80GB SXM (Ampere generation) offers 80 GB of HBM2e with 312 BF16 TFLOPS and 2,039 GB/s of memory bandwidth at a 400 W TDP.

From a cost perspective, the A100 80GB SXM is more affordable, with a lowest listed rate of $1.19/hr vs $1.89/hr for the H100 SXM.
