
A100 80GB SXM vs A100 80GB PCIe

Side-by-side comparison of the NVIDIA A100 80GB SXM and the NVIDIA A100 80GB PCIe for AI inference workloads.

Specifications

| Spec | A100 80GB SXM | A100 80GB PCIe |
| --- | --- | --- |
| Generation | Ampere | Ampere |
| Memory Type | HBM2e | HBM2e |
| VRAM | 80 GB | 80 GB |
| Memory Bandwidth | 2,039 GB/s | 2,039 GB/s |
| BF16 TFLOPS | 312 | 312 |
| FP16 TFLOPS | 312 | 312 |
| FP8 TFLOPS | N/A | N/A |
| INT8 TOPS | 624 | 624 |
| TDP | 400 W | 300 W |
| Interconnect | NVLink | PCIe |
| NVLink Bandwidth | 600 GB/s | N/A |
| Max GPUs per Node | 8 | 8 |
| PCIe Gen | Gen 4 | Gen 4 |
| CUDA Compute Capability | 8.0 | 8.0 |

Note: the Ampere generation has no FP8 tensor-core support; hardware FP8 arrived with the Hopper generation.
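Since the two variants share almost every spec, it can help to diff them programmatically. A minimal sketch, with the table above transcribed into plain dicts (field names are ours, not an official schema):

```python
# Spec tables above as dicts; diff them to surface what actually differs
# between the SXM and PCIe variants.
sxm = {
    "generation": "Ampere", "memory_type": "HBM2e", "vram_gb": 80,
    "bandwidth_gb_s": 2039, "bf16_tflops": 312, "tdp_w": 400,
    "interconnect": "NVLink", "nvlink_gb_s": 600,
}
pcie = {
    "generation": "Ampere", "memory_type": "HBM2e", "vram_gb": 80,
    "bandwidth_gb_s": 2039, "bf16_tflops": 312, "tdp_w": 300,
    "interconnect": "PCIe", "nvlink_gb_s": None,
}

# Keep only the fields whose values disagree.
diff = {k: (sxm[k], pcie[k]) for k in sxm if sxm[k] != pcie[k]}
print(diff)  # only TDP, interconnect, and NVLink bandwidth differ
```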

Pricing

A100 80GB SXM

| Provider | On-Demand | Reserved | Spot |
| --- | --- | --- | --- |
| RunPod | $2.72/hr | - | $2.09/hr |
| Lambda | $1.99/hr | $1.49/hr | - |
| CoreWeave | $2.21/hr | $1.62/hr | - |
| AWS | $3.67/hr | $2.39/hr | - |
| GCP | $3.67/hr | $2.48/hr | - |
| Azure | $3.67/hr | $2.45/hr | - |
| Vast.ai | $1.80/hr | - | $1.30/hr |
| TensorDock | $1.79/hr | - | $1.29/hr |
| FluidStack | $1.69/hr | - | $1.19/hr |

A100 80GB PCIe

| Provider | On-Demand | Reserved | Spot |
| --- | --- | --- | --- |
| RunPod | $2.29/hr | - | $1.79/hr |
| Lambda | $1.79/hr | - | - |
| Vast.ai | $1.60/hr | - | $1.10/hr |
| TensorDock | $1.59/hr | - | $1.09/hr |
| FluidStack | $1.49/hr | - | $1.05/hr |

Cheapest available rate: $1.19/hr for the A100 80GB SXM vs $1.05/hr for the A100 80GB PCIe, making the PCIe about 12% cheaper.
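The floor-price comparison can be reproduced from the tables above. A quick sketch, using each provider's lowest listed rate (on-demand or spot):

```python
# Lowest listed rate per provider, taken from the pricing tables above.
sxm_rates = {"runpod": 2.09, "lambda": 1.99, "coreweave": 2.21, "aws": 3.67,
             "gcp": 3.67, "azure": 3.67, "vast.ai": 1.30, "tensordock": 1.29,
             "fluidstack": 1.19}
pcie_rates = {"runpod": 1.79, "lambda": 1.79, "vast.ai": 1.10,
              "tensordock": 1.09, "fluidstack": 1.05}

sxm_min = min(sxm_rates.values())    # cheapest SXM rate: 1.19
pcie_min = min(pcie_rates.values())  # cheapest PCIe rate: 1.05
saving = (sxm_min - pcie_min) / sxm_min * 100  # saving relative to SXM price
print(f"PCIe is {saving:.0f}% cheaper at the floor")  # ~12%
```

Note the direction of the percentage matters: the PCIe is ~12% cheaper than the SXM, while the SXM is ~13% more expensive than the PCIe, because the two comparisons use different bases.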

Efficiency Metrics

| Metric | A100 80GB SXM | A100 80GB PCIe |
| --- | --- | --- |
| TFLOPS / Watt (BF16) | 0.8 | 1.0 |
| VRAM / Dollar (GB/$/hr) | 67.2 | 76.2 |
| Bandwidth / Watt (GB/s/W) | 5.1 | 6.8 |
| Models supported (FP16, 1 GPU) | 182 | 182 |
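These efficiency figures follow directly from the spec and pricing tables: BF16 TFLOPS over TDP, VRAM over the cheapest hourly rate, and memory bandwidth over TDP. A sketch of the derivation:

```python
# Derive the efficiency metrics from the published specs and cheapest rates.
def metrics(tflops, tdp_w, bandwidth_gb_s, vram_gb, rate_per_hr):
    return (round(tflops / tdp_w, 1),          # TFLOPS per watt
            round(vram_gb / rate_per_hr, 1),   # GB of VRAM per $/hr
            round(bandwidth_gb_s / tdp_w, 1))  # GB/s of bandwidth per watt

sxm = metrics(312, 400, 2039, 80, 1.19)
pcie = metrics(312, 300, 2039, 80, 1.05)
print(sxm, pcie)  # (0.8, 67.2, 5.1) (1.0, 76.2, 6.8)
```

The PCIe card wins every per-watt and per-dollar metric here because it delivers the same compute and bandwidth at a lower TDP and a lower floor price; the SXM's advantage is NVLink, which these single-GPU metrics do not capture.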

Model Compatibility (FP16, Single GPU)

Only on A100 80GB SXM (0)

None

Both (182)

  • Yi 1.5 34B
  • Yi 1.5 9B
  • Yi Coder 9B
  • GTE Qwen2 7B
  • Marco O1
  • Qwen 1.5 MoE A2.7B
  • Qwen 2 Audio 7B
  • Qwen 2.5 14B
  • Qwen 2.5 32B
  • Qwen 2.5 3B
  • Qwen 2.5 Coder 32B
  • OLMo 2 13B
  • OLMo 2 7B
  • Amazon Nova Lite
  • OpenELM 3B
  • BGE Large EN v1.5
  • BGE M3
  • Baichuan 2 13B
  • OctoCoder 15B
  • StarCoder2 15B
  • +162 more

Only on A100 80GB PCIe (0)

None
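The identical compatibility lists are expected: both cards have 80 GB of VRAM, so any model that fits on one fits on the other. A rule-of-thumb sketch of the fit check (not the site's exact method; the overhead multiplier for KV cache and activations is our assumption):

```python
# FP16 weights take 2 bytes per parameter; OVERHEAD is an assumed
# multiplier for KV cache and activation memory (not from the source).
OVERHEAD = 1.1

def fits_fp16(params_billion, vram_gb=80):
    # Parameters in billions * 2 bytes/param gives weight size in GB.
    needed_gb = params_billion * 2 * OVERHEAD
    return needed_gb <= vram_gb

print(fits_fp16(34))  # e.g. Yi 1.5 34B -> True
print(fits_fp16(70))  # a 70B model does not fit at FP16 on one 80 GB card
```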

Summary

The A100 80GB SXM (Ampere generation) offers 80 GB of HBM2e with 312 BF16 TFLOPS and 2,039 GB/s of memory bandwidth at a 400 W TDP.

The A100 80GB PCIe (Ampere generation) offers the same 80 GB of HBM2e, 312 BF16 TFLOPS, and 2,039 GB/s of memory bandwidth at a lower 300 W TDP.

From a cost perspective, the A100 80GB PCIe is the more affordable option at a cheapest rate of $1.05/hr vs $1.19/hr for the A100 80GB SXM.
