
A100 80GB SXM

NVIDIA · Ampere · 80 GB HBM2e · 400W TDP

VRAM: 80 GB
BF16 TFLOPS: 312
Bandwidth: 2039 GB/s
From: $1.19/hr


Spec Sheet

VRAM: 80 GB HBM2e
Memory Bandwidth: 2039 GB/s
BF16 TFLOPS: 312
FP16 TFLOPS: 312
FP8 TFLOPS: 312
INT8 TOPS: 624
TDP: 400W
Interconnect: NVLink
NVLink Bandwidth: 600 GB/s
Max per Node: 8
PCIe: Gen4
CUDA Compute Capability: 8.0
Tensor Cores: Yes

Pricing by Provider

Provider      On-Demand   Reserved   Spot       Badge
fluidstack    $1.69/hr    -          $1.19/hr   Cheapest
tensordock    $1.79/hr    -          $1.29/hr
vast_ai       $1.80/hr    -          $1.30/hr
lambda        $1.99/hr    $1.49/hr   -
coreweave     $2.21/hr    $1.62/hr   -
runpod        $2.72/hr    -          $2.09/hr
aws           $3.67/hr    $2.39/hr   -
azure         $3.67/hr    $2.45/hr   -
gcp           $3.67/hr    $2.48/hr   -
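For budgeting, the hourly rates above translate directly into monthly cost. A minimal sketch, assuming ~730 hours in a month and using three representative rates from the table (the tier labels in the dict are illustrative, not provider API identifiers):

```python
# Rough monthly-cost comparison across pricing tiers for one
# A100 80GB SXM. Rates come from the table above; the label
# names and HOURS_PER_MONTH figure are assumptions.

HOURS_PER_MONTH = 730  # average hours in a month

rates = {  # $/hr
    "fluidstack_spot": 1.19,
    "lambda_reserved": 1.49,
    "aws_on_demand": 3.67,
}

for tier, hourly in sorted(rates.items(), key=lambda kv: kv[1]):
    monthly = hourly * HOURS_PER_MONTH
    print(f"{tier:18s} ${hourly:.2f}/hr -> ${monthly:,.0f}/month")
```

Under these assumptions, the spread is roughly $869/month for the cheapest spot rate versus $2,679/month for on-demand at the major clouds.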

Pricing History

runpod: $2.72/hr current (0.0% change); 2024-01-01 to 2025-03-01; low $2.72, high $3.89
lambda: $1.49/hr current (0.0% change); 2024-01-01 to 2025-03-01; low $1.49, high $2.49
coreweave: $2.21/hr current (0.0% change); 2024-01-01 to 2025-03-01; low $2.21, high $3.20

Compatible Models (278)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size   Full Fine-Tune   QLoRA
7B model     2 GPUs           1 GPU
13B model    4 GPUs           1 GPU
70B model    17 GPUs          1 GPU
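As a sanity check on these counts: full fine-tuning with AdamW in BF16 needs roughly 16 bytes per parameter (2 for weights, 2 for gradients, 8 for FP32 optimizer moments), while QLoRA keeps the base weights in 4-bit. A minimal sketch that reproduces the table, where the bytes-per-parameter figures and the 1.2x activation-overhead factor are assumptions rather than measured values:

```python
# Back-of-envelope GPU counts for fine-tuning on 80 GB cards.
# Bytes-per-parameter and the overhead factor are illustrative
# assumptions, not measured values.
import math

VRAM_GB = 80

def gpus_needed(params_b: float, bytes_per_param: float,
                overhead: float = 1.2) -> int:
    """Params (billions) x bytes/param x overhead, split across 80 GB GPUs."""
    total_gb = params_b * bytes_per_param * overhead
    return max(1, math.ceil(total_gb / VRAM_GB))

for size in (7, 13, 70):
    full = gpus_needed(size, 16)    # AdamW BF16: 2 + 2 + 8 bytes/param
    qlora = gpus_needed(size, 0.7)  # 4-bit base weights + adapters
    print(f"{size}B: full fine-tune ~{full} GPUs, QLoRA ~{qlora} GPU(s)")
```

With these assumptions a 70B model needs 70 x 16 x 1.2 = 1344 GB, i.e. 17 of the 80 GB cards, matching the table.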

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 0.70 t/s/W (FP8)
Qwen 2.5 7B: 0.67 t/s/W (FP8)
Llama 3.1 8B: 0.63 t/s/W (FP8)
Llama 3.1 70B: 0.07 t/s/W (FP8)
Qwen 2.5 72B: 0.07 t/s/W (FP8)
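Since these figures are tokens/second per Watt, multiplying by the 400W TDP recovers an absolute throughput estimate (assuming the card draws its full rated power): 0.70 t/s/W × 400 W ≈ 280 tokens/s for Mistral 7B, versus 0.07 t/s/W × 400 W ≈ 28 tokens/s for the 70B-class models.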

Similar GPUs

GPU              VRAM    BF16 TFLOPS   BW (GB/s)   From
A100 80GB PCIe   80 GB   312           1935        $1.05/hr
A16              64 GB   16.8          232         $0.72/hr
RTX A6000        48 GB   38.7          768         $0.49/hr
A40              48 GB   37.4          696         $0.42/hr
A100 40GB SXM    40 GB   312           1555        $0.85/hr

Methodology Note

Performance estimates for the A100 80GB SXM are based on InferenceBench's roofline performance model with CUDA kernel-level optimization, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (80 GB HBM2e available), KV-cache allocation, and activation memory. Throughput predictions use the A100 80GB SXM's rated 2039 GB/s memory bandwidth and 312 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Ampere). See our full methodology.
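To make the roofline idea concrete, here is a minimal sketch of the two ceilings it describes. This illustrates the concept only, not InferenceBench's actual model; the 2-FLOPs-per-parameter-per-token approximation and the example model size and precision are assumptions:

```python
# Minimal roofline sketch for single-GPU decode throughput.
# An illustration of the concept, not InferenceBench's model;
# all inputs below are assumptions.

BW = 2039e9       # bytes/s, rated memory bandwidth
COMPUTE = 312e12  # FLOP/s, rated BF16 tensor throughput

def decode_tokens_per_sec(params: float, bytes_per_param: float,
                          batch: int = 1) -> float:
    """Decode reads all weights once per step (bandwidth ceiling)
    and costs ~2 FLOPs per parameter per token (compute ceiling)."""
    weight_bytes = params * bytes_per_param
    bw_limit = (BW / weight_bytes) * batch   # tokens/s if bandwidth-bound
    compute_limit = COMPUTE / (2 * params)   # tokens/s if compute-bound
    return min(bw_limit, compute_limit)

# Example: an 8B-parameter model with FP8 (1 byte) weights, batch 1:
print(f"~{decode_tokens_per_sec(8e9, 1.0):.0f} tokens/s ceiling")  # ~255
```

At batch 1 the bandwidth ceiling dominates (the compute ceiling here is ~19,500 tokens/s), which is why memory bandwidth rather than TFLOPS governs single-stream inference on this card.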

Frequently Asked Questions

How many AI models can run on A100 80GB SXM?

The A100 80GB SXM can run 278 AI models from our database within a single node. Compatibility depends on parameter count and quantization precision (BF16, FP8, INT4): smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x A100 80GB SXM.
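As a back-of-envelope version of that fit check (a sketch under stated assumptions, not the compatibility logic behind the 278-model count): a model fits when its quantized weights plus KV cache fit in pooled VRAM with some headroom.

```python
# Rough single-node fit check for the A100 80GB SXM (8 GPUs max).
# The 90% usable-VRAM factor and flat 5 GB KV-cache allowance
# are assumptions, not the site's actual sizing rules.
import math

VRAM_GB, MAX_GPUS = 80, 8

def min_gpus(params_b: float, bytes_per_param: float,
             kv_cache_gb: float = 5.0):
    """Smallest GPU count whose pooled VRAM holds weights + KV cache."""
    needed_gb = params_b * bytes_per_param + kv_cache_gb
    gpus = math.ceil(needed_gb / (VRAM_GB * 0.9))  # keep ~10% headroom
    return gpus if gpus <= MAX_GPUS else None      # None: exceeds one node

print(min_gpus(8, 2.0))    # 8B in BF16  -> 1 GPU
print(min_gpus(70, 2.0))   # 70B in BF16 -> 3 GPUs
print(min_gpus(405, 2.0))  # 405B in BF16 -> None (needs >8 GPUs)
```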

What is the A100 80GB SXM inference throughput?

The A100 80GB SXM delivers 312 BF16 TFLOPS and 312 FP8 TFLOPS with 2039 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does A100 80GB SXM cost per hour?

The A100 80GB SXM is available starting from $1.19/hour via fluidstack. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.