
A100 80GB SXM

nvidia · ampere · 80 GB HBM2e · 400W TDP

VRAM: 80 GB
BF16 TFLOPS: 312
Bandwidth: 2039 GB/s
From: $1.19/hr


Spec Sheet

VRAM: 80 GB HBM2e
Memory Bandwidth: 2039 GB/s
BF16 TFLOPS: 312
FP16 TFLOPS: 312
FP8 TFLOPS: — (not supported; FP8 tensor cores require Hopper or newer)
INT8 TOPS: 624
TDP: 400W
Interconnect: NVLink
NVLink Bandwidth: 600 GB/s
Max per Node: 8
PCIe: Gen4
CUDA Compute Capability: 8.0
Tensor Cores: Yes

Pricing by Provider

Provider      On-Demand   Reserved   Spot       Badge
fluidstack    $1.69/hr    -          $1.19/hr   Cheapest
tensordock    $1.79/hr    -          $1.29/hr
vast_ai       $1.80/hr    -          $1.30/hr
lambda        $1.99/hr    $1.49/hr   -
coreweave     $2.21/hr    $1.62/hr   -
runpod        $2.72/hr    -          $2.09/hr
aws           $3.67/hr    $2.39/hr   -
azure         $3.67/hr    $2.45/hr   -
gcp           $3.67/hr    $2.48/hr   -
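The hourly rates above are easier to compare when projected to a monthly bill. A minimal sketch, using a subset of the table's prices and assuming 100% utilization at roughly 730 hours per month (the tier names and dictionary layout here are illustrative, not from the source):

```python
# Rough monthly cost comparison from the pricing table above.
# Assumes the GPU runs 24/7 (~730 hours/month) at the listed rate.
HOURS_PER_MONTH = 730

providers = {
    "fluidstack": {"on_demand": 1.69, "spot": 1.19},
    "lambda":     {"on_demand": 1.99, "reserved": 1.49},
    "aws":        {"on_demand": 3.67, "reserved": 2.39},
}

for name, tiers in providers.items():
    for tier, rate in tiers.items():
        monthly = rate * HOURS_PER_MONTH
        print(f"{name:10s} {tier:9s} ${monthly:8.2f}/mo")
```

At these rates the spread is large: the cheapest spot price works out to under $900/month, while hyperscaler on-demand is closer to $2,700/month for the same card.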

Pricing History

runpod: $2.72/hr (0.0% change; 2024-01-01 to 2025-03-01; low $2.72, high $3.89)
lambda: $1.49/hr (0.0% change; 2024-01-01 to 2025-03-01; low $1.49, high $2.49)
coreweave: $2.21/hr (0.0% change; 2024-01-01 to 2025-03-01; low $2.21, high $3.20)

Compatible Models (249)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size   Full Fine-Tune   QLoRA
7B model     2 GPUs           1 GPU
13B model    4 GPUs           1 GPU
70B model    17 GPUs          1 GPU
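The GPU counts in the table follow from a standard memory estimate: full fine-tuning with AdamW in BF16 needs roughly 16 bytes per parameter (2 B weights + 2 B gradients + 12 B optimizer state: FP32 master weights plus two moments). A minimal sketch reproducing the table, assuming a ~20% headroom factor for activations and buffers (the headroom value is an assumption, not from the source):

```python
import math

# VRAM estimate for full fine-tuning with AdamW in BF16:
#   2 B weights + 2 B grads + 12 B optimizer state = 16 bytes/param,
#   plus ~20% headroom for activations/buffers (assumed, not measured).
BYTES_PER_PARAM_FULL = 16
HEADROOM = 1.2
VRAM_GB = 80  # A100 80GB SXM

def gpus_needed(params_billion: float) -> int:
    # billions of params x bytes/param gives GB directly (1e9 cancels)
    total_gb = params_billion * BYTES_PER_PARAM_FULL * HEADROOM
    return math.ceil(total_gb / VRAM_GB)

for size in (7, 13, 70):
    print(f"{size}B model: {gpus_needed(size)} GPUs")
```

With these assumptions the estimate lands on 2, 4, and 17 GPUs for 7B, 13B, and 70B, matching the table. QLoRA fits each size on a single card because the frozen base weights are quantized to 4 bits and only small adapter matrices carry optimizer state.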

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 0.70 t/s/W (FP8)
Qwen 2.5 7B: 0.67 t/s/W (FP8)
Llama 3.1 8B: 0.63 t/s/W (FP8)
Llama 3.1 70B: 0.07 t/s/W (FP8)
Qwen 2.5 72B: 0.07 t/s/W (FP8)

Similar GPUs

GPU              VRAM    BF16 TFLOPS   BW (GB/s)   From
A100 80GB PCIe   80 GB   312           1935        $1.05/hr
A16              64 GB   16.8          232         $0.72/hr
RTX A6000        48 GB   38.7          768         $0.49/hr
A40              48 GB   37.4          696         $0.42/hr
A100 40GB SXM    40 GB   312           1555        $0.85/hr