
A100 40GB PCIe

NVIDIA · Ampere · 40 GB HBM2e · 250W TDP

VRAM: 40 GB
BF16 TFLOPS: 312
Bandwidth: 1555 GB/s
From: $0.69/hr

Calculate ROI with this GPU →

Spec Sheet

VRAM: 40 GB HBM2e
Memory Bandwidth: 1555 GB/s
BF16 TFLOPS: 312
FP16 TFLOPS: 312
FP8 TFLOPS: 312 (Ampere has no native FP8 tensor cores; FP8-quantized weights execute through the FP16 pipeline)
INT8 TOPS: 624
TDP: 250W
Interconnect: PCIe
Max per Node: 8
PCIe: Gen4
CUDA Compute Capability: 8.0
Tensor Cores: Yes

Pricing by Provider

| Provider   | On-Demand | Reserved | Spot     | Badge    |
|------------|-----------|----------|----------|----------|
| tensordock | $0.99/hr  | -        | $0.69/hr | Cheapest |
| vast_ai    | $1.10/hr  | -        | $0.75/hr |          |
| runpod     | $1.44/hr  | -        | $1.09/hr |          |
| lambda     | $1.10/hr  | -        | -        |          |
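For quick comparisons, an hourly rate converts directly into a cost per generated token: dollars per hour divided by tokens generated per hour. A minimal sketch; the 200 tok/s figure below is an illustrative assumption for a 7B-class workload, not a benchmarked number:

```python
def usd_per_million_tokens(usd_per_hour: float, tokens_per_sec: float) -> float:
    # tokens generated per hour = tokens_per_sec * 3600
    return usd_per_hour / (tokens_per_sec * 3600) * 1e6

# Spot rate from the table above; 200 tok/s is an assumed 7B-class FP8 workload
print(f"${usd_per_million_tokens(0.69, 200):.2f} per 1M generated tokens")  # ≈ $0.96
```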

Compatible Models (265)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

| Model Size | Full Fine-Tune | QLoRA  |
|------------|----------------|--------|
| 7B model   | 4 GPUs         | 1 GPU  |
| 13B model  | 7 GPUs         | 1 GPU  |
| 70B model  | 33 GPUs        | 2 GPUs |
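These counts follow from standard memory accounting. A minimal sketch, assuming ~16 bytes/param for AdamW full fine-tuning in BF16 (weights + gradients + optimizer states + overhead), ~0.6 bytes/param plus a fixed ~2 GB adapter/optimizer budget for NF4 QLoRA, and ~85% usable VRAM; these constants are our assumptions rather than InferenceBench's published model, though they land on the table's numbers:

```python
import math

VRAM_GB = 40
USABLE = 0.85  # assumed headroom for activations, buffers, fragmentation

def full_finetune_gpus(params_b: float) -> int:
    # AdamW in BF16: ~16 bytes/param (2 weights + 2 grads + 8 optimizer states + ~4 misc)
    return math.ceil(params_b * 16 / (VRAM_GB * USABLE))

def qlora_gpus(params_b: float) -> int:
    # NF4 base weights ~0.6 bytes/param plus an assumed ~2 GB adapter/optimizer budget
    return math.ceil((params_b * 0.6 + 2) / (VRAM_GB * USABLE))

for size in (7, 13, 70):
    print(f"{size}B: full fine-tune {full_finetune_gpus(size)} GPUs, QLoRA {qlora_gpus(size)} GPU(s)")
```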

Energy Efficiency

Estimated tokens/second per Watt for popular models

| Model         | Efficiency | Precision |
|---------------|------------|-----------|
| Mistral 7B    | 0.85 t/s/W | FP8       |
| Qwen 2.5 7B   | 0.82 t/s/W | FP8       |
| Llama 3.1 8B  | 0.77 t/s/W | FP8       |
| Llama 3.1 70B | 0.09 t/s/W | FP8       |
| Qwen 2.5 72B  | 0.09 t/s/W | FP8       |
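These figures are consistent with a simple bandwidth-bound decode model: at batch size 1, each generated token streams the full weight set once, so tokens/s ≈ aggregate bandwidth ÷ weight bytes, divided by aggregate TDP. A rough sketch under those assumptions (FP8 weights at ~1 byte/param; KV-cache traffic ignored):

```python
import math

BW_GBPS = 1555  # per-GPU memory bandwidth
TDP_W   = 250   # per-GPU TDP
VRAM_GB = 40

def t_s_per_watt(params_b: float, bytes_per_param: float = 1.0) -> float:
    """Bandwidth-bound decode: each token streams the full weight set once."""
    weight_gb = params_b * bytes_per_param            # FP8 weights ~1 byte/param
    n_gpus = max(1, math.ceil(weight_gb / VRAM_GB))   # shard when weights exceed one GPU
    tokens_per_sec = (BW_GBPS * n_gpus) / weight_gb
    return tokens_per_sec / (TDP_W * n_gpus)

print(f"Mistral 7B:    {t_s_per_watt(7.2):.2f} t/s/W")   # ≈ 0.86
print(f"Llama 3.1 70B: {t_s_per_watt(70.6):.2f} t/s/W")  # ≈ 0.09
```

The 70B-class entries drop sharply because the weights no longer fit on one GPU: sharding doubles the power draw while throughput stays bandwidth-bound.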

Similar GPUs

| GPU           | VRAM  | BF16 TFLOPS | BW (GB/s) | From     |
|---------------|-------|-------------|-----------|----------|
| A100 40GB SXM | 40 GB | 312         | 1555      | $0.85/hr |
| RTX A6000     | 48 GB | 38.7        | 768       | $0.49/hr |
| A40           | 48 GB | 37.4        | 696       | $0.42/hr |
| A10G          | 24 GB | 35          | 600       | $0.30/hr |
| A30           | 24 GB | 165         | 933       | $0.35/hr |

Methodology Note

Performance estimates for the A100 40GB PCIe are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (40 GB HBM2e available), KV-cache allocation, and activation memory. Throughput predictions use the A100 40GB PCIe's rated 1555 GB/s memory bandwidth and 312 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Ampere). See our full methodology.
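The core of a roofline estimate is taking the slower of the compute and memory ceilings per decode step. A minimal sketch of the idea, not InferenceBench's actual code; the flat 0.9 efficiency factor and ~2 FLOPs/param/token accounting are our assumptions:

```python
def roofline_tokens_per_sec(params_b: float, bytes_per_param: float = 2.0,
                            batch: int = 1, efficiency: float = 0.9) -> float:
    """Decode-step time is the slower of the compute and weight-streaming ceilings."""
    peak_flops = 312e12  # BF16 TFLOPS ceiling
    peak_bw    = 1555e9  # bytes/s memory-bandwidth ceiling
    compute_s = batch * 2 * params_b * 1e9 / peak_flops     # ~2 FLOPs per param per token
    memory_s  = params_b * 1e9 * bytes_per_param / peak_bw  # weights read once per step
    step_s = max(compute_s, memory_s) / efficiency          # assumed correction factor
    return batch / step_s

print(f"{roofline_tokens_per_sec(7):.0f} tok/s")  # ≈ 100 tok/s for a 7B model in BF16
```

At batch size 1 the memory term dominates by orders of magnitude, which is why single-stream decode tracks bandwidth rather than TFLOPS.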

Frequently Asked Questions

How many AI models can run on A100 40GB PCIe?

The A100 40GB PCIe can run 265 AI models from our database within a single node. Compatibility spans a wide range of parameter sizes depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x A100 40GB PCIe.
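Fitting is mostly a VRAM check: weight bytes plus a KV-cache/activation budget against usable memory per GPU. A minimal sketch with assumed constants (the 4 GB KV budget and 90% usable-VRAM margin are illustrative, not InferenceBench's exact accounting):

```python
import math

def min_gpus(params_b: float, bytes_per_param: float,
             kv_budget_gb: float = 4.0, vram_gb: float = 40,
             usable: float = 0.9, max_per_node: int = 8):
    """Smallest GPU count (within one 8x node) holding weights plus a KV/activation budget."""
    need_gb = params_b * bytes_per_param + kv_budget_gb
    n = math.ceil(need_gb / (vram_gb * usable))
    return n if n <= max_per_node else None  # None: does not fit in a single node

print(min_gpus(8, 2))     # 8B-class model in BF16  -> 1
print(min_gpus(70, 2))    # 70B-class model in BF16 -> 4
print(min_gpus(70, 0.5))  # 70B-class model in INT4 -> 2
```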

What is the A100 40GB PCIe inference throughput?

The A100 40GB PCIe delivers 312 TFLOPS of BF16/FP16 tensor compute and 624 INT8 TOPS, with 1555 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does A100 40GB PCIe cost per hour?

The A100 40GB PCIe is available from $0.69/hour as a spot instance via tensordock. Prices vary by provider and pricing tier (on-demand, reserved, spot); compare across all providers in the table above.