A10G

nvidia · ampere · 24 GB GDDR6 · 150 W TDP

VRAM: 24 GB · BF16 TFLOPS: 35 · Bandwidth: 600 GB/s · From $0.30/hr


Spec Sheet

VRAM                        24 GB GDDR6
Memory Bandwidth            600 GB/s
BF16 TFLOPS                 35
FP16 TFLOPS                 35
FP8 TFLOPS                  35
INT8 TOPS                   70
TDP                         150 W
Interconnect                PCIe
Max per Node                8
PCIe Gen                    4
CUDA Compute Capability     8.6
Tensor Cores                Yes

Pricing by Provider

Provider    On-Demand    Reserved    Spot        Badge
vast_ai     $0.50/hr     -           $0.30/hr    Cheapest
aws         $1.21/hr     $0.75/hr    $0.46/hr    -
runpod      $0.69/hr     -           -           -

Pricing History

aws: $1.52/hr (0.0% change), 2024-01-01 to 2025-03-01 · Low: $1.52 · High: $1.52

Compatible Models (255)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW optimizer, BF16) and QLoRA (4-bit); a sketch of the underlying memory arithmetic follows the table.

Model Size    Full Fine-Tune    QLoRA
7B model      6 GPUs            1 GPU
13B model     11 GPUs           1 GPU
70B model     55 GPUs           2 GPUs
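These counts line up with the usual memory budget for AdamW in BF16 (roughly 16 bytes per parameter: BF16 weights and gradients plus FP32 optimizer moments and a FP32 master copy) versus roughly 0.5 bytes per parameter for a 4-bit QLoRA base model. A minimal sketch of that arithmetic, assuming a flat 1.2x activation/overhead factor (an assumption chosen to approximate the table, not a value from this page):

```python
import math

VRAM_GB = 24  # A10G memory per GPU

def gpus_full_finetune(params_b: float, overhead: float = 1.2) -> int:
    """AdamW in BF16: ~16 bytes/param (2 B weights + 2 B grads +
    8 B FP32 moments + 4 B FP32 master copy), times an assumed
    activation/overhead factor."""
    return math.ceil(params_b * 16 * overhead / VRAM_GB)

def gpus_qlora(params_b: float, overhead: float = 1.2) -> int:
    """QLoRA: 4-bit base weights (~0.5 bytes/param) plus small
    adapters, same assumed overhead factor."""
    return math.ceil(params_b * 0.5 * overhead / VRAM_GB)

for size in (7, 13, 70):
    print(f"{size}B: full fine-tune {gpus_full_finetune(size)} GPUs, "
          f"QLoRA {gpus_qlora(size)} GPU(s)")
# -> 7B: 6 / 1, 13B: 11 / 1, 70B: 56 / 2 (the table's 55 suggests the
#    site applies per-model overheads rather than one flat factor)
```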

Energy Efficiency

Estimated tokens/second per watt for popular models; a bandwidth-roofline sketch of this estimate follows the list.

Mistral 7B       0.55 t/s/W   FP8
Qwen 2.5 7B      0.53 t/s/W   FP8
Llama 3.1 8B     0.50 t/s/W   FP8
Llama 3.1 70B    0.06 t/s/W   FP8
Qwen 2.5 72B     0.06 t/s/W   FP8
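These figures are close to a simple memory-bound estimate: single-stream decode throughput is roughly memory bandwidth divided by the model's weight footprint, and dividing by TDP gives tokens/second per watt. A rough sketch, assuming FP8 weights at 1 byte per parameter (parameter counts are taken from the model names, not from this page):

```python
BW_GBS = 600.0  # A10G memory bandwidth
TDP_W = 150.0   # A10G TDP

def tokens_per_sec_per_watt(params_b: float, bytes_per_param: float = 1.0) -> float:
    """Memory-bound decode: each token streams the full weight set
    from VRAM, so tokens/s ~= bandwidth / weight bytes."""
    weight_gb = params_b * bytes_per_param  # FP8 -> ~1 byte/param
    return BW_GBS / weight_gb / TDP_W

print(f"Mistral 7B:    {tokens_per_sec_per_watt(7):.2f} t/s/W")   # ~0.57 vs 0.55 above
print(f"Llama 3.1 8B:  {tokens_per_sec_per_watt(8):.2f} t/s/W")   # ~0.50
print(f"Llama 3.1 70B: {tokens_per_sec_per_watt(70):.2f} t/s/W")  # ~0.06
```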

Similar GPUs

GPU          VRAM     BF16 TFLOPS    BW (GB/s)    From
A30          24 GB    165            933          $0.35/hr
RTX 3090     24 GB    35.6           936          $0.19/hr
RTX A5000    24 GB    27.8           768          $0.25/hr
A4000        16 GB    76             448          $0.17/hr
A2           16 GB    18             200          $0.15/hr

Methodology Note

Performance estimates for the A10G are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (24 GB GDDR6 available), KV-cache allocation, and activation memory. Throughput predictions use the A10G's rated 600 GB/s memory bandwidth and 35 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (ampere). See our full methodology.
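For intuition, a roofline model of this kind takes throughput per decode step as the minimum of a compute ceiling and a memory-bandwidth ceiling. The sketch below is a generic illustration under stated assumptions (about 2 FLOPs per parameter per token, weights streamed once per step, KV-cache traffic ignored, an illustrative 0.8 correction factor); InferenceBench's actual per-architecture correction factors are not published on this page.

```python
def roofline_decode_tokens_per_sec(params_b: float,
                                   batch: int = 1,
                                   bytes_per_param: float = 2.0,  # BF16
                                   tflops: float = 35.0,          # A10G BF16
                                   bw_gbs: float = 600.0,         # A10G
                                   correction: float = 0.8) -> float:
    """Aggregate decode throughput = correction * batch / step time,
    where the step time is whichever roofline ceiling binds."""
    # Memory ceiling: weights are streamed from VRAM once per decode step.
    t_mem = params_b * 1e9 * bytes_per_param / (bw_gbs * 1e9)
    # Compute ceiling: ~2 FLOPs per parameter per generated token.
    t_compute = batch * 2 * params_b * 1e9 / (tflops * 1e12)
    return correction * batch / max(t_mem, t_compute)

print(f"{roofline_decode_tokens_per_sec(8, batch=1):.0f} tok/s")   # ~30, memory-bound
print(f"{roofline_decode_tokens_per_sec(8, batch=32):.0f} tok/s")  # ~960, still memory-bound
```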

Frequently Asked Questions

How many AI models can run on A10G?

The A10G can run 255 AI models from our database within a single node. Compatible models span a range of parameter sizes depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x A10G.

What is the A10G inference throughput?

The A10G delivers 35 BF16 TFLOPS and 35 FP8 TFLOPS with 600 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does A10G cost per hour?

The A10G is available starting from $0.30/hour via vast_ai. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
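To compare tiers in practice, a common metric is cost per million generated tokens, which folds an hourly rate together with a throughput estimate. A minimal sketch; the 960 tok/s figure is the assumed batched throughput from the roofline sketch above, not a measured value:

```python
def usd_per_million_tokens(usd_per_hour: float, tokens_per_sec: float) -> float:
    """Hourly rental cost spread over the tokens generated in an hour."""
    return usd_per_hour / (tokens_per_sec * 3600) * 1e6

# vast_ai spot at $0.30/hr with an assumed ~960 tok/s batched throughput
print(f"${usd_per_million_tokens(0.30, 960):.2f} / 1M tokens")  # ~$0.09
```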