
A30

NVIDIA · Ampere · 24 GB HBM2 · 165 W TDP

VRAM: 24 GB
BF16 TFLOPS: 165
Bandwidth: 933 GB/s
From: $0.35/hr

Calculate ROI with this GPU →

Spec Sheet

VRAM: 24 GB HBM2
Memory Bandwidth: 933 GB/s
BF16 TFLOPS: 165
FP16 TFLOPS: 165
FP8 TFLOPS: 165
INT8 TOPS: 330
TDP: 165 W
Interconnect: PCIe
Max per Node: 8
PCIe Gen: 4
CUDA Compute Capability: 8.0
Tensor Cores: Yes

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
Vast.ai | $0.50/hr | - | $0.35/hr | Cheapest
TensorDock | $0.49/hr | - | $0.35/hr | -
RunPod | $0.65/hr | - | - | -

Compatible Models (255)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA; a rough sketch of the underlying arithmetic follows the table.

Model Size | Full Fine-Tune | QLoRA
7B model | 6 GPUs | 1 GPU
13B model | 11 GPUs | 1 GPU
70B model | 55 GPUs | 2 GPUs
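
As a rough guide to where these counts come from: full AdamW fine-tuning in BF16 needs on the order of 16-20 bytes of GPU state per parameter (BF16 weights and gradients plus FP32 optimizer moments and master weights, with activation headroom), while QLoRA holds 4-bit base weights plus small adapter and optimizer state. A minimal sketch, using illustrative per-parameter byte counts rather than InferenceBench's calibrated coefficients, so it will not reproduce the table exactly:

```python
import math

A30_VRAM_GB = 24

# Illustrative bytes of training state per parameter (assumptions, not
# InferenceBench's calibrated coefficients):
#   full fine-tune: BF16 weights + gradients (2 + 2 B) + FP32 AdamW moments
#                   (8 B) + FP32 master weights (4 B) + activation headroom
#   QLoRA:          4-bit base weights (~0.5 B) + LoRA adapters, optimizer
#                   state, and activations
BYTES_PER_PARAM = {"full_finetune": 18.0, "qlora": 0.6}


def estimate_gpu_count(params_billion: float, mode: str) -> int:
    """Rough number of A30s needed to hold the training state, ignoring
    parallelism overhead and per-GPU framework reservations."""
    total_gb = params_billion * BYTES_PER_PARAM[mode]  # 1e9 params * B/param = GB
    return max(1, math.ceil(total_gb / A30_VRAM_GB))


for size_b in (7, 13, 70):
    print(f"{size_b}B: full FT ~{estimate_gpu_count(size_b, 'full_finetune')} GPUs, "
          f"QLoRA ~{estimate_gpu_count(size_b, 'qlora')} GPU(s)")
```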

Energy Efficiency

Estimated tokens/second per Watt for popular models; a note on how this figure is computed follows the list.

Mistral 7B: 0.77 t/s/W (FP8)
Qwen 2.5 7B: 0.74 t/s/W (FP8)
Llama 3.1 8B: 0.70 t/s/W (FP8)
Llama 3.1 70B: 0.08 t/s/W (FP8)
Qwen 2.5 72B: 0.08 t/s/W (FP8)
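
These figures are simply decode throughput divided by board power. Back-solving from the table, 0.77 t/s/W at the A30's 165 W TDP corresponds to roughly 127 tokens/s on Mistral 7B. A trivial helper, assuming TDP as the power term (measured board power under load may be lower):

```python
A30_TDP_W = 165


def tokens_per_second_per_watt(tokens_per_second: float,
                               board_power_w: float = A30_TDP_W) -> float:
    """Energy efficiency: decode throughput divided by board power draw."""
    return tokens_per_second / board_power_w


# ~127 tokens/s on a 7B model gives the table's 0.77 t/s/W figure.
print(round(tokens_per_second_per_watt(127.0), 2))  # 0.77
```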

Similar GPUs

GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From
A10G | 24 GB | 35 | 600 | $0.30/hr
RTX 3090 | 24 GB | 35.6 | 936 | $0.19/hr
RTX A5000 | 24 GB | 27.8 | 768 | $0.25/hr
A4000 | 16 GB | 76 | 448 | $0.17/hr
A2 | 16 GB | 18 | 200 | $0.15/hr

Methodology Note

Performance estimates for the A30 are based on InferenceBench's roofline performance model with CUDA kernel-level optimization including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (24 GB HBM2 available), KV-cache allocation, and activation memory. Throughput predictions use the A30's rated 933 GB/s memory bandwidth and 165 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Ampere). See our full methodology.
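
For intuition, a stripped-down version of the roofline idea for decode, assuming token generation is memory-bandwidth-bound: each generated token has to stream the model weights (plus KV-cache traffic) through HBM once, so bandwidth divided by bytes moved per token gives a throughput ceiling. The byte counts and the omission of a correction factor are simplifications, not the calibrated model described above:

```python
A30_MEM_BW_GBS = 933.0


def decode_ceiling_tokens_per_s(param_count: float,
                                bytes_per_param: float,
                                kv_bytes_per_token: float = 0.0) -> float:
    """Memory-bound roofline ceiling for single-stream decode:
    bandwidth / (weight bytes + KV-cache bytes moved per token)."""
    bytes_per_token = param_count * bytes_per_param + kv_bytes_per_token
    return A30_MEM_BW_GBS * 1e9 / bytes_per_token


# Llama 3.1 8B with FP8 (1-byte) weights, ignoring KV traffic:
print(round(decode_ceiling_tokens_per_s(8e9, 1.0)))  # ~117 tokens/s ceiling
```

At the 165 W TDP, that ceiling works out to about 0.71 t/s/W, in line with the 0.70 t/s/W listed for Llama 3.1 8B above.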

Frequently Asked Questions

How many AI models can run on A30?

The A30 can run 255 AI models from our database within a single node. Compatible models span a wide range of parameter counts, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x A30.
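
Whether a given model is "compatible" largely comes down to fitting its weights, KV cache, and runtime overhead into the available VRAM. A coarse fit check, with the KV-cache and overhead figures as assumptions rather than values from our database:

```python
def fits_on_a30(param_count: float,
                bytes_per_param: float,      # 2.0 BF16, 1.0 FP8/INT8, 0.5 INT4
                kv_cache_gb: float = 2.0,    # grows with context length and batch
                overhead_gb: float = 1.5,    # CUDA context, activations, fragmentation
                num_gpus: int = 1,           # up to 8x A30 per node
                vram_gb_per_gpu: float = 24.0) -> bool:
    """Coarse check: weights + KV cache + overhead must fit in total VRAM."""
    needed_gb = param_count * bytes_per_param / 1e9 + kv_cache_gb + overhead_gb
    return needed_gb <= num_gpus * vram_gb_per_gpu


print(fits_on_a30(8e9, 2.0))               # Llama 3.1 8B in BF16, one A30 -> True
print(fits_on_a30(70e9, 0.5, num_gpus=2))  # 70B in INT4 across two A30s  -> True
```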

What is the A30 inference throughput?

The A30 delivers 165 BF16 TFLOPS and 165 FP8 TFLOPS with 933 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.
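
To make the batch-size dependence concrete: single-stream decode is limited by how fast the weights stream through HBM, while large batches push the limit toward the compute ceiling of roughly 2 FLOPs per parameter per generated token. A simplified two-ceiling sketch that ignores KV-cache traffic and attention FLOPs:

```python
A30_BF16_TFLOPS = 165.0
A30_MEM_BW_GBS = 933.0


def decode_ceiling(params: float, bytes_per_param: float, batch_size: int) -> float:
    """Decode tokens/s is bounded by the lower of two roofline ceilings:
    memory-bound (weights streamed once per step, shared across the batch)
    and compute-bound (~2 FLOPs per parameter per generated token)."""
    mem_bound = batch_size * A30_MEM_BW_GBS * 1e9 / (params * bytes_per_param)
    compute_bound = A30_BF16_TFLOPS * 1e12 / (2.0 * params)
    return min(mem_bound, compute_bound)


# An 8B model in BF16: batch 1 is bandwidth-limited; very large batches
# approach the compute ceiling instead.
print(round(decode_ceiling(8e9, 2.0, 1)))    # ~58 tokens/s
print(round(decode_ceiling(8e9, 2.0, 256)))  # ~10312 tokens/s (compute-bound)
```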

How much does A30 cost per hour?

The A30 is available starting from $0.35/hour via Vast.ai. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
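
To compare providers on serving cost rather than hourly rate, convert $/hr into $ per million tokens using an expected throughput. A quick helper; the 120 tokens/s figure is an assumed throughput for a 7B model, not a benchmark result:

```python
def dollars_per_million_tokens(price_per_hour: float,
                               tokens_per_second: float) -> float:
    """Hourly GPU rate converted to serving cost per million generated
    tokens, assuming the GPU stays fully utilized."""
    tokens_per_hour = tokens_per_second * 3600.0
    return price_per_hour / tokens_per_hour * 1e6


# A30 spot at $0.35/hr serving ~120 tokens/s on a 7B model:
print(round(dollars_per_million_tokens(0.35, 120.0), 2))  # ~$0.81 per 1M tokens
```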