V100 32GB

NVIDIA · Volta · 32 GB HBM2 · 300W TDP

VRAM: 32 GB
BF16 TFLOPS: 28.3
Bandwidth: 900 GB/s
From: $0.19/hr

Calculate ROI with this GPU →

Spec Sheet

VRAM: 32 GB HBM2
Memory Bandwidth: 900 GB/s
BF16 TFLOPS: 28.3
FP16 TFLOPS: 28.3
FP8 TFLOPS: 28.3
INT8 TOPS: 56.5
TDP: 300W
Interconnect: NVLink
NVLink Bandwidth: 300 GB/s
Max per Node: 8
PCIe: Gen3
CUDA Compute Capability: 7.0
Tensor Cores: Yes

Pricing by Provider

Provider     On-Demand   Reserved   Spot       Badge
vast_ai      $0.35/hr    -          $0.19/hr   Cheapest
tensordock   $0.29/hr    -          $0.19/hr
runpod       $0.49/hr    -          $0.29/hr
aws          $3.06/hr    $1.96/hr   $0.92/hr

Compatible Models (260)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA; a rough sizing sketch follows the table.

Model Size   Full Fine-Tune   QLoRA
7B model     5 GPUs           1 GPU
13B model    8 GPUs           1 GPU
70B model    42 GPUs          2 GPUs
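
As a sanity check on these counts, here is a minimal sizing sketch in Python. It assumes the common rules of thumb of ~16 bytes/parameter for BF16 full fine-tuning with AdamW (BF16 weights and gradients, FP32 master weights, FP32 first and second moments) and ~0.5 bytes/parameter for 4-bit QLoRA base weights, plus an assumed ~20% headroom for activations. These constants are illustrative, not InferenceBench's published model.

```python
import math

def estimate_gpus(params_b: float, vram_gb: float = 32.0,
                  full_finetune: bool = True) -> int:
    """Rough GPU-count estimate (assumed constants, not the site's exact model)."""
    # Full fine-tune (AdamW, BF16): ~16 B/param =
    #   2 B BF16 weights + 2 B BF16 grads + 4 B FP32 master weights
    #   + 8 B FP32 AdamW moments (m and v).
    # QLoRA: ~0.5 B/param for 4-bit base weights; adapters are negligible.
    bytes_per_param = 16.0 if full_finetune else 0.5
    mem_gb = params_b * bytes_per_param   # params_b is in billions, so this is GB
    mem_gb *= 1.2                         # assumed ~20% headroom for activations
    return math.ceil(mem_gb / vram_gb)

for size in (7, 13, 70):
    print(f"{size}B: {estimate_gpus(size)} GPUs full FT, "
          f"{estimate_gpus(size, full_finetune=False)} GPU(s) QLoRA")
# Matches the table above: 7B -> 5/1, 13B -> 8/1, 70B -> 42/2
```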

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 0.41 t/s/W (FP8)
Qwen 2.5 7B: 0.39 t/s/W (FP8)
Llama 3.1 8B: 0.37 t/s/W (FP8)
Llama 3.1 70B: 0.04 t/s/W (FP8)
Qwen 2.5 72B: 0.04 t/s/W (FP8)

Similar GPUs

GPU                  VRAM    BF16 TFLOPS   BW (GB/s)   From
V100 16GB            16 GB   28.3          900         $0.15/hr
RTX 5090             32 GB   210           1792        $0.89/hr
Instinct MI100       32 GB   184.6         1229        $0.40/hr
TPU v4               32 GB   275           1200        $2.25/hr
TPU v6e (Trillium)   32 GB   460           1640        $1.75/hr

Methodology Note

Performance estimates for the V100 32GB are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (32 GB HBM2 available), KV-cache allocation, and activation memory. Throughput predictions use the V100 32GB's rated 900 GB/s memory bandwidth and 28.3 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Volta). See our full methodology.
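
A stripped-down version of that roofline logic is sketched below. It treats single-batch decode as memory-bound (every generated token streams the full weights once) and uses ~2 FLOPs per parameter per token as the compute ceiling. InferenceBench's full model adds KV-cache traffic, kernel effects, and per-architecture correction factors, so treat this as illustrative only.

```python
def roofline_tokens_per_s(params_b: float, bytes_per_weight: float = 2.0,
                          bw_gbs: float = 900.0, tflops: float = 28.3,
                          batch: int = 1) -> float:
    """Simplified decode roofline (illustrative; ignores KV-cache traffic)."""
    weight_bytes = params_b * 1e9 * bytes_per_weight
    # Memory ceiling: one full weight sweep per decode step, shared by the batch
    mem_bound = bw_gbs * 1e9 / weight_bytes * batch
    # Compute ceiling: ~2 FLOPs per parameter per generated token
    compute_bound = tflops * 1e12 / (2.0 * params_b * 1e9)
    return min(mem_bound, compute_bound)

# 7B model in BF16 (2 bytes/weight), batch 1: memory-bound at ~64 tokens/s
print(f"{roofline_tokens_per_s(7):.0f} tokens/s")
```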

Frequently Asked Questions

How many AI models can run on V100 32GB?

The V100 32GB can run 260 AI models from our database within a single node. Compatible models span a range of parameter sizes depending on quantization precision (BF16, FP8, INT4): smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x V100 32GB. A rough fit check is sketched below.
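
As a rough illustration of that fit logic, the hypothetical check below compares weight memory at a given precision, plus an assumed ~20% headroom for KV cache and activations, against pooled VRAM across a node. The actual compatibility database is computed per model, so this is only a first-pass estimate.

```python
def fits_on_node(params_b: float, weight_bits: int, vram_gb: float = 32.0,
                 max_gpus: int = 8, headroom: float = 1.2) -> bool:
    """Hypothetical fit check: weights plus assumed ~20% KV/activation
    headroom must fit in pooled VRAM (params_b in billions)."""
    weight_gb = params_b * weight_bits / 8
    return weight_gb * headroom <= vram_gb * max_gpus

print(fits_on_node(70, 16))   # True:  140 GB * 1.2 = 168 GB <= 256 GB (8x 32 GB)
print(fits_on_node(405, 16))  # False: 810 GB of BF16 weights exceed a full node
print(fits_on_node(405, 4))   # True:  ~243 GB with headroom squeezes into 256 GB
```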

What is the V100 32GB inference throughput?

The V100 32GB delivers 28.3 BF16 TFLOPS and 28.3 FP8 TFLOPS with 900 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does V100 32GB cost per hour?

The V100 32GB is available starting from $0.19/hour via vast_ai. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
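
To compare providers on cost per token rather than cost per hour, combine a price from the table above with a throughput estimate. A minimal sketch, using the ~64 tokens/s roofline figure for a 7B BF16 model from the methodology note (an assumption, not a measured benchmark):

```python
def usd_per_million_tokens(price_per_hr: float, tokens_per_s: float) -> float:
    """Cost of generating one million tokens at a sustained throughput."""
    return price_per_hr / (tokens_per_s * 3600.0) * 1e6

# Spot V100 32GB at $0.19/hr, ~64 tokens/s (assumed 7B BF16 decode estimate)
print(f"${usd_per_million_tokens(0.19, 64):.2f} per 1M tokens")  # ~$0.82
```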