V100 32GB

NVIDIA · Volta · 32 GB HBM2 · 300 W TDP

VRAM: 32 GB · BF16: 28.3 TFLOPS · Memory Bandwidth: 900 GB/s · From $0.19/hr


Spec Sheet

VRAM: 32 GB HBM2
Memory Bandwidth: 900 GB/s
BF16 TFLOPS: 28.3
FP16 TFLOPS: 28.3
FP8 TFLOPS: 28.3
INT8 TOPS: 56.5
TDP: 300 W
Interconnect: NVLink
NVLink Bandwidth: 300 GB/s
Max per Node: 8
PCIe: Gen3
CUDA Compute Capability: 7.0
Tensor Cores: Yes

Note: Volta has no native BF16 or FP8 hardware support; the identical BF16/FP8/FP16 figures presumably reflect FP16 fallback throughput.

Pricing by Provider

| Provider | On-Demand | Reserved | Spot | Badge |
|---|---|---|---|---|
| vast_ai | $0.35/hr | - | $0.19/hr | Cheapest |
| tensordock | $0.29/hr | - | $0.19/hr | |
| runpod | $0.49/hr | - | $0.29/hr | |
| aws | $3.06/hr | $1.96/hr | $0.92/hr | |
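As a rough way to compare the rates above, the hourly prices can be turned into monthly figures. A minimal sketch, assuming a flat rate and ~730 hours per month; the `PRICES` dict simply transcribes the table:

```python
# Compare V100 32GB hourly rates from the pricing table above.
# Provider names and prices are from the page; the 730 hr/month figure
# (24 * ~30.4 days) is an assumption for illustration.
PRICES = {
    # provider: (on_demand, spot) in $/hr
    "vast_ai":    (0.35, 0.19),
    "tensordock": (0.29, 0.19),
    "runpod":     (0.49, 0.29),
    "aws":        (3.06, 0.92),
}

def monthly_cost(rate_per_hr: float, hours: float = 730.0) -> float:
    """Cost of one GPU running continuously at a flat hourly rate."""
    return rate_per_hr * hours

# Cheapest spot provider and its monthly cost
cheapest = min(PRICES, key=lambda p: PRICES[p][1])
print(cheapest, round(monthly_cost(PRICES[cheapest][1]), 2))
```

Spot pricing can be preempted, so a real cost model would also discount for interruption risk.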

Compatible Models (235)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B | 5 GPUs | 1 GPU |
| 13B | 8 GPUs | 1 GPU |
| 70B | 42 GPUs | 2 GPUs |
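The counts above can be reproduced with a simple memory model. A sketch, assuming ~16 bytes/param for full AdamW fine-tuning (2 weights + 2 grads + 12 optimizer/master states), ~0.6 bytes/param for QLoRA (4-bit base weights plus adapters), and a 1.2× activation/overhead factor; these coefficients are assumptions, not the site's published methodology:

```python
import math

V100_VRAM_GB = 32

def gpus_needed(params_b: float, bytes_per_param: float,
                overhead: float = 1.2, vram_gb: float = V100_VRAM_GB) -> int:
    """Round total memory (params * bytes/param * overhead) up to whole GPUs."""
    total_gb = params_b * bytes_per_param * overhead
    return math.ceil(total_gb / vram_gb)

# Full fine-tune (AdamW, BF16): ~16 bytes/param
print(gpus_needed(70, 16))    # 70B model -> 42 GPUs
# QLoRA: ~0.6 bytes/param
print(gpus_needed(70, 0.6))   # 70B model -> 2 GPUs
```

With these coefficients the formula matches every row of the table (5/8/42 for full fine-tuning, 1/1/2 for QLoRA).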

Energy Efficiency

Estimated tokens/second per Watt for popular models

| Model | Tokens/s/W | Precision |
|---|---|---|
| Mistral 7B | 0.41 | FP8 |
| Qwen 2.5 7B | 0.39 | FP8 |
| Llama 3.1 8B | 0.37 | FP8 |
| Llama 3.1 70B | 0.04 | FP8 |
| Qwen 2.5 72B | 0.04 | FP8 |

Similar GPUs

| GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From |
|---|---|---|---|---|
| V100 16GB | 16 GB | 28.3 | 900 | $0.15/hr |
| RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr |
| Instinct MI100 | 32 GB | 184.6 | 1229 | $0.40/hr |
| TPU v4 | 32 GB | 275 | 1200 | $2.25/hr |
| TPU v6e (Trillium) | 32 GB | 460 | 1640 | $1.75/hr |