TPU v4

google · tpu · 32 GB HBM · 175W TDP

VRAM: 32 GB
BF16 TFLOPS: 275
Bandwidth: 1200 GB/s
From: $2.25/hr


Spec Sheet

VRAM: 32 GB HBM
Memory Bandwidth: 1200 GB/s
BF16 TFLOPS: 275
FP16 TFLOPS: 275
FP8 TFLOPS: 275
INT8 TOPS: 550
TDP: 175W
Interconnect: PCIe
Max per Node: 4096
PCIe: Gen 4
Tensor Cores: No

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
gcp | $3.22/hr | $2.25/hr | - | Cheapest

Compatible Models (282)

Training Capabilities

Estimated GPU counts for full fine-tuning (AdamW, BF16) and QLoRA; a sketch of the memory arithmetic behind these figures follows the table.

Model Size | Full Fine-Tune | QLoRA
7B model | 5 GPUs | 1 GPU
13B model | 8 GPUs | 1 GPU
70B model | 42 GPUs | 2 GPUs
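
As a rough cross-check, here is a minimal Python sketch of the memory arithmetic, assuming ~16 bytes/parameter for AdamW mixed-precision training (BF16 weights and gradients, FP32 master weights and Adam moments), ~0.5 bytes/parameter for 4-bit QLoRA base weights, and a ~20% overhead factor for activations and fragmentation. The byte counts are standard rules of thumb and the overhead factor is an assumption chosen to match the table, not a measured value.

```python
import math

HBM_GB = 32  # TPU v4 HBM per chip

def full_finetune_gpus(params_b: float) -> int:
    # AdamW mixed precision: 2 B (BF16 weights) + 2 B (BF16 grads)
    # + 4 B (FP32 master weights) + 8 B (FP32 Adam moments) = 16 B/param,
    # plus ~20% for activations and fragmentation (assumed overhead).
    mem_gb = params_b * 16 * 1.2
    return math.ceil(mem_gb / HBM_GB)

def qlora_gpus(params_b: float) -> int:
    # 4-bit base weights (~0.5 B/param) plus ~20% for LoRA adapters,
    # optimizer state, and activations (assumed overhead).
    mem_gb = params_b * 0.5 * 1.2
    return max(1, math.ceil(mem_gb / HBM_GB))

for size in (7, 13, 70):
    print(f"{size}B: {full_finetune_gpus(size)} GPUs full FT, "
          f"{qlora_gpus(size)} GPU(s) QLoRA")
# 7B: 5 GPUs full FT, 1 GPU(s) QLoRA
# 13B: 8 GPUs full FT, 1 GPU(s) QLoRA
# 70B: 42 GPUs full FT, 2 GPU(s) QLoRA
```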

Energy Efficiency

Estimated tokens/second per Watt for popular models; a bandwidth-roofline sketch that roughly reproduces these figures follows the list.

Model | t/s/W | Precision
Mistral 7B | 0.94 | FP8
Qwen 2.5 7B | 0.90 | FP8
Llama 3.1 8B | 0.85 | FP8
DeepSeek V3 | 0.19 | FP8
Llama 3.1 70B | 0.10 | FP8
Qwen 2.5 72B | 0.09 | FP8
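
These figures are consistent with a simple bandwidth-bound decode model: at FP8 (~1 byte per parameter), each generated token streams every active weight from HBM once, so tokens/s ≈ bandwidth / active model bytes, and dividing by the 175W TDP gives t/s/W. A minimal sketch, using approximate active-parameter counts (an assumption on our part), lands within a hundredth or two of the table:

```python
TDP_W = 175      # TPU v4 TDP from the spec sheet
BW_GBPS = 1200   # HBM bandwidth from the spec sheet

def tps_per_watt(active_params_b: float, bytes_per_param: float = 1.0) -> float:
    # Bandwidth-bound decode: every active weight is read once per token,
    # so tokens/s ~= bandwidth / active model bytes (FP8 ~ 1 B/param).
    tokens_per_sec = BW_GBPS / (active_params_b * bytes_per_param)
    return tokens_per_sec / TDP_W

# Parameter counts are approximate; DeepSeek V3 is MoE, so only its
# ~37B active parameters are read per token (assumption).
for name, p in [("Mistral 7B", 7.2), ("Llama 3.1 8B", 8.0),
                ("DeepSeek V3", 37.0), ("Llama 3.1 70B", 70.6)]:
    print(f"{name}: {tps_per_watt(p):.2f} t/s/W")
# Mistral 7B: 0.95 t/s/W    Llama 3.1 8B: 0.86 t/s/W
# DeepSeek V3: 0.19 t/s/W   Llama 3.1 70B: 0.10 t/s/W
```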

Similar GPUs

GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From
TPU v6e (Trillium) | 32 GB | 460 | 1640 | $1.75/hr
TPU v5e | 16 GB | 200 | 820 | $0.85/hr
RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr
V100 32GB | 32 GB | 28.3 | 900 | $0.19/hr
Instinct MI100 | 32 GB | 184.6 | 1229 | $0.40/hr

Methodology Note

Performance estimates for the TPU v4 are based on InferenceBench's roofline performance model with kernel-level optimizations including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (32 GB HBM available), KV-cache allocation, and activation memory. Throughput predictions use the TPU v4's rated 1200 GB/s memory bandwidth and 275 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (tpu). See our full methodology.
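
For readers who want to reproduce the ceilings, here is a minimal sketch of a decode-phase roofline in the spirit described above. It omits the empirical correction factors, KV-cache traffic, and attention-kernel effects, so it is an upper bound only; the ~2 FLOPs per parameter per token rule is a standard approximation, not a figure from this page.

```python
def decode_tps_ceiling(params_b: float, bytes_per_param: float,
                       batch: int = 1, bw_gbps: float = 1200.0,
                       tflops: float = 275.0) -> float:
    # Bandwidth ceiling: weights stream from HBM once per decode step,
    # amortized over the batch (KV-cache traffic ignored for simplicity).
    bw_bound = batch * bw_gbps / (params_b * bytes_per_param)
    # Compute ceiling: ~2 FLOPs per parameter per generated token.
    compute_bound = tflops * 1e12 / (2 * params_b * 1e9)
    return min(bw_bound, compute_bound)

# Llama 3.1 8B in BF16 (2 B/param) at batch 1 is bandwidth-bound:
print(f"{decode_tps_ceiling(8.0, 2.0):.0f} t/s")  # ~75 t/s ceiling
```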

Frequently Asked Questions

How many AI models can run on TPU v4?

The TPU v4 can run 282 AI models from our database within a single node. Compatible models span a wide range of parameter sizes depending on the quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 4096x TPU v4. A rough version of this fit check is sketched below.
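
The sketch below tests whether a model's weights plus a KV cache fit in a node's HBM. The layer and head counts shown match Llama 3.1 8B, and the 10% headroom factor is our assumption for illustration, not the calculator's actual rule; swap in the target model's config for other checks.

```python
def fits_in_hbm(params_b: float, bytes_per_param: float, n_gpus: int = 1,
                hbm_gb: float = 32.0, ctx: int = 8192, n_layers: int = 32,
                n_kv_heads: int = 8, head_dim: int = 128,
                kv_bytes: int = 2) -> bool:
    # Weights plus K and V caches (2 tensors per layer) must fit in HBM;
    # ~10% is held back for activations/fragmentation (assumed headroom).
    weights_gb = params_b * bytes_per_param
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * ctx * kv_bytes / 1e9
    return weights_gb + kv_gb <= n_gpus * hbm_gb * 0.9

# Llama 3.1 8B in BF16 with an 8K context fits on a single TPU v4:
print(fits_in_hbm(8.0, 2.0))  # True (~16 GB weights + ~1.1 GB KV cache)
```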

What is the TPU v4 inference throughput?

The TPU v4 delivers 275 BF16 TFLOPS and 275 FP8 TFLOPS with 1200 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does TPU v4 cost per hour?

The TPU v4 is available starting from $2.25/hour via gcp. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.