T4

NVIDIA · Turing · 16 GB GDDR6 · 70W TDP

VRAM: 16 GB
FP16 TFLOPS: 65
Bandwidth: 300 GB/s
From: $0.12/hr


Spec Sheet

VRAM: 16 GB GDDR6
Memory Bandwidth: 300 GB/s
BF16 TFLOPS: 65 (no native BF16 on Turing; FP16 rate shown)
FP16 TFLOPS: 65
FP8 TFLOPS: 65 (no native FP8 on Turing; FP16 rate shown)
INT8 TOPS: 130
TDP: 70W
Interconnect: PCIe
Max per Node: 8
PCIe Gen: 3
CUDA Compute Capability: 7.5
Tensor Cores: Yes

Pricing by Provider

Provider     On-Demand   Reserved   Spot       Badge
TensorDock   $0.19/hr    -          $0.12/hr   Cheapest
Vast.ai      $0.25/hr    -          $0.14/hr
AWS          $0.53/hr    $0.33/hr   $0.16/hr
GCP          $0.35/hr    $0.22/hr   -
RunPod       $0.37/hr    -          $0.22/hr
Azure        $0.45/hr    $0.28/hr   -

Compatible Models (246)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, 16-bit mixed precision) and QLoRA; a rough sizing sketch follows the table.

Model Size   Full Fine-Tune   QLoRA
7B model     9 GPUs           1 GPU
13B model    16 GPUs          1 GPU
70B model    83 GPUs          3 GPUs
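
As a sanity check, these counts are consistent with a simple bytes-per-parameter budget: roughly 16 bytes/param for mixed-precision AdamW (16-bit weights and gradients plus FP32 master weights and two FP32 optimizer moments) with ~18% headroom for activations, and about 0.55 bytes/param for a 4-bit QLoRA base model. A minimal sketch of that arithmetic (the constants are assumptions chosen to match the table, not the site's published model):

```python
import math

GPU_VRAM_GB = 16  # T4

def gpus_needed(params_b: float, bytes_per_param: float) -> int:
    """Round the total memory footprint up to whole 16 GB GPUs."""
    total_gb = params_b * bytes_per_param  # params in billions -> GB
    return math.ceil(total_gb / GPU_VRAM_GB)

# Full fine-tune, mixed precision with AdamW:
# 2 (weights) + 2 (grads) + 4 (FP32 master) + 8 (two FP32 moments) = 16 B/param,
# plus ~18% for activations and fragmentation (assumed).
FULL_FT_BYTES = 16 * 1.18   # ~18.9 bytes per parameter
# QLoRA: 4-bit NF4 base weights (~0.5 B/param) plus adapters and activations (assumed).
QLORA_BYTES = 0.55

for size_b in (7, 13, 70):
    print(f"{size_b}B: full FT ~{gpus_needed(size_b, FULL_FT_BYTES)} GPUs, "
          f"QLoRA ~{gpus_needed(size_b, QLORA_BYTES)} GPU(s)")
# -> 7B: 9 / 1, 13B: 16 / 1, 70B: 83 / 3, matching the table above
```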

Energy Efficiency

Estimated tokens/second per Watt for popular models; the arithmetic behind these figures is sketched after the list.

Mistral 7B       0.59 t/s/W   FP8
Qwen 2.5 7B      0.56 t/s/W   FP8
Llama 3.1 8B     0.53 t/s/W   FP8
Llama 3.1 70B    0.06 t/s/W   FP8
Qwen 2.5 72B     0.06 t/s/W   FP8

Similar GPUs

GPU                VRAM    BF16 TFLOPS   BW (GB/s)   From
A4000              16 GB   76            448         $0.17/hr
RTX 4080           16 GB   97            717         $0.32/hr
V100 16GB          16 GB   28.3          900         $0.15/hr
RTX 4060 Ti 16GB   16 GB   44            288         $0.30/hr
A2                 16 GB   18            200         $0.15/hr

Methodology Note

Performance estimates for the T4 are based on InferenceBench's roofline performance model with CUDA kernel-level optimization, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (16 GB GDDR6 available), KV-cache allocation, and activation memory. Throughput predictions use the T4's rated 300 GB/s memory bandwidth and 65 FP16 tensor TFLOPS as roofline ceilings, with empirical correction factors per GPU architecture (Turing). See our full methodology.
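
In roofline terms, decode throughput is capped by whichever ceiling binds first: compute (FLOPs per token against the TFLOPS rating) or memory (weight bytes streamed per step against bandwidth). A minimal sketch of that ceiling calculation, ignoring KV-cache traffic and the empirical correction factors:

```python
def decode_roofline_tokens_per_sec(params_b: float, bytes_per_weight: float,
                                   tflops: float = 65.0, bw_gbps: float = 300.0,
                                   batch: int = 1) -> float:
    """Upper-bound decode tokens/s on one GPU as min(compute roof, memory roof)."""
    flops_per_token = 2.0 * params_b * 1e9           # ~2 FLOPs per weight per token
    compute_roof = tflops * 1e12 / flops_per_token   # tokens/s across the whole batch
    weight_gb = params_b * bytes_per_weight
    memory_roof = batch * bw_gbps / weight_gb        # weights stream once per decode step
    return min(compute_roof, memory_roof)

# 7B model, FP16 weights, batch 1: memory-bound at ~21 tok/s on a T4,
# far below the ~4,600 tok/s compute roof -- which is why batching helps.
print(decode_roofline_tokens_per_sec(7, 2.0))
```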

Frequently Asked Questions

How many AI models can run on T4?

The T4 can run 246 AI models from our database within a single node. Compatible models span a range of parameter sizes depending on quantization precision (FP16, INT8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x T4.
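
A quick way to reason about fit: weights at the chosen precision, plus headroom for KV cache and activations, must stay under total VRAM. A rough sketch (the 20% headroom is an assumption for illustration, not the site's compatibility rule):

```python
def fits_on_node(params_b: float, bytes_per_weight: float,
                 n_gpus: int = 1, vram_gb: float = 16.0,
                 overhead: float = 1.2) -> bool:
    """Rough fit check: weights plus ~20% KV-cache/activation headroom vs. total VRAM."""
    return params_b * bytes_per_weight * overhead <= n_gpus * vram_gb

print(fits_on_node(8, 2.0))               # Llama 3.1 8B, FP16, 1x T4  -> False
print(fits_on_node(8, 0.5))               # 8B at INT4, 1x T4          -> True
print(fits_on_node(70, 0.5, n_gpus=8))    # 70B at INT4, 8x T4 node    -> True
```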

What is the T4 inference throughput?

The T4 delivers 65 FP16 tensor TFLOPS and 130 INT8 TOPS with 300 GB/s memory bandwidth; Turing has no native BF16 or FP8 support. Actual inference throughput (tokens/sec) depends on model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does T4 cost per hour?

The T4 is available starting from $0.12/hour via TensorDock. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
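
To convert hourly prices into cost per token, divide the hourly rate by tokens generated per hour. A small sketch using the spot price and the bandwidth-bound 7B throughput estimate from the efficiency section (both illustrative assumptions):

```python
def usd_per_million_tokens(price_per_hr: float, tokens_per_sec: float) -> float:
    """Hourly rental cost amortized over generated tokens."""
    return price_per_hr / (tokens_per_sec * 3600.0) * 1e6

# T4 spot at $0.12/hr serving a 7B model at ~41 tok/s (bandwidth-bound estimate):
print(round(usd_per_million_tokens(0.12, 41), 2))  # ~$0.81 per 1M tokens
```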