T4
NVIDIA · Turing · 16 GB GDDR6 · 70 W TDP
- VRAM: 16 GB
- BF16 TFLOPS: 65
- Bandwidth: 300 GB/s
- From: $0.12/hr
Pricing by Provider
| Provider | On-Demand | Reserved | Spot | Badge |
|---|---|---|---|---|
| TensorDock | $0.19/hr | - | $0.12/hr | Cheapest |
| Vast.ai | $0.25/hr | - | $0.14/hr | |
| AWS | $0.53/hr | $0.33/hr | $0.16/hr | |
| GCP | $0.35/hr | $0.22/hr | - | |
| RunPod | $0.37/hr | - | $0.22/hr | |
| Azure | $0.45/hr | $0.28/hr | - | |
Compatible Models (246)
Single GPU (150 models)
Multi-GPU (96 models)
Training Capabilities
Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA
| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B model | 9 GPUs | 1 GPU |
| 13B model | 16 GPUs | 1 GPU |
| 70B model | 83 GPUs | 3 GPUs |
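The table above can be roughly reproduced with back-of-the-envelope memory arithmetic. A minimal sketch, assuming rule-of-thumb footprints of ~16 bytes/parameter for BF16 AdamW full fine-tuning (weights, gradients, and optimizer states) and ~0.55 bytes/parameter for a 4-bit QLoRA base model plus adapter overhead, with ~15% of VRAM held back for activations; these constants are illustrative assumptions chosen to approximate the table, not InferenceBench's published model:

```python
import math

T4_VRAM_GB = 16

# Illustrative rule-of-thumb footprints (assumptions, not exact methodology):
# full fine-tune, BF16 + AdamW: 2 B weights + 2 B grads + 12 B optimizer state
FULL_FT_BYTES_PER_PARAM = 16.0
# QLoRA: 4-bit (NF4) base weights ~0.5 B/param plus adapter/quantization overhead
QLORA_BYTES_PER_PARAM = 0.55
USABLE_FRACTION = 0.85  # headroom reserved for activations and fragmentation

def t4_count(params_b: float, bytes_per_param: float) -> int:
    """Estimate the number of T4s needed to hold the training footprint."""
    total_gb = params_b * bytes_per_param  # billions of params x bytes/param -> GB
    return math.ceil(total_gb / (T4_VRAM_GB * USABLE_FRACTION))

for size_b in (7, 13, 70):
    print(f"{size_b}B: full fine-tune ~{t4_count(size_b, FULL_FT_BYTES_PER_PARAM)} GPUs, "
          f"QLoRA ~{t4_count(size_b, QLORA_BYTES_PER_PARAM)} GPU(s)")
```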
Energy Efficiency
Estimated tokens/second per Watt for popular models
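The metric itself is simple arithmetic: estimated decode throughput divided by the T4's 70 W TDP. A minimal sketch (the example throughput is a placeholder, not a measured value):

```python
T4_TDP_W = 70.0  # T4 rated board power

def tokens_per_watt(tokens_per_sec: float, tdp_w: float = T4_TDP_W) -> float:
    """Energy efficiency: decode tokens/second per Watt at rated TDP."""
    return tokens_per_sec / tdp_w

# e.g. a model decoding at ~85 tok/s would score ~1.2 tok/s/W on the T4
print(f"{tokens_per_watt(85.0):.2f} tok/s/W")
```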
Similar GPUs
| GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From |
|---|---|---|---|---|
| A4000 | 16 GB | 76 | 448 | $0.17/hr |
| RTX 4080 | 16 GB | 97 | 717 | $0.32/hr |
| V100 16GB | 16 GB | 28.3 | 900 | $0.15/hr |
| RTX 4060 Ti 16GB | 16 GB | 44 | 288 | $0.30/hr |
| A2 | 16 GB | 18 | 200 | $0.15/hr |
Methodology Note
Performance estimates for the T4 are based on InferenceBench's roofline performance model with CUDA kernel-level optimization, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (16 GB GDDR6 available), KV-cache allocation, and activation memory. Throughput predictions use the T4's rated 300 GB/s memory bandwidth and 65 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Turing). See our full methodology.
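To make the roofline logic concrete, a minimal sketch assuming a batch-1 decode model (every generated token streams the full weights, so small batches are bandwidth-bound while prefill is compute-bound); the correction factor and FLOPs-per-token figure here are illustrative assumptions, not InferenceBench's calibrated values:

```python
T4_BW_GBS = 300.0        # rated memory bandwidth ceiling
T4_TENSOR_TFLOPS = 65.0  # rated half-precision Tensor compute ceiling
TURING_CORRECTION = 0.7  # illustrative empirical correction factor (assumption)

def decode_tokens_per_sec(params_b: float, bytes_per_param: float,
                          gflops_per_token: float) -> float:
    """Roofline estimate: throughput is capped by the slower of the two ceilings."""
    weight_gb = params_b * bytes_per_param                 # GB streamed per decoded token
    bandwidth_bound = T4_BW_GBS / weight_gb                # tokens/s if memory-bound
    compute_bound = (T4_TENSOR_TFLOPS * 1e3) / gflops_per_token  # tokens/s if compute-bound
    return TURING_CORRECTION * min(bandwidth_bound, compute_bound)

# 7B model, 4-bit weights (~0.5 B/param); decode needs ~2 FLOPs/param ≈ 14 GFLOPs/token
print(f"{decode_tokens_per_sec(7, 0.5, 14.0):.0f} tok/s")  # ≈ 60, bandwidth-bound
```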
Frequently Asked Questions
How many AI models can run on the T4?
The T4 can run 246 AI models from our database within a single node. Compatible models span a range of parameter sizes depending on quantization precision (BF16, INT8, INT4; Turing has no native FP8 support). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x T4.
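A hedged sketch of the kind of fit check behind these counts, assuming weights-only sizing at common byte widths plus a flat KV-cache/activation reserve (both the reserve and the per-precision widths are simplifying assumptions):

```python
import math

T4_VRAM_GB = 16
BYTES_PER_PARAM = {"BF16": 2.0, "INT8": 1.0, "INT4": 0.5}

def t4s_required(params_b: float, precision: str, reserve_gb: float = 2.0) -> int:
    """GPUs needed to hold the weights plus a flat KV-cache/activation reserve."""
    weight_gb = params_b * BYTES_PER_PARAM[precision]
    return math.ceil((weight_gb + reserve_gb) / T4_VRAM_GB)

print(t4s_required(7, "BF16"))   # 1: 14 GB weights + reserve fits on a single T4
print(t4s_required(70, "INT4"))  # 3: 35 GB weights + reserve spans multiple T4s
```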
What is the T4's inference throughput?
The T4 delivers 65 FP16 Tensor TFLOPS (Turing has no native BF16 or FP8 support; INT8 runs at 130 TOPS) with 300 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on model size, precision, and batch size. Use our calculator for model-specific throughput estimates.
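For example, at batch size 1 a 13B model quantized to INT8 (~13 GB of weights) is bandwidth-bound on the T4: each generated token re-reads the full weights, so the roofline ceiling is roughly 300 GB/s ÷ 13 GB ≈ 23 tokens/s, before the per-architecture correction factor described in the methodology note above.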
How much does the T4 cost per hour?
The T4 is available starting from $0.12/hour via TensorDock. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
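To relate the hourly rate to workload cost, a small conversion sketch (the decode rate here is an assumed placeholder, not a benchmarked T4 figure):

```python
def usd_per_million_tokens(price_per_hr: float, tokens_per_sec: float) -> float:
    """Convert an hourly GPU rate into dollars per million generated tokens."""
    tokens_per_hr = tokens_per_sec * 3600.0
    return price_per_hr / tokens_per_hr * 1e6

# At the $0.12/hr spot rate and an assumed ~60 tok/s decode rate:
print(f"${usd_per_million_tokens(0.12, 60.0):.2f} per 1M tokens")  # ≈ $0.56
```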