TPU v6e (Trillium)
google · tpu · 32 GB HBM · 200W TDP
VRAM: 32 GB · BF16 TFLOPS: 460 · Bandwidth: 1640 GB/s · From: $1.75/hr
Spec Sheet
VRAM: 32 GB HBM
Memory Bandwidth: 1640 GB/s
BF16 TFLOPS: 460
FP16 TFLOPS: 460
FP8 TFLOPS: 920
INT8 TOPS: 920
TDP: 200W
Interconnect: PCIe
Max per Node: 256
PCIe: Gen5
Tensor Cores: No
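One derived number worth knowing for capacity planning is the roofline balance point these figures imply. A minimal sketch (the ~280 FLOPs/byte result is computed from the peak BF16 throughput and HBM bandwidth above; it is not an official spec):

```python
# Roofline balance point from the TPU v6e spec sheet above (derived, not official).
BF16_TFLOPS = 460    # peak dense BF16 throughput
HBM_BW_GBS = 1640    # HBM bandwidth, GB/s

# Arithmetic intensity (FLOPs per byte) above which a kernel is
# compute-bound rather than memory-bound: peak FLOPs / bytes per second.
balance = (BF16_TFLOPS * 1e12) / (HBM_BW_GBS * 1e9)
print(f"compute-bound above ~{balance:.0f} FLOPs/byte")  # ~280 FLOPs/byte
```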
Pricing by Provider
| Provider | On-Demand | Reserved | Spot | Badge |
|---|---|---|---|---|
| GCP | $2.50/hr | $1.75/hr | - | Cheapest |
Compatible Models (253)
Single GPU (160 models)
Gemma 2 27B (27B, FP8) · Gemma 3 27B (27B, FP8) · InternVL2 26B (26B, FP8) · Mistral Small 24B (24B, FP8) · Mistral Small 3.1 24B (24B, FP8) · Codestral 22B (22B, FP8) · Solar Pro 22B (22B, FP8) · GigaChat 20B (20B, FP8) · InternLM 20B (20B, FP8) · InternLM 2.5 20B (19.9B, FP8) · CogVLM2 19B (19B, FP8) · DeepSeek MoE 16B (16.4B, FP8) · CodeGen2 16B (16B, FP8) · DeepSeek V2 Lite (15.7B, FP8) · OctoCoder 15B (15.5B, FP8) · StarCoder2 15B (15.5B, FP8) · Nemotron 15B (15B, FP8) · Qwen 2.5 14B (14.8B, FP8) · DeepSeek R1 Distill 14B (14.8B, FP8) · Phi-4 (14.7B, FP8) · +140 more
Multi-GPU (93 models)
Jamba 1.5 Mini (x2, FP8) · Llama 3.1 Nemotron 51B (x2, FP8) · Amazon Nova Pro (x2, FP8) · Mixtral 8x7B (x2, FP8) · Mixtral 8x7B Instruct (x2, FP8) · Phi 3.5 MoE (x2, FP8) · Falcon 40B (x2, FP8) · VILA 1.5 40B (x2, FP8) · Aya 23 35B (x2, FP8) · Command R (x2, FP8) · Command R (August 2024) (x2, FP8) · Yi 1.5 34B (x2, FP8) · Code Llama 34B (x2, FP8) · DeepSeek Coder 33B (x2, FP8) · Vicuna 33B (x2, FP8) · +78 more
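The single- vs multi-GPU split above is consistent with a simple FP8 weight-fit rule against the 32 GB of HBM. A rough sketch of that heuristic (the ~15% headroom reserved for KV cache and runtime is an assumption, not the site's published rule):

```python
import math

HBM_GB = 32       # per-chip HBM on TPU v6e
HEADROOM = 0.85   # assumed fraction usable for weights (rest: KV cache, runtime)

def chips_needed(params_b: float, bytes_per_param: float = 1.0) -> int:
    """Chips required to hold the weights at FP8 (1 byte/param)."""
    weight_gb = params_b * bytes_per_param
    return max(1, math.ceil(weight_gb / (HBM_GB * HEADROOM)))

print(chips_needed(27))  # Gemma 2 27B -> 1 (single chip, as listed)
print(chips_needed(34))  # Yi 1.5 34B  -> 2 (x2, as listed)
```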
Training Capabilities
Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA
| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B model | 5 GPUs | 1 GPU |
| 13B model | 8 GPUs | 1 GPU |
| 70B model | 42 GPUs | 2 GPUs |
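These counts track a standard memory model: BF16 weights and gradients (2 bytes each), FP32 master weights plus the two AdamW moment buffers (12 bytes), and an activation/fragmentation margin. The exact multipliers below are back-solved assumptions that happen to reproduce the table, not published figures:

```python
import math

HBM_GB = 32  # per-chip HBM

def full_finetune_chips(params_b: float) -> int:
    # 2 (bf16 weights) + 2 (bf16 grads) + 12 (fp32 master + AdamW m, v)
    # = 16 bytes/param, plus ~20% assumed for activations and fragmentation.
    gb = params_b * 16 * 1.2
    return math.ceil(gb / HBM_GB)

def qlora_chips(params_b: float) -> int:
    # 4-bit base weights (~0.5 byte/param) plus ~10% assumed for adapters,
    # adapter optimizer state, and activations.
    gb = params_b * 0.5 * 1.1
    return max(1, math.ceil(gb / HBM_GB))

for size in (7, 13, 70):
    print(size, full_finetune_chips(size), qlora_chips(size))
# 7 -> 5, 1 | 13 -> 8, 1 | 70 -> 42, 2  (matches the table)
```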
Energy Efficiency
Estimated tokens/second per Watt for popular models
| Model | Tokens/s per Watt | Precision |
|---|---|---|
| Mistral 7B | 1.12 t/s/W | FP8 |
| Qwen 2.5 7B | 1.08 t/s/W | FP8 |
| Llama 3.1 8B | 1.02 t/s/W | FP8 |
| DeepSeek V3 | 0.22 t/s/W | FP8 |
| Llama 3.1 70B | 0.12 t/s/W | FP8 |
| Qwen 2.5 72B | 0.11 t/s/W | FP8 |
Similar GPUs
| GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From |
|---|---|---|---|---|
| TPU v4 | 32 GB | 275 | 1200 | $2.25/hr |
| TPU v5e | 16 GB | 200 | 820 | $0.85/hr |
| RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr |
| V100 32GB | 32 GB | 28.3 | 900 | $0.19/hr |
| Instinct MI100 | 32 GB | 184.6 | 1229 | $0.40/hr |