RX 7900 XTX
AMD · RDNA 3 · 24 GB GDDR6 · 355 W TDP
VRAM: 24 GB · BF16: 123 TFLOPS · Bandwidth: 960 GB/s · From: $0.20/hr
Spec Sheet
| Spec | Value |
|---|---|
| VRAM | 24 GB GDDR6 |
| Memory Bandwidth | 960 GB/s |
| BF16 TFLOPS | 123 |
| FP16 TFLOPS | 123 |
| FP8 TFLOPS | 123 |
| INT8 TOPS | 123 |
| TDP | 355 W |
| Interconnect | PCIe |
| Max per Node | 4 |
| PCIe Gen | 4 |
| Tensor Cores | No |
Pricing by Provider
| Provider | On-Demand | Reserved | Spot | Badge |
|---|---|---|---|---|
| Vast.ai | $0.35/hr | - | $0.20/hr | Cheapest |
Compatible Models (218)
Single GPU (153 models)
GigaChat 20B (20B, FP8), InternLM 20B (20B, FP8), InternLM 2.5 20B (19.9B, FP8), CogVLM2 19B (19B, FP8), DeepSeek MoE 16B (16.4B, FP8), CodeGen2 16B (16B, FP8), DeepSeek V2 Lite (15.7B, FP8), OctoCoder 15B (15.5B, FP8), StarCoder2 15B (15.5B, FP8), Nemotron 15B (15B, FP8), Qwen 2.5 14B (14.8B, FP8), DeepSeek R1 Distill 14B (14.8B, FP8), Phi-4 (14.7B, FP8), Qwen 2.5 Coder 14B (14.7B, FP8), Qwen 1.5 MoE A2.7B (14.3B, FP8), RWKV-6 14B (14.1B, FP8), Phi 3 Medium 14B (14B, FP8), Nekomata 14B (14B, FP8), OLMo 2 13B (13B, FP8), Baichuan 2 13B (13B, FP8), +133 more
Multi-GPU (65 models)
Falcon 40B (x2, FP8), VILA 1.5 40B (x2, FP8), Aya 23 35B (x2, FP8), Command R (x2, FP8), Command R (August 2024) (x2, FP8), Yi 1.5 34B (x2, FP8), Code Llama 34B (x2, FP8), DeepSeek Coder 33B (x2, FP8), Vicuna 33B (x2, FP8), WizardCoder 33B (x2, FP8), DeepSeek R1 Distill 32B (x2, FP8), Qwen 3 32B (x2, FP8), Qwen 2.5 32B (x2, FP8), Qwen 2.5 Coder 32B (x2, FP8), Qwen 3 30B-A3B (x2, FP8), +50 more
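The single-vs-multi-GPU split above follows from FP8 weight size versus the card's 24 GB of VRAM. A minimal sketch of that sizing rule, assuming roughly 1 byte per parameter at FP8 plus ~20% headroom for KV cache and activations; the 20% figure and the `gpus_needed_fp8` helper are illustrative assumptions, not the site's actual formula:

```python
import math

VRAM_GB = 24  # RX 7900 XTX

def gpus_needed_fp8(params_b: float, headroom: float = 1.2) -> int:
    """Estimate GPUs needed to serve a model at FP8 on 24 GB cards."""
    weight_gb = params_b  # FP8 is ~1 byte per parameter
    return math.ceil(weight_gb * headroom / VRAM_GB)

print(gpus_needed_fp8(19.9))  # InternLM 2.5 20B -> 1 GPU
print(gpus_needed_fp8(32))    # Qwen 3 32B       -> 2 GPUs
```

Under these assumptions a 19.9B model lands just under one card (~23.9 GB needed), while 32B-class models cross into two, which matches the lists above.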
Training Capabilities
Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA
| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B model | 6 GPUs | 1 GPU |
| 13B model | 11 GPUs | 1 GPU |
| 70B model | 55 GPUs | 2 GPUs |
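The full fine-tune column is consistent with the standard AdamW memory rule of thumb: BF16 weights (2 bytes/param) + BF16 gradients (2) + FP32 master weights and two Adam moments (12) give roughly 16 bytes per parameter, plus activation overhead. A hedged sketch, assuming a flat ~20% activation overhead; the exact overhead model behind the table is not published:

```python
import math

VRAM_GB = 24

def full_finetune_gpus(params_b: float, overhead: float = 1.2) -> int:
    # AdamW in BF16: 2 B weights + 2 B grads + 12 B optimizer state
    # (FP32 master copy + two moments) = 16 bytes/param. The ~20%
    # activation overhead is an assumption; real overhead varies with
    # batch size, sequence length, and sharding strategy.
    bytes_per_param = 16
    total_gb = params_b * bytes_per_param * overhead
    return math.ceil(total_gb / VRAM_GB)

for size in (7, 13, 70):
    print(f"{size}B -> {full_finetune_gpus(size)} GPUs")
# 7B -> 6, 13B -> 11, 70B -> 56 (the table shows 55; the small
# difference comes from the assumed overhead factor)
```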
Energy Efficiency
Estimated tokens/second per Watt for popular models
| Model | Tokens/s per Watt | Precision |
|---|---|---|
| Mistral 7B | 0.37 | FP8 |
| Qwen 2.5 7B | 0.36 | FP8 |
| Llama 3.1 8B | 0.34 | FP8 |
| Llama 3.1 70B | 0.04 | FP8 |
| Qwen 2.5 72B | 0.04 | FP8 |
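To turn these per-Watt figures into absolute throughput and serving cost, multiply by the 355 W TDP and divide the $0.20/hr spot price by tokens per hour. A quick sketch, assuming the card draws its full TDP and ignoring multi-GPU scaling for the 70B-class models; both are simplifications:

```python
TDP_W = 355          # RX 7900 XTX board power
SPOT_PER_HR = 0.20   # cheapest spot price above

def cost_per_million_tokens(tps_per_watt: float) -> tuple[float, float]:
    tps = tps_per_watt * TDP_W              # absolute tokens/s
    usd = SPOT_PER_HR / (tps * 3600 / 1e6)  # $ per 1M tokens
    return tps, usd

for name, eff in [("Mistral 7B", 0.37), ("Llama 3.1 70B", 0.04)]:
    tps, usd = cost_per_million_tokens(eff)
    print(f"{name}: ~{tps:.0f} tok/s, ~${usd:.2f} per 1M tokens")
# Mistral 7B:    ~131 tok/s, ~$0.42 per 1M tokens
# Llama 3.1 70B: ~14 tok/s,  ~$3.91 per 1M tokens (per card; a 70B
# deployment spans multiple GPUs, scaling both power and price)
```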
Similar GPUs
| GPU | VRAM | BF16 TFLOPS | Bandwidth (GB/s) | From |
|---|---|---|---|---|
| Radeon PRO W7900 | 48 GB | 122 | 864 | $0.85/hr |
| A10G | 24 GB | 35 | 600 | $0.30/hr |
| A30 | 24 GB | 165 | 933 | $0.35/hr |
| L4 | 24 GB | 121 | 300 | $0.29/hr |
| RTX 4090 | 24 GB | 165 | 1008 | $0.39/hr |