Cloud AI 100
Qualcomm · 32 GB LPDDR4X · 75W TDP
VRAM: 32 GB · BF16: 150 TFLOPS · Bandwidth: 134 GB/s · From $0.00/hr
Spec Sheet
| Spec | Value |
|---|---|
| VRAM | 32 GB LPDDR4X |
| Memory Bandwidth | 134 GB/s |
| BF16 TFLOPS | 150 |
| FP16 TFLOPS | 150 |
| FP8 TFLOPS | 300 |
| INT8 TOPS | 400 |
| TDP | 75W |
| Interconnect | PCIe |
| Max per Node | 8 |
| PCIe | Gen4 |
| Tensor Cores | No |
Pricing by Provider
| Provider | On-Demand | Reserved | Spot | Badge |
|---|---|---|---|---|
| Qualcomm | $0.00/hr | - | - | Cheapest |
Compatible Models (235)
Single GPU (160 models)
Gemma 2 27B (27B, FP8) · Gemma 3 27B (27B, FP8) · InternVL2 26B (26B, FP8) · Mistral Small 24B (24B, FP8) · Mistral Small 3.1 24B (24B, FP8) · Codestral 22B (22B, FP8) · Solar Pro 22B (22B, FP8) · GigaChat 20B (20B, FP8) · InternLM 20B (20B, FP8) · InternLM 2.5 20B (19.9B, FP8) · CogVLM2 19B (19B, FP8) · DeepSeek MoE 16B (16.4B, FP8) · CodeGen2 16B (16B, FP8) · DeepSeek V2 Lite (15.7B, FP8) · OctoCoder 15B (15.5B, FP8) · StarCoder2 15B (15.5B, FP8) · Nemotron 15B (15B, FP8) · Qwen 2.5 14B (14.8B, FP8) · DeepSeek R1 Distill 14B (14.8B, FP8) · Phi-4 (14.7B, FP8) · +140 more
Multi-GPU (75 models)
Jamba 1.5 Mini (×2, FP8) · Llama 3.1 Nemotron 51B (×2, FP8) · Amazon Nova Pro (×2, FP8) · Mixtral 8x7B (×2, FP8) · Mixtral 8x7B Instruct (×2, FP8) · Phi 3.5 MoE (×2, FP8) · Falcon 40B (×2, FP8) · VILA 1.5 40B (×2, FP8) · Aya 23 35B (×2, FP8) · Command R (×2, FP8) · Command R (August 2024) (×2, FP8) · Yi 1.5 34B (×2, FP8) · Code Llama 34B (×2, FP8) · DeepSeek Coder 33B (×2, FP8) · Vicuna 33B (×2, FP8) · +60 more
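The single-GPU vs. multi-GPU split above roughly matches a back-of-the-envelope FP8 memory check against the card's 32 GB: one byte per parameter, plus some headroom for KV cache and activations. A minimal sketch (the 1.1× overhead factor is an assumption, not a figure from this page):

```python
import math

VRAM_GB = 32        # per Cloud AI 100 card
FP8_BYTES = 1.0     # one byte per parameter at FP8
OVERHEAD = 1.1      # assumed ~10% margin for KV cache / activations

def min_cards_inference(params_b: float) -> int:
    """Minimum cards to serve a model of params_b billion parameters at FP8."""
    return math.ceil(params_b * FP8_BYTES * OVERHEAD / VRAM_GB)

print(min_cards_inference(27))   # Gemma 2 27B -> fits on one card
print(min_cards_inference(33))   # DeepSeek Coder 33B -> needs two
```

With these assumptions the crossover sits near 29B parameters, consistent with 27B models appearing in the single-GPU list and 33B+ models in the ×2 list.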
Training Capabilities
Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA
| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B model | 5 GPUs | 1 GPU |
| 13B model | 8 GPUs | 1 GPU |
| 70B model | 42 GPUs | 2 GPUs |
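The counts in the table are consistent with a simple memory model: roughly 16 bytes/param for a full AdamW fine-tune in BF16 (2 weights + 2 gradients + 12 FP32 optimizer state) versus ~0.5 bytes/param for a 4-bit QLoRA base, with a ~20% activation/buffer margin. A sketch (the exact byte counts and the 1.2× margin are assumptions chosen to reproduce the table, not published figures):

```python
import math

VRAM_GB = 32    # per Cloud AI 100 card
OVERHEAD = 1.2  # assumed ~20% margin for activations and buffers

def cards_needed(params_b: float, bytes_per_param: float) -> int:
    return math.ceil(params_b * bytes_per_param * OVERHEAD / VRAM_GB)

def training_cards(params_b: float) -> tuple[int, int]:
    # Full fine-tune, AdamW in BF16: 2 (weights) + 2 (grads)
    # + 12 (FP32 master weights + two Adam moments) = 16 bytes/param
    full = cards_needed(params_b, 16.0)
    # QLoRA: the 4-bit frozen base (~0.5 bytes/param) dominates;
    # LoRA adapters and their optimizer state are negligible
    qlora = cards_needed(params_b, 0.5)
    return full, qlora

for size in (7, 13, 70):
    print(f"{size}B -> full: {training_cards(size)[0]}, qlora: {training_cards(size)[1]}")
```

Under these assumptions a 70B full fine-tune needs 70 × 16 × 1.2 = 1344 GB, i.e. 42 cards at 32 GB each, matching the table.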
Energy Efficiency
Estimated tokens/second per Watt for popular models
| Model | Tokens/s per Watt | Precision |
|---|---|---|
| Mistral 7B | 0.24 | FP8 |
| Qwen 2.5 7B | 0.24 | FP8 |
| Llama 3.1 8B | 0.22 | FP8 |
| Llama 3.1 70B | 0.03 | FP8 |
| Qwen 2.5 72B | 0.02 | FP8 |
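Tokens/s per watt converts directly into absolute throughput and energy per token at the card's 75W TDP. A quick sketch (assumes the card actually draws its full TDP while decoding):

```python
TDP_W = 75  # Cloud AI 100 TDP

def throughput_tps(tps_per_watt: float, power_w: float = TDP_W) -> float:
    """Absolute decode throughput implied by an efficiency figure."""
    return tps_per_watt * power_w

def joules_per_token(tps_per_watt: float) -> float:
    """1 / (tokens/s/W) = watt-seconds, i.e. joules, per token."""
    return 1.0 / tps_per_watt

# Mistral 7B at 0.24 t/s/W:
print(throughput_tps(0.24))    # ~18 tokens/s at full TDP
print(joules_per_token(0.24))  # ~4.17 J per token
```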
Similar GPUs
| GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From |
|---|---|---|---|---|
| Trainium2 | 96 GB | 756 | 3200 | $1.95/hr |
| Groq LPU | 230 MB (SRAM) | 188 | 80000 | $0.00/hr |
| RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr |
| V100 32GB | 32 GB | 28.3 | 900 | $0.19/hr |
| Instinct MI100 | 32 GB | 184.6 | 1229 | $0.40/hr |