Instinct MI100
AMD · CDNA · 32 GB HBM2 · 300W TDP
VRAM: 32 GB · BF16 TFLOPS: 92.3 · Bandwidth: 1229 GB/s · From: $0.40/hr
Spec Sheet
| Spec | Value |
|---|---|
| VRAM | 32 GB HBM2 |
| Memory Bandwidth | 1229 GB/s |
| BF16 TFLOPS | 92.3 |
| FP16 TFLOPS | 184.6 |
| FP8 TFLOPS | 184.6 (no native FP8 on CDNA; runs via FP16 compute) |
| INT8 TOPS | 184.6 |
| TDP | 300W |
| Interconnect | Infinity Fabric |
| Max per Node | 8 |
| Host Interface | PCIe Gen4 |
| Tensor Cores | No (AMD Matrix Cores) |
Pricing by Provider
| Provider | On-Demand | Reserved | Spot | Notes |
|---|---|---|---|---|
| Vast.ai | $0.60/hr | - | $0.40/hr | Cheapest |
Compatible Models (235)
Single GPU (160 models)
Gemma 2 27B (27B, FP8) · Gemma 3 27B (27B, FP8) · InternVL2 26B (26B, FP8) · Mistral Small 24B (24B, FP8) · Mistral Small 3.1 24B (24B, FP8) · Codestral 22B (22B, FP8) · Solar Pro 22B (22B, FP8) · GigaChat 20B (20B, FP8) · InternLM 20B (20B, FP8) · InternLM 2.5 20B (19.9B, FP8) · CogVLM2 19B (19B, FP8) · DeepSeek MoE 16B (16.4B, FP8) · CodeGen2 16B (16B, FP8) · DeepSeek V2 Lite (15.7B, FP8) · OctoCoder 15B (15.5B, FP8) · StarCoder2 15B (15.5B, FP8) · Nemotron 15B (15B, FP8) · Qwen 2.5 14B (14.8B, FP8) · DeepSeek R1 Distill 14B (14.8B, FP8) · Phi-4 (14.7B, FP8) · +140 more
Multi-GPU (75 models)
Jamba 1.5 Mini (×2, FP8) · Llama 3.1 Nemotron 51B (×2, FP8) · Amazon Nova Pro (×2, FP8) · Mixtral 8x7B (×2, FP8) · Mixtral 8x7B Instruct (×2, FP8) · Phi 3.5 MoE (×2, FP8) · Falcon 40B (×2, FP8) · VILA 1.5 40B (×2, FP8) · Aya 23 35B (×2, FP8) · Command R (×2, FP8) · Command R (August 2024) (×2, FP8) · Yi 1.5 34B (×2, FP8) · Code Llama 34B (×2, FP8) · DeepSeek Coder 33B (×2, FP8) · Vicuna 33B (×2, FP8) · +60 more
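The single- vs. multi-GPU split above follows from weight memory alone: FP8 weights take roughly 1 byte per parameter, so a model fits on one 32 GB MI100 when its parameter count (plus some headroom for KV cache and activations) stays under 32 GB. A minimal sketch of that sizing rule — the `gpus_needed` helper and the 10% headroom factor are our assumptions, not the site's actual logic:

```python
import math

def gpus_needed(params_b: float, bytes_per_param: float = 1.0,
                vram_gb: float = 32.0, headroom: float = 1.1) -> int:
    """Rough GPU count for serving a model: FP8 weights (~1 byte/param)
    plus ~10% headroom for KV cache and activations (assumed values)."""
    return math.ceil(params_b * bytes_per_param * headroom / vram_gb)

# Gemma 2 27B in FP8: ~27 GB + headroom still fits in 32 GB -> 1 GPU
print(gpus_needed(27.0))   # 1
# Mixtral 8x7B has ~46.7B total params -> needs 2 GPUs
print(gpus_needed(46.7))   # 2
```

Models near the 27B boundary are tight fits; longer contexts grow the KV cache and can push a borderline model into the multi-GPU column.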
Training Capabilities
Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA
| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B model | 5 GPUs | 1 GPU |
| 13B model | 8 GPUs | 1 GPU |
| 70B model | 42 GPUs | 2 GPUs |
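The table's counts are consistent with a standard back-of-envelope rule: full fine-tuning with AdamW in BF16 costs about 16 bytes per parameter (2 for weights, 2 for gradients, 8 for the FP32 Adam moments, 4 for an FP32 master copy), while QLoRA stores 4-bit weights at ~0.5 bytes per parameter. A sketch that reproduces the table under those assumptions, plus an assumed ~20% activation overhead:

```python
import math

def train_gpus(params_b: float, mode: str = "full", vram_gb: float = 32.0) -> int:
    """Estimated MI100 count for fine-tuning. The 16 bytes/param (full,
    AdamW+BF16), 0.5 bytes/param (QLoRA, 4-bit), and 1.2x activation
    overhead are rule-of-thumb assumptions, not measurements."""
    bytes_per_param = 16.0 if mode == "full" else 0.5
    mem_gb = params_b * bytes_per_param * 1.2   # ~20% extra for activations
    return math.ceil(mem_gb / vram_gb)

for size in (7, 13, 70):
    print(size, train_gpus(size), train_gpus(size, "qlora"))
# 7B -> 5 full / 1 QLoRA;  13B -> 8 / 1;  70B -> 42 / 2
```

Real requirements vary with sequence length, batch size, and sharding strategy (e.g. ZeRO stages trade memory for communication), so treat these as lower bounds.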
Energy Efficiency
Estimated tokens/second per Watt for popular models
| Model | Efficiency | Precision |
|---|---|---|
| Mistral 7B | 0.56 t/s/W | FP8 |
| Qwen 2.5 7B | 0.54 t/s/W | FP8 |
| Llama 3.1 8B | 0.51 t/s/W | FP8 |
| Llama 3.1 70B | 0.06 t/s/W | FP8 |
| Qwen 2.5 72B | 0.06 t/s/W | FP8 |
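These figures line up with a simple memory-bandwidth roofline: single-stream decode reads every weight once per token, so tokens/s ≈ bandwidth / weight bytes, and dividing by TDP gives t/s/W. A sanity-check sketch — the parameter counts and the perfect-bandwidth assumption are ours, not the site's stated methodology:

```python
# MI100 figures from the spec sheet above.
BW_GBPS = 1229.0   # memory bandwidth, GB/s
TDP_W = 300.0      # board power, W

def tok_per_s_per_watt(params_b: float, bytes_per_param: float = 1.0) -> float:
    """Roofline estimate: decode reads all weights per token, so
    tokens/s = bandwidth / weight bytes (FP8 ~ 1 byte/param)."""
    tok_per_s = BW_GBPS / (params_b * bytes_per_param)
    return tok_per_s / TDP_W

print(round(tok_per_s_per_watt(7.3), 2))    # Mistral 7B (~7.3B): ~0.56
print(round(tok_per_s_per_watt(70.6), 2))   # Llama 3.1 70B (~70.6B): ~0.06
```

Real throughput falls short of the roofline at low batch sizes and exceeds it per-watt with batching, so these are order-of-magnitude checks rather than guarantees.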
Similar GPUs
| GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From |
|---|---|---|---|---|
| RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr |
| V100 32GB | 32 GB | 28.3 | 900 | $0.19/hr |
| TPU v4 | 32 GB | 275 | 1200 | $2.25/hr |
| TPU v6e (Trillium) | 32 GB | 460 | 1640 | $1.75/hr |
| Cloud AI 100 | 32 GB | 150 | 134 | $0.00/hr |