Qwen 2.5 VL 72B
Alibaba · dense · 72.7B parameters · 131,072-token context
Quality: 50.0
Architecture Details

| Attribute | Value |
|---|---|
| Type | Dense |
| Total Parameters | 72.7B |
| Active Parameters | 72.7B |
| Layers | 80 |
| Hidden Dimension | 8,192 |
| Attention Heads | 64 |
| KV Heads | 8 |
| Head Dimension | 128 |
| Vocab Size | 152,064 |
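The attention geometry above is internally consistent: the hidden dimension equals attention heads × head dimension, and the 64:8 ratio of query heads to KV heads indicates grouped-query attention with 8 query heads sharing each KV head. A quick check:

```python
# Consistency check on the attention geometry from the table above.
hidden_dim, heads, kv_heads, head_dim = 8192, 64, 8, 128
assert hidden_dim == heads * head_dim     # 64 x 128 = 8,192
print(heads // kv_heads)                  # 8 query heads per KV head (GQA)
```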
Memory Requirements

| Precision | Weights |
|---|---|
| BF16 | 145.4 GB |
| FP8 | 72.7 GB |
| INT4 | 36.4 GB |

KV-Cache per Token: 327,680 bytes (320 KiB)
Activation Estimate: 3.00 GB
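The weight and KV-cache figures follow directly from the architecture table. A minimal sketch of that arithmetic (the per-precision byte widths and the BF16 KV-cache assumption are standard conventions, not values stated on this page):

```python
# Memory arithmetic for Qwen 2.5 VL 72B, from the Architecture Details table.
PARAMS = 72.7e9          # total parameters
LAYERS = 80
KV_HEADS = 8
HEAD_DIM = 128

# Weight memory: parameters x bytes per parameter.
for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision}: {PARAMS * bytes_per_param / 1e9:.1f} GB")
    # -> 145.4 / 72.7 / 36.4 GB

# KV cache per token: K and V vectors for every layer, assuming BF16 (2 bytes).
kv_bytes = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV cache per token: {kv_bytes} bytes")  # 327,680 bytes (320 KiB)
```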
Fits on (single-node)
- B200 SXM (BF16)
- B100 SXM (BF16)
- GB200 NVL72, per GPU (BF16)
- GB300 NVL72, per GPU (BF16)
- H200 SXM (FP8)
- H100 SXM (INT4)
- H100 PCIe (INT4)
- H100 NVL (FP8)
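A fit check combines the weight footprint, the activation estimate, and the KV cache for however many tokens are held in context. A sketch under that model (GPU memory sizes are public hardware specs, not figures from this page: H200 SXM has 141 GB, H100 SXM has 80 GB):

```python
# Single-GPU fit check: weights + activations + KV cache for cached tokens.
KV_BYTES_PER_TOKEN = 327_680
ACTIVATION_GB = 3.00

def fits(weights_gb: float, gpu_gb: float, cached_tokens: int) -> bool:
    kv_gb = cached_tokens * KV_BYTES_PER_TOKEN / 1e9
    return weights_gb + ACTIVATION_GB + kv_gb <= gpu_gb

print(fits(72.7, 141, 131_072))  # FP8 on H200 SXM at full context -> True
print(fits(36.4, 80, 100_000))   # INT4 on H100 SXM, 100K cached tokens -> True
```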
GPU Recommendations
| GPU | Config | Score | Throughput | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|
| H200 SXM (optimal) | FP8 · 1 GPU · tensorrt-llm | 100/100 | 552.3 tok/s | $2,553 | $1.76 |
| B200 SXM (optimal) | FP8 · 1 GPU · tensorrt-llm | 98/100 | 560.0 tok/s | $4,261 | $2.90 |
| B100 SXM (optimal) | FP8 · 1 GPU · tensorrt-llm | 98/100 | 560.0 tok/s | $4,271 | $2.90 |
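Cost per million tokens is monthly cost divided by tokens generated per month. Assuming continuous operation over a 730-hour month (365 days / 12, a common billing convention not stated on this page) reproduces the table's figures:

```python
# Cost per million tokens = monthly cost / millions of tokens per month,
# assuming the GPU sustains the listed throughput around the clock.
HOURS_PER_MONTH = 730  # 365 * 24 / 12; reproduces the table's figures

def cost_per_m_tokens(cost_per_month: float, tok_per_s: float) -> float:
    m_tokens = tok_per_s * HOURS_PER_MONTH * 3600 / 1e6
    return cost_per_month / m_tokens

print(f"${cost_per_m_tokens(2553, 552.3):.2f}")  # H200 SXM -> $1.76
print(f"${cost_per_m_tokens(4261, 560.0):.2f}")  # B200 SXM -> $2.90
```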
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $0.90 | $0.90 | Cheapest |
Capabilities
Features
✓ Tool Use · ✓ Vision · ✓ Code · ✓ Math · ✗ Reasoning · ✓ Multilingual · ✓ Structured Output
Supported Frameworks
vllm · sglang · tgi · tensorrt-llm
Supported Precisions
BF16 (default) · FP8 · INT4
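As an illustration of the framework and precision support above, a minimal vLLM launch sketch. The model ID is assumed to be the standard Hugging Face name for this model, and the arguments shown are ordinary vLLM options; actual FP8 support depends on your GPU and vLLM version:

```python
# Minimal sketch: serving this model with vLLM's offline API, assuming the
# standard Hugging Face model ID and enough GPU memory for FP8 weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-72B-Instruct",  # assumed Hugging Face model ID
    quantization="fp8",                     # matches the H200 recommendation
    max_model_len=131_072,                  # full advertised context window
)
out = llm.generate(["Describe this model in one sentence."],
                   SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```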