
Qwen 2.5 72B

Alibaba · dense · 72.7B parameters · 131,072-token context

Quality: 84.0

Architecture Details

Type: Dense
Total Parameters: 72.7B
Active Parameters: 72.7B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 152,064

Memory Requirements

BF16 Weights: 145.4 GB
FP8 Weights: 72.7 GB
INT4 Weights: 36.4 GB
KV Cache per Token: 327,680 bytes
Activation Estimate: 2.50 GB
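
A quick sketch of where these figures come from, using the architecture details above. It assumes GB means 10^9 bytes, weight-only quantization (2 bytes per parameter for BF16, 1 for FP8, 0.5 for INT4), and a BF16 KV cache; framework overhead is ignored.

    # Reproduce the memory figures from the architecture table.
    params   = 72.7e9   # total parameters
    layers   = 80
    kv_heads = 8
    head_dim = 128

    bf16_gb = params * 2.0 / 1e9   # ~145.4 GB
    fp8_gb  = params * 1.0 / 1e9   # ~72.7 GB
    int4_gb = params * 0.5 / 1e9   # ~36.4 GB

    # KV cache per token: one key and one value vector per layer,
    # each with kv_heads * head_dim elements, stored in 2-byte BF16.
    kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2   # 327,680 bytes

    print(f"BF16 {bf16_gb:.1f} GB · FP8 {fp8_gb:.1f} GB · INT4 {int4_gb:.1f} GB")
    print(f"KV cache per token: {kv_bytes_per_token} bytes")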

Fits on (single-node)

B200 SXM: BF16
B100 SXM: BF16
GB200 NVL72 (per GPU): BF16
GB300 NVL72 (per GPU): BF16
H200 SXM: FP8
H100 SXM: INT4
H100 PCIe: INT4
H100 NVL: FP8
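
The list above can be reproduced with a simple fit check: weights plus the activation estimate must fit in a single GPU's memory, with the remainder left as KV-cache headroom. The per-GPU memory capacities in this sketch are assumptions based on vendor specifications, not values from this page.

    # Single-node fit check (sketch).
    GPU_MEMORY_GB = {   # assumed HBM capacities, not taken from this page
        "H100 SXM": 80, "H100 PCIe": 80, "H100 NVL": 94, "H200 SXM": 141,
        "B100 SXM": 192, "B200 SXM": 192,
        "GB200 NVL72 (per GPU)": 186, "GB300 NVL72 (per GPU)": 288,
    }
    WEIGHTS_GB = {"BF16": 145.4, "FP8": 72.7, "INT4": 36.4}
    ACTIVATIONS_GB = 2.5
    KV_BYTES_PER_TOKEN = 327_680

    def fits(gpu: str, precision: str) -> bool:
        """Weights + activation estimate fit on one GPU; headroom goes to KV cache."""
        return WEIGHTS_GB[precision] + ACTIVATIONS_GB <= GPU_MEMORY_GB[gpu]

    def kv_token_budget(gpu: str, precision: str) -> int:
        """Tokens of BF16 KV cache that fit in the remaining memory."""
        free_gb = GPU_MEMORY_GB[gpu] - WEIGHTS_GB[precision] - ACTIVATIONS_GB
        return int(free_gb * 1e9 / KV_BYTES_PER_TOKEN)

    print(fits("H200 SXM", "FP8"), kv_token_budget("H200 SXM", "FP8"))  # True, ~200k tokens
    print(fits("H200 SXM", "BF16"))                                     # False (147.9 GB > 141 GB)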

GPU Recommendations

H200 SXM (optimal) · FP8 · 1 GPU · tensorrt-llm
  Score: 100/100
  Throughput: 552.3 tok/s
  Cost/Month: $2,553
  Cost/M Tokens: $1.76

B200 SXM (optimal) · FP8 · 1 GPU · tensorrt-llm
  Score: 98/100
  Throughput: 560.0 tok/s
  Cost/Month: $4,261
  Cost/M Tokens: $2.90

B100 SXM (optimal) · FP8 · 1 GPU · tensorrt-llm
  Score: 98/100
  Throughput: 560.0 tok/s
  Cost/Month: $4,271
  Cost/M Tokens: $2.90
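
The Cost/M Tokens figures appear to follow directly from throughput and monthly cost, assuming the GPU sustains the quoted throughput around the clock (roughly 730 hours per month). A minimal sketch of that arithmetic:

    # Derive Cost/M Tokens from throughput and monthly GPU cost,
    # assuming ~730 hours/month at full utilization.
    HOURS_PER_MONTH = 730

    def cost_per_million_tokens(tokens_per_sec: float, cost_per_month: float) -> float:
        tokens_per_month = tokens_per_sec * 3600 * HOURS_PER_MONTH
        return cost_per_month / tokens_per_month * 1e6

    print(cost_per_million_tokens(552.3, 2553))   # H200 SXM -> ~$1.76
    print(cost_per_million_tokens(560.0, 4261))   # B200 SXM -> ~$2.90
    print(cost_per_million_tokens(560.0, 4271))   # B100 SXM -> ~$2.90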

API Pricing Comparison

Provider     Input $/M   Output $/M   Badges
together     $0.90       $0.90        Cheapest
fireworks    $0.90       $0.90
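
To compare API pricing against the self-hosted cost figures above, a workload's blended cost is just its input and output token counts weighted by the per-million prices; for example, with a hypothetical token mix:

    # Blended API cost for a given token mix, in dollars.
    def blended_cost(input_tokens: int, output_tokens: int,
                     in_price_per_m: float, out_price_per_m: float) -> float:
        return input_tokens / 1e6 * in_price_per_m + output_tokens / 1e6 * out_price_per_m

    # 10M input + 2M output tokens at $0.90 / $0.90 (together or fireworks):
    print(blended_cost(10_000_000, 2_000_000, 0.90, 0.90))   # $10.80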

Quality Benchmarks

MMLU: 85.3
HumanEval: 56.0
GSM8K: 91.6
MT-Bench: 86.0

Capabilities

Features

Tool Use, Vision, Code, Math, Reasoning, Multilingual, Structured Output

Supported Frameworks

vllm, sglang, tgi, tensorrt-llm

Supported Precisions

BF16 (default), FP8, INT4
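
As an illustration of the framework and precision support listed above, here is a minimal offline-inference sketch with vLLM. The Hugging Face model ID, the FP8 quantization setting, and the single-GPU configuration are assumptions (mirroring the H200 SXM recommendation), not values taken from this page.

    # Minimal vLLM sketch (assumed model ID: Qwen/Qwen2.5-72B-Instruct).
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-72B-Instruct",
        quantization="fp8",         # omit to load the default BF16 weights
        tensor_parallel_size=1,     # single GPU, per the H200 SXM recommendation
        max_model_len=131072,
    )

    outputs = llm.generate(
        ["Explain grouped-query attention in two sentences."],
        SamplingParams(max_tokens=128, temperature=0.7),
    )
    print(outputs[0].outputs[0].text)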

Similar Models