DeepSeek V2.5

DeepSeek · MoE · 236B parameters · 131,072 context

Quality: 78.0

Architecture Details

Type: MoE
Total Parameters: 236B
Active Parameters: 21B
Layers: 60
Hidden Dimension: 5,120
Attention Heads: 128
KV Heads: 1
Head Dimension: 128
Vocab Size: 100,015
Total Experts: 128
Active Experts: 6
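
The expert counts above describe top-k routing: each token is dispatched to 6 of the 128 experts, so only the active parameters (21B of 236B) are exercised per token. A minimal sketch of such a router, for illustration only; DeepSeek's actual gating (including any shared experts) may differ:

```python
import torch

# Illustrative top-k MoE routing matching the table above:
# 128 experts total, 6 active per token. Shapes are hypothetical.
NUM_EXPERTS = 128
TOP_K = 6
HIDDEN_DIM = 5120  # "Hidden Dimension" from the table

def route(hidden_states: torch.Tensor, gate: torch.nn.Linear):
    # hidden_states: (num_tokens, HIDDEN_DIM)
    logits = gate(hidden_states)                      # (num_tokens, 128)
    probs = logits.softmax(dim=-1)
    topk_probs, topk_ids = probs.topk(TOP_K, dim=-1)  # pick 6 of 128
    # Renormalize so the 6 selected expert weights sum to 1
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    return topk_probs, topk_ids

gate = torch.nn.Linear(HIDDEN_DIM, NUM_EXPERTS, bias=False)
weights, ids = route(torch.randn(4, HIDDEN_DIM), gate)
print(ids.shape)  # torch.Size([4, 6]): 6 active experts per token
```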

Memory Requirements

BF16 Weights: 472.0 GB
FP8 Weights: 236.0 GB
INT4 Weights: 118.0 GB
KV-Cache per Token: 30,720 bytes
Activation Estimate: 3.00 GB
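
These figures follow directly from the architecture table. A back-of-envelope sketch, assuming decimal gigabytes (1 GB = 1e9 bytes) and a BF16 KV cache at 2 bytes per value:

```python
# Reproducing the memory figures above from the architecture table.
# Plain back-of-envelope math, not a serving-framework estimate.
PARAMS = 236e9
LAYERS, KV_HEADS, HEAD_DIM = 60, 1, 128

for name, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{name} weights: {PARAMS * bytes_per_param / 1e9:.1f} GB")
# BF16: 472.0 GB, FP8: 236.0 GB, INT4: 118.0 GB

# KV cache per token: K and V, per layer, per KV head, 2 bytes each (BF16)
kv_bytes = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV cache per token: {kv_bytes} bytes")  # 30720
```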

Fits on (single-node)

B200 SXM · INT4
B100 SXM · INT4
GB200 NVL72 (per GPU) · INT4
GB300 NVL72 (per GPU) · INT4
H200 SXM · INT4
H100 NVL 94GB (per GPU pair) · INT4
Instinct MI300X · INT4
Instinct MI325X · INT4
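
As a rough illustration of how such a fit check works: the INT4 weights plus the activation estimate must leave headroom for the KV cache. The per-GPU memory capacities in the sketch below are assumptions for illustration, not values taken from this page:

```python
# Rough single-GPU fit check for the INT4 entries above.
# Capacities here are ASSUMED for illustration only.
INT4_WEIGHTS_GB = 118.0
ACTIVATIONS_GB = 3.0
KV_BYTES_PER_TOKEN = 30720

ASSUMED_CAPACITY_GB = {
    "B200 SXM": 192,
    "H200 SXM": 141,
    "Instinct MI300X": 192,
    "Instinct MI325X": 256,
}

for gpu, capacity in ASSUMED_CAPACITY_GB.items():
    headroom = capacity - INT4_WEIGHTS_GB - ACTIVATIONS_GB
    max_tokens = int(headroom * 1e9 / KV_BYTES_PER_TOKEN)
    print(f"{gpu}: {headroom:.0f} GB left for KV cache "
          f"(~{max_tokens:,} cached tokens)")
```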

GPU Recommendations

B200 SXM (optimal)
FP8 · 2 GPUs · tensorrt-llm
Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $8,522
Cost/M Tokens: $11.58

B100 SXM (optimal)
FP8 · 2 GPUs · tensorrt-llm
Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $8,541
Cost/M Tokens: $11.61

GB200 NVL72 (per GPU, optimal)
FP8 · 2 GPUs · tensorrt-llm
Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $12,337
Cost/M Tokens: $16.77
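
The Cost/M Tokens column is consistent with dividing Cost/Month by the tokens generated at the listed throughput. A sketch, assuming 24/7 utilization and an average month of 365/12 days:

```python
# Reproduces the Cost/M Tokens column from Cost/Month and Throughput,
# assuming continuous utilization and an average month of 365/12 days.
SECONDS_PER_MONTH = 86400 * 365 / 12  # ~2.63M seconds

def cost_per_million(cost_per_month: float, tok_per_s: float) -> float:
    million_tokens = tok_per_s * SECONDS_PER_MONTH / 1e6
    return cost_per_month / million_tokens

for gpu, monthly, tput in [("B200 SXM", 8522, 280.0),
                           ("B100 SXM", 8541, 280.0),
                           ("GB200 NVL72", 12337, 280.0)]:
    print(f"{gpu}: ${cost_per_million(monthly, tput):.2f}/M tokens")
# B200 SXM: $11.58, B100 SXM: $11.61, GB200 NVL72: $16.77
```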

API Pricing Comparison

deepseek: $0.14/M input · $0.28/M output · Cheapest
together: $0.80/M input · $0.80/M output
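
To compare providers on a concrete workload, multiply token volumes by the per-million rates. A minimal sketch; the 75M/25M input/output split is a hypothetical workload, not data from this page:

```python
# Monthly API cost from the per-million-token rates above.
# The example token volumes are hypothetical.
PRICES = {"deepseek": (0.14, 0.28), "together": (0.80, 0.80)}  # ($/M in, $/M out)

def monthly_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    in_price, out_price = PRICES[provider]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Example workload: 75M input + 25M output tokens per month
for p in PRICES:
    print(f"{p}: ${monthly_cost(p, 75_000_000, 25_000_000):.2f}/month")
# deepseek: $17.50/month, together: $80.00/month
```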

Quality Benchmarks

MMLU: 80.4
HumanEval: 55.0
GSM8K: 85.0
MT-Bench: 83.0

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4
