DeepSeek Coder V2 236B
DeepSeek · MoE · 236B parameters · 131,072 context
Quality: 50.0
Architecture Details
Type: MoE
Total Parameters: 236B
Active Parameters: 21B
Layers: 60
Hidden Dimension: 5,120
Attention Heads: 128
KV Heads: 1
Head Dimension: 128
Vocab Size: 100,015
Total Experts: 128
Active Experts: 6
Memory Requirements
BF16 Weights: 472.0 GB
FP8 Weights: 236.0 GB
INT4 Weights: 118.0 GB
KV-Cache per Token: 30,720 bytes
Activation Estimate: 3.00 GB
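These figures follow from the architecture details above: weight memory is the parameter count times bytes per weight, and the per-token KV cache is 2 (K and V) × layers × KV heads × head dimension × 2 bytes at BF16. A minimal Python check (the KV formula is the standard MHA/GQA convention; DeepSeek V2's latent attention stores a compressed cache in practice, so read this as the tabulated convention rather than the exact runtime footprint):

```python
# Recompute the memory table from the architecture details above.
TOTAL_PARAMS = 236e9
LAYERS, KV_HEADS, HEAD_DIM = 60, 1, 128

# Weights: parameter count x bytes per weight (decimal GB).
for precision, bytes_per_weight in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{precision} weights: {TOTAL_PARAMS * bytes_per_weight / 1e9:.1f} GB")

# KV cache per token: K and V, per layer, per KV head, at 2 bytes (BF16).
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV cache per token: {kv_bytes_per_token:,} bytes")  # 30,720
```

The printed values (472.0 GB, 236.0 GB, 118.0 GB, 30,720 bytes) match the table above.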
Fits on (single-node)
INT4: B200 SXM, B100 SXM, GB200 NVL72 (per GPU), GB300 NVL72 (per GPU), H200 SXM, H100 NVL 94GB (per GPU pair), Instinct MI300X, Instinct MI325X
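Only INT4 appears because 118 GB of weights plus the ~3 GB activation estimate fits within a single device's memory on these parts, while FP8's 236 GB does not. A sketch of that fit check; the device capacities and the 90% usable-memory headroom are my assumptions, not values from this page:

```python
# Hypothetical single-device fit check: weights + activations must fit
# in usable device memory. Capacities below are assumed, not from the page.
GPU_MEMORY_GB = {
    "B200 SXM": 192,
    "H200 SXM": 141,
    "H100 NVL 94GB (per GPU pair)": 188,  # 2 x 94 GB
    "Instinct MI300X": 192,
    "Instinct MI325X": 256,
}
WEIGHTS_GB = {"BF16": 472.0, "FP8": 236.0, "INT4": 118.0}
ACTIVATION_GB = 3.0
HEADROOM = 0.9  # assume ~10% reserved for KV cache and runtime overhead

for gpu, capacity in GPU_MEMORY_GB.items():
    fits = [p for p, w in WEIGHTS_GB.items()
            if w + ACTIVATION_GB <= HEADROOM * capacity]
    print(f"{gpu}: {', '.join(fits) or 'none'}")  # INT4 only, matching the list
```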
GPU Recommendations
| GPU | Rating | Config | Score | Throughput | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| B200 SXM | optimal | FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | $8,522 | $11.58 |
| B100 SXM | optimal | FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | $8,541 | $11.61 |
| GB200 NVL72 (per GPU) | optimal | FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | $12,337 | $16.77 |
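The Cost/M Tokens column is consistent with monthly cost divided by tokens generated per month at the quoted throughput. A sketch, assuming sustained full utilization and an average month of 365.25/12 ≈ 30.44 days; results land within a cent or two of the table, so the site's exact month length may differ slightly:

```python
# Reproduce Cost/M Tokens from Cost/Month and sustained throughput.
# Assumes full utilization and an average month of 365.25/12 days.
SECONDS_PER_MONTH = 365.25 / 12 * 86_400

def cost_per_million(cost_per_month: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return cost_per_month / (tokens_per_month / 1e6)

for gpu, monthly in [("B200 SXM", 8522), ("B100 SXM", 8541), ("GB200 NVL72", 12337)]:
    print(f"{gpu}: ${cost_per_million(monthly, 280.0):.2f} per M tokens")
```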
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| deepseek | $0.14 | $0.28 | Cheapest |
| together | $0.90 | $0.90 | |
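Which provider is cheaper per request depends on your input:output mix, since deepseek charges half as much for input as for output while together prices both equally. A quick blended-rate sketch (the 3:1 ratio is a hypothetical workload, not a figure from this page):

```python
# Blended $/M tokens for a given input:output token ratio (ratio assumed).
def blended(input_per_m: float, output_per_m: float, in_ratio: float = 3.0) -> float:
    return (in_ratio * input_per_m + output_per_m) / (in_ratio + 1)

print(f"deepseek: ${blended(0.14, 0.28):.3f} per M tokens")  # $0.175
print(f"together: ${blended(0.90, 0.90):.3f} per M tokens")  # $0.900
```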
Capabilities
Features
✓ Tool Use · ✗ Vision · ✓ Code · ✓ Math · ✗ Reasoning · ✓ Multilingual · ✓ Structured Output
Supported Frameworks
vllm, sglang, tensorrt-llm
Supported Precisions
BF16 (default), FP8, INT4
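Any of the three listed frameworks can serve the model. As one illustration, a minimal offline-inference sketch with vLLM; the Hugging Face model ID, tensor-parallel degree, and context cap are assumptions to adapt to your deployment:

```python
# Minimal vLLM sketch (offline inference). Model ID, parallelism degree,
# and context cap are assumptions; adjust for your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-Coder-V2-Instruct",  # assumed HF model ID
    tensor_parallel_size=8,    # e.g. one 8-GPU node
    max_model_len=131072,      # matches the advertised context window
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["Write a Python function that checks for primality."], params)
print(outputs[0].outputs[0].text)
```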