DeepSeek MoE 16B
DeepSeek · MoE · 16.4B parameters · 4,096-token context
Quality: 50.0
Architecture Details
Type: MoE
Total Parameters: 16.4B
Active Parameters: 2.8B
Layers: 28
Hidden Dimension: 2,048
Attention Heads: 16
KV Heads: 16
Head Dimension: 128
Vocab Size: 102,400
Total Experts: 64
Active Experts: 6
Memory Requirements
BF16 Weights: 32.8 GB
FP8 Weights: 16.4 GB
INT4 Weights: 8.2 GB
KV-Cache per Token: 229,376 bytes
Activation Estimate: 0.50 GB
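These figures follow directly from the architecture table above. A minimal sketch, assuming 1 GB = 10^9 bytes and a BF16 (2-byte) KV cache:

```python
# Reproduce the memory figures from the architecture table (illustrative).
TOTAL_PARAMS = 16.4e9   # total parameters
LAYERS = 28             # transformer layers
KV_HEADS = 16           # KV heads per layer
HEAD_DIM = 128          # dimension per head

def weight_gb(bytes_per_param: float) -> float:
    """Weight footprint in GB for a given precision."""
    return TOTAL_PARAMS * bytes_per_param / 1e9

def kv_cache_bytes_per_token(bytes_per_elem: int = 2) -> int:
    """One K and one V vector per layer, stored in BF16 (2 bytes)."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem

print(weight_gb(2.0))              # BF16 -> 32.8 GB
print(weight_gb(1.0))              # FP8  -> 16.4 GB
print(weight_gb(0.5))              # INT4 ->  8.2 GB
print(kv_cache_bytes_per_token())  # -> 229,376 bytes per token
```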
Fits on (single-node)
B200 SXM (BF16) · B100 SXM (BF16) · GB200 NVL72 per GPU (BF16) · GB300 NVL72 per GPU (BF16) · H200 SXM (BF16) · H100 SXM (BF16) · H100 PCIe (BF16) · H100 NVL (BF16)
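A rough single-GPU fit check against an 80 GB-class card, using the BF16 figures above; this is a sketch only, since real deployments also reserve headroom for the runtime, CUDA graphs, and fragmentation:

```python
# Rough check: do BF16 weights + KV cache + activations fit in one GPU's HBM?
WEIGHTS_GB = 32.8              # BF16 weights
KV_BYTES_PER_TOKEN = 229_376   # from the memory table
ACTIVATION_GB = 0.50           # activation estimate

def fits(gpu_hbm_gb: float, context_tokens: int = 4096, batch: int = 1) -> bool:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens * batch / 1e9
    return WEIGHTS_GB + kv_gb + ACTIVATION_GB <= gpu_hbm_gb

print(fits(80))  # 80 GB H100-class: True (~34.2 GB needed at 4,096 context)
print(fits(24))  # 24 GB consumer card: False at BF16
```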
GPU Recommendations
H100 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $1,794
Cost/M Tokens: $0.65

H100 PCIe (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $1,794
Cost/M Tokens: $0.65

RTX A6000 (optimal)
BF16 · 1 GPU · vllm
Score: 100/100
Throughput: 740.5 tok/s
Cost/Month: $465
Cost/M Tokens: $0.24
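The cost-per-million-token figures follow from the monthly GPU cost and the quoted throughput, assuming full utilization over a 30-day month. A quick check (the H100 throughput is rounded to 1.1K tok/s, hence the small gap to $0.65):

```python
# Back-of-the-envelope check on cost per million tokens (assumes 100% utilization).
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def cost_per_million_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_million_tokens(1794, 1100), 2))  # H100 FP8  -> ~0.63
print(round(cost_per_million_tokens(465, 740.5), 2))  # RTX A6000 -> ~0.24
```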
API Pricing Comparison
No API pricing data available for this model.
Capabilities
Features
✗ Tool Use · ✗ Vision · ✓ Code · ✓ Math · ✗ Reasoning · ✗ Multilingual · ✗ Structured Output
Supported Frameworks
vllm · sglang · tgi
Supported Precisions
BF16 (default) · FP8 · INT4
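A minimal serving sketch with vllm at the default BF16 precision. The Hugging Face checkpoint ID is an assumption; substitute the checkpoint you actually deploy:

```python
# Minimal vLLM offline-inference sketch (assumed checkpoint ID, BF16 default).
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/deepseek-moe-16b-chat",  # assumption: adjust to your checkpoint
    dtype="bfloat16",                           # default precision listed above
    trust_remote_code=True,                     # DeepSeek MoE ships custom modeling code
    max_model_len=4096,                         # matches the 4,096-token context
)

outputs = llm.generate(
    ["Write a Python function that reverses a string."],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```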