DeepSeek MoE 16B
DeepSeek · MoE · 16.4B parameters · 4,096-token context
Parameters: 16.4B
Context Window: 4K tokens
Architecture: MoE
Best GPU: H100 SXM
Intelligence Brief
DeepSeek MoE 16B is a 16.4B parameter Mixture-of-Experts model (64 experts, 6 active per token) from DeepSeek, using Multi-Head Attention (MHA) with 28 layers and a 2,048-dimensional hidden state. With a 4,096-token context window, it is aimed at code and math workloads. For self-hosted inference, the H100 SXM delivers the best throughput at $1794/month.
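For orientation, the sketch below loads the model with Hugging Face Transformers. The checkpoint ID (deepseek-ai/deepseek-moe-16b-base), dtype, and prompt are illustrative assumptions; trust_remote_code is enabled on the assumption that the checkpoint ships custom MoE modeling code.

```python
# Minimal sketch: load DeepSeek MoE 16B with Hugging Face Transformers.
# Assumptions: Hub ID "deepseek-ai/deepseek-moe-16b-base" and a GPU with
# roughly 33+ GB of free VRAM for BF16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-moe-16b-base"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 2 bytes/param -> ~32.8 GB of weights
    device_map="auto",
    trust_remote_code=True,       # assumes custom MoE modeling code on the Hub
)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```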
Architecture Details
Memory Requirements
BF16 Weights: 32.8 GB
FP8 Weights: 16.4 GB
INT4 Weights: 8.2 GB
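These figures are simply the parameter count multiplied by the bytes per parameter at each precision (using 1 GB = 10^9 bytes); a minimal sketch of the arithmetic:

```python
# Weight-memory arithmetic behind the table above (1 GB = 1e9 bytes).
PARAMS = 16.4e9
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")  # bf16: 32.8, fp8: 16.4, int4: 8.2
```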
GPU Compatibility Matrix
DeepSeek MoE 16B is compatible with 76% of GPU configurations across 41 GPUs at 3 precision levels.
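A compatibility check of this kind reduces to whether weights, KV cache, and activation overhead fit in a GPU's usable VRAM. The sketch below illustrates the idea; the GPU list and the 10% headroom factor are assumptions for illustration, not the exact rule behind the 76% figure.

```python
# Minimal sketch of a GPU-fit check: weights + KV cache + activations must fit
# in usable VRAM. GPU sizes and the 90% usable-memory factor are assumptions.
WEIGHT_GB = {"bf16": 32.8, "fp8": 16.4, "int4": 8.2}
KV_BYTES_PER_TOKEN = 229_376       # per-token KV cache (see FAQ below)
ACTIVATIONS_GB = 0.5
GPUS_GB = {"H100 SXM": 80, "A100 80GB": 80, "A10G": 24}

def fits(precision: str, gpu: str, context_tokens: int = 4096) -> bool:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens / 1e9
    need = WEIGHT_GB[precision] + kv_gb + ACTIVATIONS_GB
    return need <= GPUS_GB[gpu] * 0.9   # leave ~10% headroom

print(fits("fp8", "A10G"))    # True:  16.4 + ~0.9 + 0.5 GB fits in ~21.6 GB
print(fits("bf16", "A10G"))   # False: ~34.2 GB does not fit
```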
GPU Recommendations
FP8 · 1 GPU · tensorrt-llm (Score: 100/100)
Throughput: 1.1K tok/s · Latency (ITL): 1.0 ms · Est. TTFT: 0 ms
Cost/Month: $1794 · Cost/M Tokens: $0.65
BF16 · 1 GPU · vllm (Score: 100/100)
Throughput: 740.5 tok/s · Latency (ITL): 1.4 ms · Est. TTFT: 0 ms
Cost/Month: $465 · Cost/M Tokens: $0.24
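The Cost/M Tokens column follows from the monthly GPU cost divided by the tokens generated in a month at the listed throughput; a minimal sketch, assuming a 30-day month at full, sustained utilization:

```python
# Cost per million tokens = monthly GPU cost / millions of tokens generated per month.
# Assumes a 30-day month and full utilization.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def cost_per_million(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_million(1794, 1100), 2))   # ~0.63 (listed as $0.65)
print(round(cost_per_million(465, 740.5), 2))   # ~0.24
```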
Deployment Options
API Deployment: No API pricing available
Single GPU: H100 SXM · $1794/mo · Min VRAM: 16 GB
Multi-GPU: A10G x2 · 788.8 tok/s · Tensor parallel (TP) · $569/mo
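For the two-GPU option, the inference engine shards the model across devices with tensor parallelism. A minimal vLLM sketch, assuming two visible 24 GB GPUs and the deepseek-ai/deepseek-moe-16b-base checkpoint:

```python
# Minimal sketch: serve DeepSeek MoE 16B across 2 GPUs with vLLM tensor parallelism.
# Assumptions: 2 x A10G (24 GB each) visible, Hub ID "deepseek-ai/deepseek-moe-16b-base".
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/deepseek-moe-16b-base",  # assumed checkpoint ID
    tensor_parallel_size=2,     # shard weights and attention across both GPUs
    dtype="bfloat16",
    max_model_len=4096,         # the model's full context window
    trust_remote_code=True,
)

outputs = llm.generate(
    ["Write a Python function that checks whether a number is prime."],
    SamplingParams(max_tokens=128, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```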
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Throughput by GPU
VRAM Breakdown (H100 SXM, FP8)
Precision Impact
bf16: 32.8 GB weights/GPU
fp8: 16.4 GB weights/GPU · ~1.1K tok/s
int4: 8.2 GB weights/GPU
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy DeepSeek MoE 16B
Self-Hosted Infrastructure
Similar Models
CodeGen2 16B
16B params · dense
Quality: 50
DeepSeek V2 Lite
15.7B params · moe
Quality: 50
OctoCoder 15B
15.5B params · dense
Quality: 50
StarCoder2 15B
15.5B params · dense
Quality: 42
from $0.30/M
Nemotron 15B
15B params · dense
Quality: 72
from $0.30/M
Frequently Asked Questions
How much VRAM does DeepSeek MoE 16B need for inference?
DeepSeek MoE 16B requires approximately 32.8 GB of VRAM at BF16 precision, 16.4 GB at FP8, or 8.2 GB at INT4 quantization. Additional VRAM is needed for the KV cache (229,376 bytes per token with 16-bit K/V elements) and activations (~0.50 GB).
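The per-token KV-cache figure follows from the architecture described above (MHA, 28 layers, 2,048 hidden dims) with 16-bit K/V elements; a minimal sketch:

```python
# KV-cache sizing for DeepSeek MoE 16B (MHA, so the full hidden size is cached).
# Assumes 16-bit (2-byte) K/V elements, matching the 229,376 bytes/token figure.
LAYERS = 28
HIDDEN = 2048
BYTES_PER_ELEM = 2          # BF16/FP16 KV cache
KV_TENSORS = 2              # one K and one V tensor per layer

bytes_per_token = KV_TENSORS * LAYERS * HIDDEN * BYTES_PER_ELEM
print(bytes_per_token)                     # 229376

# Full 4,096-token context for a single sequence:
print(bytes_per_token * 4096 / 1e9, "GB")  # ~0.94 GB
```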
What is the best GPU for DeepSeek MoE 16B?
The top recommended GPU for DeepSeek MoE 16B is the H100 SXM using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $1794/month ($0.65/M tokens). Score: 100/100.
How much does DeepSeek MoE 16B inference cost?
DeepSeek MoE 16B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.