
DeepSeek MoE 16B

DeepSeek · MoE · 16.4B parameters · 4,096-token context

Quality
50.0

Parameters

16.4B

Context Window

4K tokens

Architecture

MoE

Best GPU

H100 SXM

Intelligence Brief

DeepSeek MoE 16B is a 16.4B-parameter Mixture-of-Experts model (64 experts, 6 active per token) from DeepSeek, featuring Multi-Head Attention (MHA), 28 layers, and a hidden dimension of 2,048. With a 4,096-token context window, it supports code and math tasks. For self-hosted inference, the H100 SXM delivers optimal throughput at an estimated $1794/month.

Architecture Details

Type: MoE
Total Parameters: 16.4B
Active Parameters: 2.8B
Layers: 28
Hidden Dimension: 2,048
Attention Heads: 16
KV Heads: 16
Head Dimension: 128
Vocab Size: 102,400
Total Experts: 64
Active Experts: 6
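
These counts are mutually consistent: with roughly 1.4B parameters living outside the experts (attention, embeddings, norms — a figure inferred here from the table, not quoted by the model card), routing 6 of 64 experts lands at the listed 2.8B active parameters. A minimal back-of-envelope sketch:

```python
# Back-of-envelope check of the MoE parameter counts above.
# The ~1.4B "shared" figure is inferred for illustration, not from the model card.
total_params   = 16.4e9
active_experts = 6
total_experts  = 64

shared_params = 1.4e9                          # assumed non-expert parameters
expert_params = total_params - shared_params   # parameters inside the experts

# Per token, only the routed fraction of expert parameters is exercised.
active_params = shared_params + expert_params * active_experts / total_experts
print(f"{active_params / 1e9:.1f}B active parameters")   # ≈ 2.8B, matching the table
```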

Memory Requirements

BF16 Weights

32.8 GB

FP8 Weights

16.4 GB

INT4 Weights

8.2 GB

KV-Cache per Token: 229,376 bytes
Activation Estimate: 0.50 GB
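
The weight and KV-cache figures follow directly from the architecture table; a minimal sketch of the arithmetic, assuming 1 GB = 10^9 bytes and a BF16 KV-cache:

```python
# Reproduce the memory figures from the architecture table (1 GB = 1e9 bytes).
params   = 16.4e9
layers   = 28
kv_heads = 16
head_dim = 128

bytes_per_param = {"BF16": 2, "FP8": 1, "INT4": 0.5}
for precision, nbytes in bytes_per_param.items():
    print(f"{precision}: {params * nbytes / 1e9:.1f} GB weights")
# BF16: 32.8 GB · FP8: 16.4 GB · INT4: 8.2 GB

# KV-cache per token: keys + values for every layer, stored here in BF16 (2 bytes).
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2
print(kv_bytes_per_token)   # 229376 bytes, i.e. ~0.23 MB per token of context
```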

GPU Compatibility Matrix

DeepSeek MoE 16B is compatible with 76% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
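
The fit ratings reduce to comparing a GPU's VRAM against the weight footprint at the chosen precision plus KV-cache, activation, and framework headroom. A hedged sketch of that check, using the per-token KV-cache size from the memory section (illustrative thresholds, not the site's exact scoring):

```python
# Rough single-GPU fit check (illustrative, not the site's exact scoring).
def fits(vram_gb, weights_gb, context_tokens=4096,
         kv_bytes_per_token=229_376, activations_gb=0.5, overhead_gb=0.8):
    kv_gb = context_tokens * kv_bytes_per_token / 1e9
    needed = weights_gb + kv_gb + activations_gb + overhead_gb
    return needed <= vram_gb, needed

ok, needed = fits(vram_gb=48, weights_gb=32.8)   # RTX A6000 at BF16
print(ok, f"{needed:.1f} GB")                    # True, ~35.0 GB
ok, needed = fits(vram_gb=24, weights_gb=32.8)   # single A10G at BF16
print(ok, f"{needed:.1f} GB")                    # False — needs FP8/INT4 or more GPUs
```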

GPU Recommendations

H100 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

1.1K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$1794

Cost/M Tokens

$0.65

H100 PCIe · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

1.1K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$1794

Cost/M Tokens

$0.65

RTX A6000 · optimal

BF16 · 1 GPU · vllm

100/100

score

Throughput

740.5 tok/s

Latency (ITL)

1.4ms

Est. TTFT

0ms

Cost/Month

$465

Cost/M Tokens

$0.24

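The cost-per-million-token figures in these cards follow from the monthly GPU price divided by sustained token output; a quick sketch of that conversion, assuming full utilization over a 30-day month (rounding in the quoted throughput accounts for the small gap from the listed $0.65):

```python
# Convert a monthly GPU rental into $/million tokens (assumes 100% utilization).
def cost_per_million_tokens(monthly_usd, tokens_per_second):
    tokens_per_month = tokens_per_second * 3600 * 24 * 30
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_million_tokens(1794, 1100):.2f}/M")   # H100 SXM  -> ~$0.63
print(f"${cost_per_million_tokens(465, 740.5):.2f}/M")   # RTX A6000 -> ~$0.24
```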

Deployment Options

API

API Deployment

No API pricing available

Self-Hosted

Single GPU

H100 SXM

$1794/mo

Min VRAM: 16 GB

Scale

Multi-GPU

A10G x2

788.8 tok/s

TP · $569/mo
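
The scale-out option shards weights across GPUs with tensor parallelism (TP); a minimal sketch of the per-GPU footprint for the A10G x2 pairing, assuming BF16 weights and an even shard:

```python
# Per-GPU weight footprint under tensor parallelism (even sharding assumed).
weights_gb = 32.8    # BF16 weights for the full model
tp_degree  = 2       # A10G x2
a10g_vram  = 24      # GB per A10G

per_gpu_weights = weights_gb / tp_degree
headroom = a10g_vram - per_gpu_weights
print(f"{per_gpu_weights:.1f} GB weights per GPU, {headroom:.1f} GB left "
      "for KV-cache, activations and framework overhead")
```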

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

H100 SXM
1.1K tok/s
H100 PCIe
1.1K tok/s
RTX A6000
740.5 tok/s

VRAM Breakdown (H100 SXM, FP8)

Weights 16.4 GB · KV-Cache 1.9 GB · Activations 4.0 GB · Overhead 0.8 GB

Precision Impact

bf16

32.8 GB

weights/GPU

fp8

16.4 GB

weights/GPU

~1.1K tok/s

int4

8.2 GB

weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models


Nemotron 15B

15B params · dense

Quality: 72

from $0.30/M

Higher quality

Frequently Asked Questions

How much VRAM does DeepSeek MoE 16B need for inference?

DeepSeek MoE 16B requires approximately 32.8 GB of VRAM at BF16 precision, 16.4 GB at FP8, or 8.2 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (229,376 bytes per token) and activations (~0.50 GB).

What is the best GPU for DeepSeek MoE 16B?

The top recommended GPU for DeepSeek MoE 16B is the H100 SXM using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $1794/month ($0.65/M tokens). Score: 100/100.

How much does DeepSeek MoE 16B inference cost?

DeepSeek MoE 16B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.