
DeepSeek V2 Lite

DeepSeek · MoE · 15.7B parameters · 32,768 context

Quality: 50.0

Architecture Details

Type: MoE
Total Parameters: 15.7B
Active Parameters: 2.4B
Layers: 27
Hidden Dimension: 2,048
Attention Heads: 16
KV Heads: 16
Head Dimension: 128
Vocab Size: 102,400
Total Experts: 64
Active Experts: 6

Memory Requirements

BF16 Weights: 31.4 GB
FP8 Weights: 15.7 GB
INT4 Weights: 7.8 GB
KV-Cache per Token: 221,184 bytes (~216 KiB)
Activation Estimate: 0.50 GB
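These figures follow directly from the architecture table. The sketch below reproduces them, assuming raw parameter storage with no overhead and a plain per-head BF16 KV cache (2 bytes per element); a served engine may use MLA compression or cache quantization and land lower.

```python
# Reproduce the memory figures above from the architecture numbers.
# Assumptions: raw weight storage (params * bits / 8, no overhead) and a
# standard BF16 KV cache with one K and one V entry per layer per token.

TOTAL_PARAMS = 15.7e9
LAYERS = 27
KV_HEADS = 16
HEAD_DIM = 128

for name, bits in [("BF16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name} weights: {TOTAL_PARAMS * bits / 8 / 1e9:.1f} GB")

# 2 tensors (K and V) * layers * kv_heads * head_dim * 2 bytes (BF16)
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV cache per token: {kv_bytes_per_token:,} bytes")  # 221,184
```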

Fits on (single-node)

B200 SXM · BF16
B100 SXM · BF16
GB200 NVL72 (per GPU) · BF16
GB300 NVL72 (per GPU) · BF16
H200 SXM · BF16
H100 SXM · BF16
H100 PCIe · BF16
H100 NVL · BF16
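To sanity-check the single-node fit, the sketch below subtracts the BF16 weights and activation estimate from a few per-GPU memory capacities and converts the headroom into cacheable tokens. The capacities are assumptions taken from vendor spec sheets, and real engines reserve extra memory for buffers and CUDA graphs, so treat the results as upper bounds.

```python
# Rough single-GPU fit check for the BF16 configuration: weights plus the
# activation estimate, with the remainder available for KV cache.
# GPU capacities below are public spec-sheet values (assumed, not from
# this page); real serving stacks reserve additional memory.

WEIGHTS_GB = 31.4          # BF16 weights from the table above
ACTIVATIONS_GB = 0.50      # activation estimate from the table above
KV_BYTES_PER_TOKEN = 221_184

GPU_MEMORY_GB = {
    "H100 SXM": 80,
    "H100 NVL": 94,
    "H200 SXM": 141,
    "B200 SXM": 192,
    "RTX A6000": 48,
}

for gpu, mem_gb in GPU_MEMORY_GB.items():
    free_gb = mem_gb - WEIGHTS_GB - ACTIVATIONS_GB
    max_cached_tokens = int(free_gb * 1e9 / KV_BYTES_PER_TOKEN)
    print(f"{gpu}: fits={free_gb > 0}, ~{max_cached_tokens:,} cacheable tokens")
```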

GPU Recommendations

H100 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $1,794
Cost/M Tokens: $0.65

H100 PCIe (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $1,794
Cost/M Tokens: $0.65

RTX A6000 (optimal)
BF16 · 1 GPU · vllm
Score: 100/100
Throughput: 864.0 tok/s
Cost/Month: $465
Cost/M Tokens: $0.20
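The Cost/M Tokens column is approximately the monthly cost divided by tokens generated at the listed throughput. The sketch below assumes 100% sustained utilization over a 30-day month; it lands close to, but not exactly on, the listed figures, so the source presumably applies its own utilization or rounding assumptions.

```python
# Naive cost-per-million-tokens estimate from the cards above, assuming
# 100% sustained utilization over a 30-day month.

SECONDS_PER_MONTH = 30 * 24 * 3600

def cost_per_million_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month_m = tok_per_s * SECONDS_PER_MONTH / 1e6
    return monthly_usd / tokens_per_month_m

print(f"H100 SXM (FP8):   ${cost_per_million_tokens(1794, 1100):.2f}/M tok")  # ~$0.63 vs listed $0.65
print(f"RTX A6000 (BF16): ${cost_per_million_tokens(465, 864):.2f}/M tok")    # ~$0.21 vs listed $0.20
```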

API Pricing Comparison

No API pricing data available for this model.

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi
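As a starting point for the frameworks listed above, here is a minimal vLLM serving sketch. The Hugging Face repo id and sampling values are assumptions to verify against your setup, not a configuration taken from this page.

```python
# Minimal vLLM sketch for this model. Assumed repo id: deepseek-ai/DeepSeek-V2-Lite.
# trust_remote_code is needed if your vLLM version loads DeepSeek's custom
# modeling/tokenizer code from the repo.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",  # assumed HF repo id
    trust_remote_code=True,
    dtype="bfloat16",        # default precision listed below
    max_model_len=32768,     # matches the 32,768-token context
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain mixture-of-experts routing in one paragraph."], params)
print(outputs[0].outputs[0].text)
```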

Supported Precisions

BF16 (default) · FP8 · INT4
