
DeepSeek Coder V2 236B

DeepSeek · MoE · 236B parameters · 131,072-token context

Quality: 50.0
Parameters: 236B
Context Window: 128K tokens
Architecture: MoE
Best GPU: B200 SXM
Cheapest API: $0.28/M

Intelligence Brief

DeepSeek Coder V2 236B is a 236B-parameter Mixture-of-Experts model (128 experts, 6 active) from DeepSeek, featuring Grouped Query Attention (GQA) with 60 layers and a 5,120 hidden dimension. With a 131,072-token context window, it supports tool use, structured output, code, math, and multilingual tasks. The most cost-effective API deployment is via deepseek at $0.28/M output tokens. For self-hosted inference, a 2x B200 SXM configuration delivers optimal throughput at roughly $8522/month.

Architecture Details

Type: MoE
Total Parameters: 236B
Active Parameters: 21B
Layers: 60
Hidden Dimension: 5,120
Attention Heads: 128
KV Heads: 1
Head Dimension: 128
Vocab Size: 100,015
Total Experts: 128
Active Experts: 6

Memory Requirements

BF16 Weights: 472.0 GB
FP8 Weights: 236.0 GB
INT4 Weights: 118.0 GB
KV-Cache per Token: 30,720 bytes
Activation Estimate: 3.00 GB
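
These figures follow directly from the architecture table above. A minimal sketch of the arithmetic in Python, assuming 2 bytes per parameter for BF16, 1 for FP8, 0.5 for INT4, and a BF16 KV-cache (2 bytes per element):

```python
# Back-of-envelope memory arithmetic for DeepSeek Coder V2 236B.
# Assumptions: 2 bytes/param (BF16), 1 (FP8), 0.5 (INT4), BF16 KV-cache.

TOTAL_PARAMS = 236e9   # total parameters (MoE, all experts)
LAYERS = 60
KV_HEADS = 1
HEAD_DIM = 128
KV_BYTES_PER_ELEM = 2  # BF16 cache entries

def weight_gb(bytes_per_param: float) -> float:
    return TOTAL_PARAMS * bytes_per_param / 1e9

def kv_cache_bytes_per_token() -> int:
    # One K and one V vector per layer: 2 * layers * kv_heads * head_dim * bytes
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * KV_BYTES_PER_ELEM

print(f"BF16 weights: {weight_gb(2):.1f} GB")    # ~472.0 GB
print(f"FP8  weights: {weight_gb(1):.1f} GB")    # ~236.0 GB
print(f"INT4 weights: {weight_gb(0.5):.1f} GB")  # ~118.0 GB
print(f"KV-cache per token: {kv_cache_bytes_per_token()} bytes")  # 30,720
```

The single KV head is why the per-token cache is so small relative to the weights.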

Fits on (multi-GPU with Tensor Parallelism)

Multi-GPU configurations use Tensor Parallelism (TP) to split model layers across GPUs. Requires NVLink or NVSwitch interconnect for optimal performance.
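
As a concrete illustration, here is a minimal sketch of a 2-way tensor-parallel FP8 launch with vLLM (one of the supported frameworks listed below). The Hugging Face model ID and the FP8 quantization mode are assumptions that depend on your checkpoint and vLLM version:

```python
# Sketch: 2-way tensor-parallel FP8 serving with vLLM.
# Assumptions: the model ID and that your vLLM build supports FP8 quantization.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-Coder-V2-Instruct",  # assumed HF model ID
    tensor_parallel_size=2,    # split layers across 2 GPUs over NVLink/NVSwitch
    quantization="fp8",        # roughly halves weight memory vs. BF16
    max_model_len=131072,      # full 128K context window
    trust_remote_code=True,
)

outputs = llm.generate(
    ["Write a Python function that checks whether a string is a palindrome."],
    SamplingParams(max_tokens=256, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```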

GPU Compatibility Matrix

DeepSeek Coder V2 236B is compatible with 8% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (full), FP8 (half), INT4 (quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94GB (per GPU pair) 188 GB · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

GPU Recommendations

B200 SXM (optimal) — FP8 · 2 GPUs · tensorrt-llm · Score: 100/100
Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $8522
Cost/M Tokens: $11.58

B100 SXM (optimal) — FP8 · 2 GPUs · tensorrt-llm · Score: 100/100
Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $8541
Cost/M Tokens: $11.61

GB200 NVL72 (per GPU) (optimal) — FP8 · 2 GPUs · tensorrt-llm · Score: 100/100
Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $12337
Cost/M Tokens: $16.77

Deployment Options

API: deepseek — $0.28/M output tokens

Self-Hosted (Single GPU): B200 NVL (pair) — $9965/mo · Min VRAM: 236 GB

Scale (Multi-GPU): B200 SXM x2 — 280.0 tok/s · TP · $8522/mo

API Pricing Comparison

Provider · Input $/M · Output $/M · Badges
deepseek · $0.14 · $0.28 · Cheapest
together · $0.90 · $0.90

Cost Analysis

Provider · Input $/M · Output $/M · ~Monthly Cost
deepseek (Best Value) · $0.14 · $0.28 · $2
together · $0.90 · $0.90 · $9

Cost per 1,000 Requests

Short (500 tok): $0.13 via deepseek
Medium (2K tok): $0.50 via deepseek
Long (8K tok): $1.68 via deepseek
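
These per-request figures are just the per-token prices times an assumed prompt/completion split. A minimal sketch using the deepseek rates; the even 50/50 split shown for the long profile is an assumption:

```python
# Sketch: estimating API cost per 1,000 requests from per-token pricing.
# The input/output token split per request is an assumption.

INPUT_PER_M = 0.14   # $ per 1M input tokens (deepseek)
OUTPUT_PER_M = 0.28  # $ per 1M output tokens (deepseek)

def cost_per_1k_requests(input_tokens: int, output_tokens: int) -> float:
    per_request = (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1e6
    return per_request * 1000

# Example: an 8K-token request split evenly between prompt and completion
print(f"${cost_per_1k_requests(4000, 4000):.2f} per 1,000 requests")  # $1.68
```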

Performance Estimates

Throughput by GPU

B200 SXM: 280.0 tok/s
B100 SXM: 280.0 tok/s
GB200 NVL72 (per GPU): 280.0 tok/s

VRAM Breakdown (B200 SXM, FP8)

Weights: 118.0 GB · KV-Cache: 0.3 GB · Activations: 24.0 GB · Overhead: 5.9 GB
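
Summing the breakdown gives the per-GPU footprint for the 2x B200 SXM FP8 configuration. A minimal sketch; the 192 GB card capacity used in the headroom check is an assumption:

```python
# Sketch: per-GPU VRAM budget for the FP8, 2-GPU B200 SXM configuration.
# Component figures are taken from the breakdown above.

components_gb = {
    "weights": 118.0,      # 236 GB FP8 weights split across 2 GPUs
    "kv_cache": 0.3,
    "activations": 24.0,
    "overhead": 5.9,
}

total_gb = sum(components_gb.values())
print(f"Per-GPU VRAM needed: {total_gb:.1f} GB")  # ~148.2 GB

def headroom(capacity_gb: float) -> float:
    return capacity_gb - total_gb

print(f"Headroom on a 192 GB card: {headroom(192):.1f} GB")  # 192 GB is an assumption
```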

Precision Impact

bf16: 236.0 GB weights/GPU
fp8: 118.0 GB weights/GPU · ~280.0 tok/s
int4: 59.0 GB weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4

Similar Models

DeepSeek V2.5 — 236B params · moe · Quality: 78 · from $0.28/M · Higher quality

Qwen 3 235B — 235B params · moe · Quality: 83 · from $3.00/M · Higher quality, More expensive

Claude Opus 4 — 200B params · dense · Quality: 90 · from $75.00/M · Larger context, Higher quality, More expensive

GPT-4o — 200B params · moe · Quality: 85 · from $10.00/M · Higher quality, More expensive

Frequently Asked Questions

How much VRAM does DeepSeek Coder V2 236B need for inference?

DeepSeek Coder V2 236B requires approximately 472.0 GB of VRAM at BF16 precision, 236.0 GB at FP8, or 118.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (30,720 bytes per token) and activations (~3.00 GB).

What is the best GPU for DeepSeek Coder V2 236B?

The top recommended GPU for DeepSeek Coder V2 236B is the B200 SXM (x2) using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $8522/month ($11.58/M tokens). Score: 100/100.
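
The $/M-token figure follows from the monthly cost and sustained throughput. A minimal sketch, assuming continuous full utilization and roughly 730 billable hours per month (a common cloud convention); lower utilization pushes the effective cost up:

```python
# Sketch: deriving $/M tokens from monthly GPU cost and sustained throughput.
# Assumptions: continuous full utilization, ~730 hours per billed month.

MONTHLY_COST = 8522.0     # $ for 2x B200 SXM
THROUGHPUT_TOK_S = 280.0  # sustained tokens/sec
HOURS_PER_MONTH = 730

tokens_per_month = THROUGHPUT_TOK_S * HOURS_PER_MONTH * 3600
cost_per_m_tokens = MONTHLY_COST / (tokens_per_month / 1e6)
print(f"${cost_per_m_tokens:.2f} per 1M tokens")  # ~$11.58
```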

How much does DeepSeek Coder V2 236B inference cost?

DeepSeek Coder V2 236B API inference starts from $0.14/M input tokens and $0.28/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.