DeepSeek Coder V2 236B
DeepSeek · MoE · 236B parameters · 131,072-token context
Parameters
236B
Context Window
128K tokens
Architecture
MoE
Best GPU
B200 SXM
Cheapest API
$0.28/M
Intelligence Brief
DeepSeek Coder V2 236B is a 236B-parameter Mixture-of-Experts model (128 experts, 6 active per token) from DeepSeek, using Grouped Query Attention (GQA) with 60 layers and a hidden dimension of 5,120. It has a 131,072-token context window and supports tool use, structured output, code, math, and multilingual tasks. The most cost-effective API deployment is via deepseek at $0.28/M output tokens. For self-hosted inference, the B200 SXM delivers optimal throughput at an estimated $8522/month.
Architecture Details
Memory Requirements
BF16 Weights
472.0 GB
FP8 Weights
236.0 GB
INT4 Weights
118.0 GB
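The weight figures above follow directly from parameter count times bytes per parameter. A minimal sketch of the arithmetic (plain Python, no dependencies; the 236B parameter count is the figure from this page, and the table uses decimal GB):

```python
# Estimate raw weight memory for DeepSeek Coder V2 236B at common precisions.
# Decimal GB (1 GB = 1e9 bytes), matching the table above.
PARAMS = 236e9  # total parameters, including all experts

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")
# bf16: 472.0 GB, fp8: 236.0 GB, int4: 118.0 GB
```

Note that KV-cache and activation memory come on top of these weight figures (see the FAQ below).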
Fits on (single GPU) — most practical first
Fits on (multi-GPU with Tensor Parallelism)
Multi-GPU configurations use Tensor Parallelism (TP) to split model layers across GPUs. Requires NVLink or NVSwitch interconnect for optimal performance.
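As an illustration, a two-GPU tensor-parallel deployment could be launched with an inference engine such as vLLM. This is a hedged sketch, not the configuration behind the numbers above (those assume TensorRT-LLM); the model identifier and the exact flags supported depend on your engine version:

```python
# Sketch: serve DeepSeek Coder V2 with tensor parallelism across 2 GPUs using vLLM.
# Assumptions: vLLM is installed, the two GPUs share an NVLink/NVSwitch interconnect,
# and the Hugging Face model id below matches the checkpoint you intend to serve.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-Coder-V2-Instruct",  # assumed model id
    tensor_parallel_size=2,     # split the model across 2 GPUs (TP=2)
    quantization="fp8",         # FP8 weights, per the recommendation below
    max_model_len=131072,       # full 128K context, if memory allows
    trust_remote_code=True,
)

out = llm.generate(["Write a Python quicksort."], SamplingParams(max_tokens=256))
print(out[0].outputs[0].text)
```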
GPU Compatibility Matrix
DeepSeek Coder V2 236B is compatible with 8% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $8522 | $11.58 |
| FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $8541 | $11.61 |
| FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $12337 | $16.77 |
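The Cost/M Tokens column can be reproduced from monthly cost and sustained throughput. A rough check in plain Python; the assumptions of continuous, fully-utilized serving and an average month of 365/12 days are mine, not stated on this page:

```python
# Reproduce the Cost/M Tokens figure from monthly GPU cost and throughput.
monthly_cost = 8522.0   # $/month for the top FP8 2-GPU configuration
throughput = 280.0      # sustained output tokens/sec

seconds_per_month = 86400 * 365 / 12                 # ~2.63M seconds
tokens_per_month = throughput * seconds_per_month    # ~736M tokens
cost_per_million = monthly_cost / (tokens_per_month / 1e6)
print(f"${cost_per_million:.2f} per million tokens")  # ~$11.58
```

Lower utilization raises the effective cost per token proportionally, so treat the listed figures as a best case.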
Deployment Options
| Option | Configuration | Cost | Notes |
|---|---|---|---|
| API | deepseek | $0.28/M output tokens | |
| Single GPU | B200 NVL (pair) | $9965/mo | Min VRAM: 236 GB |
| Multi-GPU | B200 SXM x2 | $8522/mo | TP · 280.0 tok/s |
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| deepseek | $0.14 | $0.28 | Cheapest |
| together | $0.90 | $0.90 | |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| deepseek (Best Value) | $0.14 | $0.28 | $2 |
| together | $0.90 | $0.90 | $9 |
Cost per 1,000 Requests
| Request Size | Cost per 1,000 Requests | Provider |
|---|---|---|
| Short (500 tok) | $0.13 | deepseek |
| Medium (2K tok) | $0.50 | deepseek |
| Long (8K tok) | $1.68 | deepseek |
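For other request profiles, per-request API cost is just the token counts multiplied by the per-million rates. A small helper in plain Python; the even input/output split in the example is an assumption for illustration, not a figure from this page:

```python
# Estimate API cost per 1,000 requests at the deepseek pricing listed above.
INPUT_RATE = 0.14 / 1e6    # $ per input token
OUTPUT_RATE = 0.28 / 1e6   # $ per output token

def cost_per_1k_requests(input_tokens: int, output_tokens: int) -> float:
    per_request = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return per_request * 1000

# Hypothetical profile: 8K tokens per request, split evenly between input and output.
print(f"${cost_per_1k_requests(4000, 4000):.2f}")  # $1.68
```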
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, FP8)
Precision Impact
Per-GPU weight figures below assume the recommended 2-GPU tensor-parallel split.

| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| bf16 | 236.0 GB | |
| fp8 | 118.0 GB | ~280.0 tok/s |
| int4 | 59.0 GB | |
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy DeepSeek Coder V2 236B
Self-Hosted Infrastructure
Similar Models
| Model | Parameters | Architecture | Quality | Price |
|---|---|---|---|---|
| DeepSeek V2.5 | 236B | MoE | 78 | from $0.28/M |
| Qwen 3 235B | 235B | MoE | 83 | from $3.00/M |
| Nemotron Ultra 253B | 253B | dense | 86 | from $6.00/M |
| Claude Opus 4 | 200B | dense | 90 | from $75.00/M |
| GPT-4o | 200B | MoE | 85 | from $10.00/M |
Frequently Asked Questions
How much VRAM does DeepSeek Coder V2 236B need for inference?
DeepSeek Coder V2 236B requires approximately 472.0 GB of VRAM at BF16 precision, 236.0 GB at FP8, or 118.0 GB at INT4 quantization. Additional VRAM is needed for KV-cache (30720 bytes per token) and activations (~3.00 GB).
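To put the KV-cache figure in context, its memory cost scales linearly with sequence length. A quick estimate using only the numbers quoted in the answer above (decimal GB; a single full-context request assumed):

```python
# Estimate KV-cache memory for one request using the figures quoted above.
KV_BYTES_PER_TOKEN = 30720   # per-token KV-cache size from the answer above
CONTEXT_TOKENS = 131072      # full context window

kv_gb = KV_BYTES_PER_TOKEN * CONTEXT_TOKENS / 1e9
print(f"KV cache for one full-context request: ~{kv_gb:.1f} GB")  # ~4.0 GB
```

Serving many concurrent long-context requests multiplies this figure, which is why the batch size you can sustain depends on the VRAM left over after loading the weights.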
What is the best GPU for DeepSeek Coder V2 236B?
The top recommended GPU for DeepSeek Coder V2 236B is the B200 SXM (x2) using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $8522/month ($11.58/M tokens). Score: 100/100.
How much does DeepSeek Coder V2 236B inference cost?
DeepSeek Coder V2 236B API inference starts from $0.14/M input tokens and $0.28/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.