DeepSeek R1
DeepSeek · moe · 671B parameters · 131,072 context
Parameters
671B
Context Window
128K tokens
Architecture
MoE
Best GPU
B200 NVL (pair)
Cheapest API
$2.19/M
Quality Score
88/100
Intelligence Brief
DeepSeek R1 is a 671B-parameter Mixture-of-Experts model (256 experts, 8 active per token) from DeepSeek, featuring Grouped Query Attention (GQA) across 61 layers with a hidden dimension of 7,168. With a 131,072-token context window, it supports tool use, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it achieves MMLU 90.8, HumanEval 71.7, and GSM8K 97.3. The most cost-effective API deployment is via deepseek at $2.19/M output tokens. For self-hosted inference, the B200 NVL (pair) delivers optimal throughput at $39,858/month.
Architecture Details
Memory Requirements
BF16 Weights
1342.0 GB
FP8 Weights
671.0 GB
INT4 Weights
335.5 GB
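The weight figures above follow directly from parameter count times bytes per parameter. A minimal sketch (using 1 GB = 1e9 bytes, as the table does):

```python
# Estimate raw weight memory for DeepSeek R1 (671B params) at each precision.
# Bytes per parameter: BF16 = 2, FP8 = 1, INT4 = 0.5.
PARAMS = 671e9

def weight_gb(bytes_per_param: float) -> float:
    """Weight footprint in GB (1 GB = 1e9 bytes)."""
    return PARAMS * bytes_per_param / 1e9

for name, bpp in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: {weight_gb(bpp):.1f} GB")
# BF16: 1342.0 GB, FP8: 671.0 GB, INT4: 335.5 GB — matching the table above.
```

Note this covers weights only; KV-cache and activation memory (see the FAQ below) come on top.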
Fits on (multi-GPU with Tensor Parallelism)
Multi-GPU configurations use Tensor Parallelism (TP) to split model layers across GPUs. Requires NVLink or NVSwitch interconnect for optimal performance.
This model requires multi-GPU deployment. Minimum: 2x Groq LPU (230GB each) with Tensor Parallelism.
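A quick feasibility check for a tensor-parallel split can be sketched as follows. The per-GPU overhead figure and the 192 GB-per-GPU VRAM assumption are illustrative, not measured values:

```python
# Sketch: does an N-way tensor-parallel split fit in per-GPU VRAM?
# The overhead number (KV-cache + activations) is an illustrative assumption.
def fits_tp(weight_gb: float, tp: int, gpu_vram_gb: float,
            overhead_gb: float = 10.0) -> bool:
    """Weights are sharded evenly across `tp` GPUs; overhead is per-GPU."""
    per_gpu = weight_gb / tp + overhead_gb
    return per_gpu <= gpu_vram_gb

# FP8 weights (671 GB) across 4 GPUs with an assumed 192 GB each:
print(fits_tp(671.0, 4, 192.0))  # 167.75 + 10 GB per GPU -> fits
print(fits_tp(671.0, 2, 192.0))  # 335.5 + 10 GB per GPU -> does not fit
```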
GPU Compatibility Matrix
DeepSeek R1 fits on only 1% of the GPU configurations evaluated, across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 4 GPUs · tensorrt-llm | 98/100 | 140.0 tok/s | 7.1ms | 1ms | $39,858 | $108.33 |
| FP8 · 8 GPUs · tensorrt-llm | 93/100 | 140.0 tok/s | 7.1ms | 1ms | $34,088 | $92.65 |
| FP8 · 8 GPUs · tensorrt-llm | 90/100 | 140.0 tok/s | 7.1ms | 1ms | $20,422 | $55.51 |
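The Cost/M Tokens column follows from dividing the monthly cost by the tokens produced in a month at sustained throughput. A sketch, assuming 24/7 utilization and an average month of 365/12 ≈ 30.42 days:

```python
# Cost per million output tokens at sustained, fully-utilized throughput.
# Assumes 24/7 utilization and an average month of 365/12 days.
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 86_400 * (365 / 12)
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_m_tokens(39_858, 140.0), 2))  # 108.33, the top config above
```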
Deployment Options
API Deployment
deepseek
$2.19/M
output tokens
Single GPU
Requires multi-GPU setup (671 GB VRAM needed)
Multi-GPU
B200 NVL (pair) x4
140.0 tok/s
TP · $39,858/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| deepseek | $0.55 | $2.19 | Cheapest |
| together | $3.00 | $7.00 | |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| deepseek (Best Value) | $0.55 | $2.19 | $14 |
| together | $3.00 | $7.00 | $50 |
Cost per 1,000 Requests
Short (500 tok)
$0.71
via deepseek
Medium (2K tok)
$2.85
via deepseek
Long (8K tok)
$8.78
via deepseek
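Per-request cost is the sum of input- and output-token charges at the provider's two rates. The input/output split behind the figures above isn't stated, so the split in this sketch is an assumption:

```python
# Blended API cost per 1,000 requests, given an input/output token split.
# The 50/50 split used below is an assumption for illustration.
def cost_per_1k_requests(in_tok: int, out_tok: int,
                         in_price: float = 0.55,   # deepseek $/M input tokens
                         out_price: float = 2.19) -> float:  # $/M output tokens
    per_request = (in_tok * in_price + out_tok * out_price) / 1e6
    return per_request * 1000

# A 500-token request split evenly between input and output:
print(cost_per_1k_requests(250, 250))  # lands near the $0.71 short-request figure
```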
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 NVL (pair), FP8)
Precision Impact

| Precision | Weights per GPU (4-way TP) | Throughput |
|---|---|---|
| bf16 | 335.5 GB | — |
| fp8 | 167.8 GB | ~140.0 tok/s |
| int4 | 83.9 GB | — |
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy DeepSeek R1
Similar Models
DeepSeek V3
671B params · moe
Quality: 81
from $0.42/M
DeepSeek V3-0324
685B params · moe
Quality: 81
from $0.42/M
Gemini 2.0 Pro
600B params · moe
Quality: 88
from $4.00/M
Grok 3
600B params · moe
Quality: 90
from $15.00/M
Megatron-Turing NLG 530B
530B params · dense
Quality: 58
Frequently Asked Questions
How much VRAM does DeepSeek R1 need for inference?
DeepSeek R1 requires approximately 1342.0 GB of VRAM at BF16 precision, 671.0 GB at FP8, or 335.5 GB at INT4 quantization. Additional VRAM is needed for KV-cache (31,232 bytes per token) and activations (~3.00 GB).
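The per-token KV-cache figure lets you size cache memory for a given context length and batch size. A minimal sketch using the number quoted above:

```python
# KV-cache sizing from the per-token figure above (31,232 bytes/token).
KV_BYTES_PER_TOKEN = 31_232

def kv_cache_gb(context_tokens: int, batch_size: int = 1) -> float:
    return context_tokens * batch_size * KV_BYTES_PER_TOKEN / 1e9

# Full 131,072-token context, single sequence:
print(round(kv_cache_gb(131_072), 2))  # ≈ 4.09 GB on top of the weights
```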
What is the best GPU for DeepSeek R1?
The top recommended GPU for DeepSeek R1 is the B200 NVL (pair) (x4) using FP8 precision. It achieves approximately 140.0 tokens/sec at an estimated cost of $39,858/month ($108.33/M tokens). Score: 98/100.
How much does DeepSeek R1 inference cost?
DeepSeek R1 API inference starts from $0.55/M input tokens and $2.19/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
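A rough API-vs-self-hosted comparison can be sketched from the numbers above. This ignores input-token charges and assumes the fixed monthly GPU cost from the top configuration, so it is a first-order estimate only:

```python
# Rough API-vs-self-hosted comparison (output tokens only, illustrative).
API_OUT_PER_M = 2.19          # deepseek, $/M output tokens
SELF_HOSTED_MONTHLY = 39_858  # top GPU config above, $/month (fixed)

def cheaper_option(m_tokens_per_month: float) -> str:
    """Which is cheaper at a given monthly output volume (in millions of tokens)?"""
    api_cost = m_tokens_per_month * API_OUT_PER_M
    return "api" if api_cost <= SELF_HOSTED_MONTHLY else "self-hosted"

# Break-even volume: 39,858 / 2.19 ≈ 18,200M output tokens/month.
print(cheaper_option(1_000))   # api
print(cheaper_option(20_000))  # self-hosted
```

Note that at the quoted 140 tok/s a single deployment produces only ~368M tokens/month, far below this break-even point, which is why batched throughput (many concurrent streams) dominates self-hosting economics.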