DeepSeek

DeepSeek V3

DeepSeek · moe · 671B parameters · 131,072 context

Quality
81.0

Parameters

671B

Context Window

128K tokens

Architecture

MoE

Best GPU

B200 NVL (pair)

Cheapest API

$0.42/M

Quality Score

81/100

Intelligence Brief

DeepSeek V3 is a 671B-parameter Mixture-of-Experts model (256 experts, 8 active) from DeepSeek, featuring Grouped Query Attention (GQA) across 61 layers with a 7,168 hidden dimension. With a 131,072-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 87.1, HumanEval 65.0, and GSM8K 89.3. The most cost-effective API deployment is via deepseek at $0.42/M output tokens. For self-hosted inference, the B200 NVL (pair) delivers optimal throughput at $39,858/month.
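
For the API route, a minimal sketch of calling DeepSeek V3 through DeepSeek's OpenAI-compatible endpoint; the base URL and the deepseek-chat model alias are assumptions drawn from DeepSeek's public documentation and should be checked against the current docs.

```python
# Minimal sketch: calling DeepSeek V3 via DeepSeek's OpenAI-compatible API.
# Assumptions: the base_url and model alias below follow DeepSeek's published
# conventions ("https://api.deepseek.com", "deepseek-chat"); verify before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",        # issued from the DeepSeek console
    base_url="https://api.deepseek.com",    # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                  # assumed alias serving DeepSeek V3
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Mixture-of-Experts routing in two sentences."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```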

Architecture Details

Type: MoE
Total Parameters: 671B
Active Parameters: 37B
Layers: 61
Hidden Dimension: 7,168
Attention Heads: 128
KV Heads: 1
Head Dimension: 128
Vocab Size: 129,280
Total Experts: 256
Active Experts: 8

Memory Requirements

BF16 Weights

1342.0 GB

FP8 Weights

671.0 GB

INT4 Weights

335.5 GB

KV-Cache per Token: 31,232 bytes
Activation Estimate: 3.00 GB

This model requires multi-GPU deployment. Minimum: 2x Groq LPU (230GB each) with Tensor Parallelism.
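
These figures follow from the Architecture Details table above: weight size is total parameters times bytes per parameter, and the KV-cache per token is keys plus values across all layers. A minimal sketch of that arithmetic (bytes-per-parameter values are the usual conventions for each precision):

```python
# Sketch of the memory arithmetic behind the figures above, using values from
# the Architecture Details table. Bytes per parameter follow the usual
# conventions (BF16 = 2, FP8 = 1, INT4 = 0.5); real deployments add framework
# overhead on top of these raw sizes.
TOTAL_PARAMS = 671e9   # total parameters
LAYERS = 61
KV_HEADS = 1
HEAD_DIM = 128
KV_CACHE_BYTES = 2     # assuming BF16 KV-cache entries

for name, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{name} weights: {TOTAL_PARAMS * bytes_per_param / 1e9:.1f} GB")
# -> 1342.0 GB, 671.0 GB, 335.5 GB

# Keys and values, one KV head of dimension 128, for all 61 layers:
kv_per_token = 2 * KV_HEADS * HEAD_DIM * KV_CACHE_BYTES * LAYERS
print(f"KV-cache per token: {kv_per_token} bytes")
# -> 31232 bytes
```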

GPU Compatibility Matrix

DeepSeek V3 fits on 1% of the evaluated GPU configurations (41 GPUs at 3 precision levels each).

Precision views: BF16 (Full) · FP8 (Half) · INT4 (Quarter)
Blackwell (7 GPUs)
  B200 NVL (pair): 360GB
  B300: 288GB
  B100 SXM: 192GB
  GB200 NVL72 (per GPU): 192GB
Hopper (7 GPUs)
  H100 NVL 94GB (per GPU pair): 188GB
  H200 SXM: 141GB
  H20: 96GB
  GH200: 96GB
Ada Lovelace (11 GPUs)
  L40S: 48GB
  L40: 48GB
  RTX 6000 Ada: 48GB
  L20: 48GB
Ampere (16 GPUs)
  A100 80GB SXM: 80GB
  A100 80GB PCIe: 80GB
  A16: 64GB
  RTX A6000: 48GB

GPU Recommendations

B200 NVL (pair) · optimal

FP8 · 4 GPUs · tensorrt-llm

Score: 98/100

Throughput: 140.0 tok/s
Latency (ITL): 7.1ms
Est. TTFT: 1ms
Cost/Month: $39,858
Cost/M Tokens: $108.33

B200 SXM · optimal

FP8 · 8 GPUs · tensorrt-llm

Score: 93/100

Throughput: 140.0 tok/s
Latency (ITL): 7.1ms
Est. TTFT: 1ms
Cost/Month: $34,088
Cost/M Tokens: $92.65

H200 SXM · optimal

FP8 · 8 GPUs · tensorrt-llm

Score: 90/100

Throughput: 140.0 tok/s
Latency (ITL): 7.1ms
Est. TTFT: 1ms
Cost/Month: $20,422
Cost/M Tokens: $55.51
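
The Cost/M Tokens figures above are consistent with dividing monthly cost by the tokens produced at the listed throughput over a full month of sustained use (about 730 hours); a sketch of that arithmetic, under the assumption of 100% utilization:

```python
# Sketch: deriving $/M tokens from monthly GPU cost and sustained throughput.
# Assumption: ~730 hours per month (365 * 24 / 12) at 100% utilization; this
# reproduces the figures above but is an inferred formula, not one stated on
# the page.
HOURS_PER_MONTH = 730
SECONDS_PER_MONTH = HOURS_PER_MONTH * 3600

def cost_per_million_tokens(monthly_cost_usd: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * SECONDS_PER_MONTH
    return monthly_cost_usd / (tokens_per_month / 1e6)

for config, monthly_usd, tok_per_s in [
    ("B200 NVL (pair) x4", 39858, 140.0),
    ("B200 SXM x8",        34088, 140.0),
    ("H200 SXM x8",        20422, 140.0),
]:
    print(f"{config}: ${cost_per_million_tokens(monthly_usd, tok_per_s):.2f}/M tokens")
# -> $108.33, $92.65, $55.51
```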

Deployment Options

API

API Deployment

deepseek

$0.42/M

output tokens

Self-Hosted

Single GPU: requires multi-GPU setup (671 GB VRAM needed)

Scale

Multi-GPU: B200 NVL (pair) x4 · 140.0 tok/s · TP · $39,858/mo
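
The recommended configurations above use tensorrt-llm; as an illustration of the same tensor-parallel layout with vLLM (another framework listed under Supported Frameworks), here is a hedged sketch. The Hugging Face model ID, GPU count, and context cap are assumptions to adapt to your own cluster.

```python
# Sketch: tensor-parallel serving of DeepSeek V3 with vLLM. The model ID,
# GPU count, and max_model_len below are assumptions; the page's optimal
# configs use tensorrt-llm, so treat this as an illustrative alternative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3",   # assumed Hugging Face repo for the FP8 checkpoint
    tensor_parallel_size=8,            # e.g. 8x H200 SXM, as in the table above
    max_model_len=32768,               # cap context to keep KV-cache within budget
    trust_remote_code=True,
)

outputs = llm.generate(
    ["Explain tensor parallelism in one paragraph."],
    SamplingParams(max_tokens=200, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```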

API Pricing Comparison

Provider    Input $/M    Output $/M    Badges
deepseek    $0.28        $0.42         Cheapest
together    $0.50        $2.80

Cost Analysis

Provider                 Input $/M    Output $/M    ~Monthly Cost
deepseek (Best Value)    $0.28        $0.42         $4
together                 $0.50        $2.80         $17

Cost per 1,000 Requests

Short (500 tok)

$0.22

via deepseek

Medium (2K tok)

$0.90

via deepseek

Long (8K tok)

$3.08

via deepseek
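
The per-request figures above depend on how each request's tokens split between input and output, which the page does not state; a minimal sketch of the arithmetic at deepseek's listed rates, with an illustrative (assumed) split:

```python
# Sketch: cost per 1,000 requests at deepseek's listed rates
# ($0.28/M input, $0.42/M output). The input/output split per request is an
# assumption for illustration; the page does not state the split behind its
# own figures.
INPUT_PER_M, OUTPUT_PER_M = 0.28, 0.42

def cost_per_1k_requests(input_tokens: int, output_tokens: int) -> float:
    per_request = (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1e6
    return per_request * 1000

# Example: a "medium" request assumed as 500 input + 1,500 output tokens.
print(f"${cost_per_1k_requests(500, 1500):.2f} per 1,000 requests")
```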

Performance Estimates

Throughput by GPU

B200 NVL (pair)
140.0 tok/s
B200 SXM
140.0 tok/s
H200 SXM
140.0 tok/s

VRAM Breakdown (B200 NVL (pair), FP8)

Weights: 167.8 GB · KV-Cache: 0.3 GB · Activations: 24.0 GB · Overhead: 8.4 GB

Precision Impact

bf16: 335.5 GB weights/GPU
fp8: 167.8 GB weights/GPU · ~140.0 tok/s
int4: 83.9 GB weights/GPU
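
The per-GPU weight figures above are simply the quantized checkpoint size divided across the 4-way tensor-parallel group used in the B200 NVL (pair) configuration; a minimal sketch of that division (the KV-cache, activation, and overhead terms depend on batch size and serving engine and are not derived here):

```python
# Sketch: per-GPU weight footprint when the checkpoint is split across a
# tensor-parallel group (here 4-way, matching the B200 NVL (pair) config).
# KV-cache, activations, and overhead vary with batch size, sequence length,
# and serving engine, so only the weight term is derived.
def weights_per_gpu_gb(total_weights_gb: float, num_gpus: int) -> float:
    return total_weights_gb / num_gpus

for precision, total_gb in [("bf16", 1342.0), ("fp8", 671.0), ("int4", 335.5)]:
    print(f"{precision}: {weights_per_gpu_gb(total_gb, 4):.1f} GB per GPU (4-way TP)")
# bf16 -> 335.5 GB, fp8 -> 167.8 GB, int4 -> 83.9 GB, matching the figures above.
```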

Quality Benchmarks

Overall: Above Average (89th percentile across all models)

MMLU: 87.1 · Above Average (84th pctile)
HumanEval: 65.0 · Above Average (77th pctile)
GSM8K: 89.3 · Average (66th pctile)
MT-Bench: 87.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4
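
Since structured output is among the listed capabilities, here is a hedged sketch of requesting JSON output through the same OpenAI-compatible client pattern shown earlier; whether a given provider exposes response_format for this model is an assumption to verify against its documentation.

```python
# Sketch: structured (JSON) output via an OpenAI-compatible API, matching the
# "Structured Output" capability listed above. Provider support for
# response_format with this model is an assumption to verify.
import json
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "Reply only with JSON containing keys 'title' and 'tags'."},
        {"role": "user", "content": "Tag this text: 'MoE models route tokens to expert subnetworks.'"},
    ],
    response_format={"type": "json_object"},   # assumed JSON-mode support
)
print(json.loads(response.choices[0].message.content))
```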


Similar Models

DeepSeek V3-0324

685B params · moe

Quality: 81

from $0.42/M

Similar specs

DeepSeek R1

671B params · moe

Quality: 88

from $2.19/M

Higher quality, more expensive

Gemini 2.0 Pro

600B params · moe

Quality: 88

from $4.00/M

Larger context, higher quality, more expensive

Grok 3

600B params · moe

Quality: 90

from $15.00/M

Higher quality, more expensive

Frequently Asked Questions

How much VRAM does DeepSeek V3 need for inference?

DeepSeek V3 requires approximately 1342.0 GB of VRAM at BF16 precision, 671.0 GB at FP8, or 335.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (31,232 bytes per token) and activations (~3.00 GB).

What is the best GPU for DeepSeek V3?

The top recommended GPU for DeepSeek V3 is the B200 NVL (pair) (x4) using FP8 precision. It achieves approximately 140.0 tokens/sec at an estimated cost of $39,858/month ($108.33/M tokens). Score: 98/100.

How much does DeepSeek V3 inference cost?

DeepSeek V3 API inference starts from $0.28/M input tokens and $0.42/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.