
DeepSeek V3-0324

DeepSeek · MoE · 685B parameters · 131,072-token context

Quality: 50.0

DeepSeek V3-0324 is a 685B-parameter Mixture-of-Experts (MoE) model from DeepSeek with 37B parameters active per forward pass and a 131,072-token context window. With 256 experts and 8 active per token, it achieves strong parameter efficiency while remaining competitive on quality. Based on InferenceBench analysis, the optimal deployment configuration is the B200 NVL (pair) (x4) at FP8 precision, achieving approximately 140.0 tokens/second at $108.33 per million tokens.

Architecture Details

Type: MoE
Total Parameters: 685B
Active Parameters: 37B
Layers: 61
Hidden Dimension: 7,168
Attention Heads: 128
KV Heads: 1
Head Dimension: 128
Vocab Size: 129,280
Total Experts: 256
Active Experts: 8
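The expert counts above imply top-8 routing: a gating network scores all 256 experts for each token and only the 8 highest-scoring experts run. A minimal sketch of that selection step (the gate scores here are random placeholders, not the model's learned gating):

```python
import math
import random

# Top-k expert routing as described above: 256 experts, 8 active per token.
# Gate scores are random stand-ins for a learned gating network.
NUM_EXPERTS, TOP_K = 256, 8

def route(gate_scores):
    """Pick the top-k experts by gate score; softmax-normalize their weights."""
    top = sorted(range(NUM_EXPERTS), key=lambda i: gate_scores[i], reverse=True)[:TOP_K]
    exp = [math.exp(gate_scores[i]) for i in top]
    z = sum(exp)
    return {i: e / z for i, e in zip(top, exp)}

random.seed(0)
weights = route([random.random() for _ in range(NUM_EXPERTS)])
print(len(weights))                              # 8 experts selected
print(abs(sum(weights.values()) - 1.0) < 1e-9)   # mixture weights sum to 1
```

The token's output is then the weighted sum of the 8 selected experts' outputs, which is why only 37B of the 685B parameters are exercised per forward pass.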

Memory Requirements

BF16 Weights: 1370.0 GB
FP8 Weights: 685.0 GB
INT4 Weights: 342.5 GB

KV-Cache per Token: 31,232 bytes
Activation Estimate: 3.00 GB
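Putting the figures above together, total serving memory is roughly weights + KV-cache for all in-flight tokens + activation headroom. A rough estimator (the batch size and context length in the example are illustrative inputs, not recommendations):

```python
# Rough serving-memory estimate from the figures above:
# weights + KV-cache for in-flight tokens + activation headroom.
KV_BYTES_PER_TOKEN = 31_232   # per-token KV-cache from the table above
ACTIVATION_GB = 3.00          # activation estimate from the table above
FP8_WEIGHTS_GB = 685.0        # FP8 weight footprint from the table above

def vram_estimate_gb(context_tokens: int, batch_size: int,
                     weights_gb: float = FP8_WEIGHTS_GB) -> float:
    kv_gb = context_tokens * batch_size * KV_BYTES_PER_TOKEN / 1024**3
    return weights_gb + kv_gb + ACTIVATION_GB

# Example: full 131,072-token context, 8 concurrent sequences at FP8
print(round(vram_estimate_gb(131_072, 8), 1))  # 718.5
```

At full context, each sequence adds about 3.8 GB of KV-cache, so concurrency, not weights, is what grows the footprint beyond the base 688 GB.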

Fits on (single-node)

Instinct MI325X x2 (INT4)
B200 NVL (pair) x2 (INT4)
B300 x2 (INT4)
Groq LPU x2 (INT4)
B200 SXM x3 (INT4)
B100 SXM x3 (INT4)
GB200 NVL72 (per GPU) x3 (INT4)
GB300 NVL72 (per GPU) x3 (INT4)
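The single-node fit check above reduces to comparing the weight footprint (plus a small headroom) against aggregate VRAM. A sketch of that check; the 192 GB per-GPU figure in the example is an assumed value for illustration, not taken from this page:

```python
# Sketch: does a given precision fit in the combined VRAM of N GPUs?
# Weight sizes are from the Memory Requirements section above.
WEIGHTS_GB = {"BF16": 1370.0, "FP8": 685.0, "INT4": 342.5}

def fits(precision: str, gpu_vram_gb: float, num_gpus: int,
         headroom_gb: float = 3.0) -> bool:
    """Weights plus an activation headroom must fit in total VRAM."""
    return WEIGHTS_GB[precision] + headroom_gb <= gpu_vram_gb * num_gpus

# Example: 2 GPUs with an assumed 192 GB each holding INT4 weights
print(fits("INT4", 192.0, 2))   # 345.5 GB needed vs 384 GB available
print(fits("FP8", 192.0, 2))    # 688 GB needed vs 384 GB available
```

Note this ignores KV-cache, so a configuration that "fits" by this check may still leave little room for long contexts or large batches.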

GPU Recommendations

B200 NVL (pair) (optimal)
FP8 · 4 GPUs · tensorrt-llm
Score: 98/100
Throughput: 140.0 tok/s
Cost/Month: $39,858
Cost/M Tokens: $108.33
B200 SXM (optimal)
FP8 · 8 GPUs · tensorrt-llm
Score: 93/100
Throughput: 140.0 tok/s
Cost/Month: $34,088
Cost/M Tokens: $92.65
H200 SXM (optimal)
FP8 · 8 GPUs · tensorrt-llm
Score: 90/100
Throughput: 140.0 tok/s
Cost/Month: $20,422
Cost/M Tokens: $55.51
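The Cost/M Tokens figures above follow from dividing monthly cost by tokens produced at the listed throughput. A 730-hour month is an assumption on our part, but it reproduces all three listed numbers:

```python
# Reproduce Cost/M Tokens from monthly cost and throughput.
# The 730-hour month is an assumption that matches the listed figures.
HOURS_PER_MONTH = 730

def cost_per_million(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 3600 * HOURS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_million(39858, 140.0), 2))  # 108.33 (B200 NVL pair x4)
print(round(cost_per_million(34088, 140.0), 2))  # 92.65  (B200 SXM x8)
print(round(cost_per_million(20422, 140.0), 2))  # 55.51  (H200 SXM x8)
```

Since all three configurations hit the same 140.0 tok/s, the cheaper hardware wins on cost per token; the score weighs other factors besides cost.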

API Pricing Comparison

deepseek: $0.28/M input · $0.42/M output (Cheapest)
together: $0.50/M input · $2.80/M output
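Given the per-token rates above, a workload's API cost is a simple blend of input and output volume. A small helper (the 10M-input / 2M-output workload in the example is illustrative):

```python
# Blended API cost from the pricing table above: (input $/M, output $/M).
PROVIDERS = {"deepseek": (0.28, 0.42), "together": (0.50, 2.80)}

def job_cost(provider: str, input_m: float, output_m: float) -> float:
    """USD cost for input_m million input tokens and output_m million output tokens."""
    in_rate, out_rate = PROVIDERS[provider]
    return input_m * in_rate + output_m * out_rate

# Example workload: 10M input tokens, 2M output tokens
print(round(job_cost("deepseek", 10, 2), 2))   # 3.64
print(round(job_cost("together", 10, 2), 2))   # 10.6
```

Because together's output rate is ~6.7x deepseek's, the gap between providers widens for generation-heavy workloads.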

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tensorrt-llm

Supported Precisions

BF16 · FP8 (default) · INT4

Frequently Asked Questions

How much VRAM does DeepSeek V3-0324 need for inference?

DeepSeek V3-0324 requires approximately 1370.0 GB of VRAM at BF16 precision, 685.0 GB at FP8, or 342.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (31,232 bytes per token) and activations (~3.00 GB).

What is the best GPU for DeepSeek V3-0324?

The top recommended GPU for DeepSeek V3-0324 is the B200 NVL (pair) (x4) using FP8 precision. It achieves approximately 140.0 tokens/sec at an estimated cost of $39,858/month ($108.33/M tokens). Score: 98/100.

How much does DeepSeek V3-0324 inference cost?

DeepSeek V3-0324 API inference starts from $0.28/M input tokens and $0.42/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.