DeepSeek V3-0324
DeepSeek · moe · 685B parameters · 131,072 context
DeepSeek V3-0324 is DeepSeek's 685B-parameter Mixture-of-Experts (MoE) model, activating 37B parameters per forward pass, with a 131,072-token context window. With 256 experts and 8 active per token, it achieves strong parameter efficiency while maintaining competitive quality. Based on InferenceBench analysis, the optimal deployment configuration is the B200 NVL (pair) (x4) at FP8 precision, achieving approximately 140.0 tokens/second at $108.33 per million tokens.
Architecture Details
Memory Requirements
- BF16 Weights: 1370.0 GB
- FP8 Weights: 685.0 GB
- INT4 Weights: 342.5 GB
Fits on (single-node)
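The weight figures above follow directly from the parameter count times the byte width of each precision. A minimal sketch (parameter count and precisions taken from this page; the helper name is illustrative):

```python
# Estimate raw weight memory for DeepSeek V3-0324 at common precisions.
PARAMS = 685e9  # 685B total parameters (from this page)

BYTES_PER_PARAM = {
    "BF16": 2.0,   # 16-bit brain float
    "FP8": 1.0,    # 8-bit float
    "INT4": 0.5,   # 4-bit integer, packed two per byte
}

def weight_memory_gb(params: float, precision: str) -> float:
    """Raw weight footprint in GB (decimal, 1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for prec in BYTES_PER_PARAM:
    print(f"{prec}: {weight_memory_gb(PARAMS, prec):.1f} GB")
# BF16: 1370.0 GB, FP8: 685.0 GB, INT4: 342.5 GB
```

Note this covers weights only; KV-cache and activation overhead are discussed in the FAQ below.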
GPU Recommendations
| Configuration | Score | Throughput | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|
| FP8 · 4 GPUs · tensorrt-llm | 98/100 | 140.0 tok/s | $39858 | $108.33 |
| FP8 · 8 GPUs · tensorrt-llm | 93/100 | 140.0 tok/s | $34088 | $92.65 |
| FP8 · 8 GPUs · tensorrt-llm | 90/100 | 140.0 tok/s | $20422 | $55.51 |
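The cost-per-million figures in these recommendations can be reproduced from monthly cost and sustained throughput. A sketch, assuming a 365/12-day billing month (that divisor reproduces the figures above, but is not stated on this page):

```python
# Derive $/M tokens from $/month and sustained tokens/sec.
SECONDS_PER_MONTH = 86400 * 365 / 12  # ~30.42-day month (assumption)

def cost_per_million(cost_per_month: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * SECONDS_PER_MONTH
    return cost_per_month / (tokens_per_month / 1e6)

print(round(cost_per_million(39858, 140.0), 2))  # 108.33
print(round(cost_per_million(20422, 140.0), 2))  # 55.51
```

At a fixed 140.0 tok/s, cost per million tokens scales linearly with the monthly hardware cost, which is why the 8-GPU configurations above rank cheaper per token despite identical throughput.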
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| deepseek | $0.28 | $0.42 | Cheapest |
| together | $0.50 | $2.80 | — |
Frequently Asked Questions
How much VRAM does DeepSeek V3-0324 need for inference?
DeepSeek V3-0324 requires approximately 1370.0 GB of VRAM at BF16 precision, 685.0 GB at FP8, or 342.5 GB at INT4 quantization. Additional VRAM is needed for KV-cache (31232 bytes per token) and activations (~3.00 GB).
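The total serving footprint combines the three components in this answer: weights, KV-cache, and activations. A sketch using the figures above (the context length and batch size are illustrative inputs, not recommendations):

```python
# Total serving VRAM = weights + KV-cache + activations (figures from this page).
KV_BYTES_PER_TOKEN = 31232   # per-token KV-cache footprint
ACTIVATIONS_GB = 3.00        # rough activation overhead

WEIGHTS_GB = {"BF16": 1370.0, "FP8": 685.0, "INT4": 342.5}

def total_vram_gb(precision: str, context_tokens: int, batch_size: int = 1) -> float:
    """Rough VRAM estimate in GB for a given precision and live context."""
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens * batch_size / 1e9
    return WEIGHTS_GB[precision] + kv_gb + ACTIVATIONS_GB

# One full-length 131,072-token sequence at FP8:
print(f"{total_vram_gb('FP8', 131072):.1f} GB")  # ~692.1 GB
```

KV-cache grows linearly with batch size, so concurrent full-context requests add roughly 4.1 GB each on top of the weight footprint.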
What is the best GPU for DeepSeek V3-0324?
The top recommended GPU for DeepSeek V3-0324 is the B200 NVL (pair) (x4) using FP8 precision. It achieves approximately 140.0 tokens/sec at an estimated cost of $39858/month ($108.33/M tokens). Score: 98/100.
How much does DeepSeek V3-0324 inference cost?
DeepSeek V3-0324 API inference starts from $0.28/M input tokens and $0.42/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
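Monthly API spend is simply token volume times the per-million rates. A sketch at DeepSeek's listed prices (the 100M input / 50M output workload is hypothetical, not from this page):

```python
# Monthly API cost at DeepSeek's listed rates for a hypothetical workload.
INPUT_RATE = 0.28   # $ per million input tokens
OUTPUT_RATE = 0.42  # $ per million output tokens

def monthly_api_cost(input_m_tokens: float, output_m_tokens: float) -> float:
    """Cost in dollars for a month's usage, volumes given in millions of tokens."""
    return input_m_tokens * INPUT_RATE + output_m_tokens * OUTPUT_RATE

# e.g. 100M input + 50M output tokens per month:
print(f"${monthly_api_cost(100, 50):.2f}")  # $49.00
```

Comparing this against the self-hosted monthly costs in the GPU recommendations gives a quick sense of the token volume at which dedicated hardware starts to pay off.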