DeepSeek Math 7B
DeepSeek · dense · 7.24B parameters · 4,096-token context
Parameters: 7.24B
Context Window: 4K tokens
Architecture: Dense
Best GPU: A10G
Intelligence Brief
DeepSeek Math 7B is a 7.24B-parameter dense model from DeepSeek, using Multi-Head Attention (MHA) with 30 layers and a hidden size of 4,096. With a 4,096-token context window, it targets code, math, and reasoning workloads. For self-hosted inference, the A10G is the top-scoring GPU at an estimated $285/month.
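As a concrete starting point, the sketch below loads the model in BF16 with Hugging Face Transformers and checks the architecture figures quoted above. The checkpoint name deepseek-ai/deepseek-math-7b-instruct is an assumption (one of the published DeepSeek Math variants), not something this page specifies.

```python
# Minimal sketch: load DeepSeek Math 7B in BF16 and inspect its architecture.
# The checkpoint name is an assumption; base and RL variants also exist.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-math-7b-instruct"  # assumed checkpoint

config = AutoConfig.from_pretrained(model_id)
print(config.num_hidden_layers, config.hidden_size, config.max_position_embeddings)
# Expected per the brief above: 30 layers, 4096 hidden size, 4096-token context

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~14.5 GB of weights (see Memory Requirements)
    device_map="auto",
)

prompt = "Compute the integral of x^2 from 0 to 3."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```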
Architecture Details
Memory Requirements
BF16 Weights: 14.5 GB
FP8 Weights: 7.2 GB
INT4 Weights: 3.6 GB
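These weight figures are just the parameter count times bytes per parameter; a quick sanity check is sketched below (real deployments add KV-cache and activation overhead on top, as covered in the FAQ).

```python
# Back-of-the-envelope weight memory: parameters x bytes per parameter.
params = 7.24e9

for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision}: ~{params * bytes_per_param / 1e9:.1f} GB")
# BF16: ~14.5 GB, FP8: ~7.2 GB, INT4: ~3.6 GB, matching the table above
```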
GPU Compatibility Matrix
DeepSeek Math 7B is compatible with 95% of the GPU configurations in our matrix, which covers 41 GPUs at 3 precision levels (BF16, FP8, INT4).
GPU Recommendations
A10G · BF16 · 1 GPU · vLLM · Score: 100/100
Throughput: 223.7 tok/s · Latency (ITL): 4.5 ms · Est. TTFT: 1 ms
Cost/Month: $285 · Cost/M Tokens: $0.48

BF16 · 1 GPU · vLLM · Score: 100/100
Throughput: 347.9 tok/s · Latency (ITL): 2.9 ms · Est. TTFT: 0 ms
Cost/Month: $332 · Cost/M Tokens: $0.36

BF16 · 1 GPU · vLLM · Score: 100/100
Throughput: 375.9 tok/s · Latency (ITL): 2.7 ms · Est. TTFT: 0 ms
Cost/Month: $370 · Cost/M Tokens: $0.37
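All three recommendations assume a single GPU serving BF16 weights with vLLM. A minimal serving sketch is shown below; the checkpoint name is again an assumption, and the 4,096-token limit mirrors the context window listed above.

```python
# Minimal sketch: single-GPU, BF16 serving of DeepSeek Math 7B with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/deepseek-math-7b-instruct",  # assumed checkpoint
    dtype="bfloat16",
    max_model_len=4096,           # matches the 4K context window
    gpu_memory_utilization=0.90,  # leave headroom on a 24 GB card such as the A10G
)

params = SamplingParams(temperature=0.0, max_tokens=512)
outputs = llm.generate(["Prove that the sum of two even integers is even."], params)
print(outputs[0].outputs[0].text)
```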
Deployment Options
API Deployment: no API pricing available
Single GPU: A10G · $285/mo · Min VRAM: 7 GB
Multi-GPU: RTX 3080 x2 (tensor parallel) · 439.3 tok/s · $266/mo
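The multi-GPU figure relies on tensor parallelism, which splits each layer's weights across the cards; with vLLM that is a one-parameter change from the single-GPU sketch above.

```python
# Minimal sketch: tensor-parallel serving across two GPUs (e.g. 2x RTX 3080).
# Each GPU holds roughly half of the ~14.5 GB of BF16 weights plus its share of KV-cache.
from vllm import LLM

llm = LLM(
    model="deepseek-ai/deepseek-math-7b-instruct",  # assumed checkpoint
    dtype="bfloat16",
    max_model_len=4096,
    tensor_parallel_size=2,  # shard attention and MLP weights across 2 GPUs
)
```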
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Throughput by GPU
VRAM Breakdown (A10G, BF16)
Precision Impact
BF16: 14.5 GB weights/GPU · ~223.7 tok/s
FP8: 7.2 GB weights/GPU
INT4: 3.6 GB weights/GPU
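Weight memory scales roughly with bits per parameter, so INT4 cuts the footprint to about a quarter of BF16. As one hedged illustration (not a method this page prescribes), 4-bit loading via bitsandbytes in Transformers lands in the same ~3.6 GB range:

```python
# Illustrative sketch: 4-bit (INT4-class) weight loading with bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store weights in 4-bit, compute in BF16
)

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-math-7b-instruct",  # assumed checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
```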
Capabilities
Features: code, math, reasoning
Supported Frameworks: vLLM
Supported Precisions: BF16, FP8, INT4
Where to Deploy DeepSeek Math 7B
Self-Hosted Infrastructure
Similar Models
Prometheus 2 7B · 7.24B params · dense · Quality: 50
NV EmbedQA Mistral 7B · 7.24B params · dense · Quality: 50 · from $0.01/M
Falcon Mamba 7B · 7.27B params · hybrid · Quality: 50 · from $0.15/M
BioMistral 7B · 7.2B params · dense · Quality: 50
SaulLM 7B · 7.2B params · dense · Quality: 50
Frequently Asked Questions
How much VRAM does DeepSeek Math 7B need for inference?
DeepSeek Math 7B requires approximately 14.5 GB of VRAM for weights at BF16 precision, 7.2 GB at FP8, or 3.6 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (491,520 bytes, about 0.47 MB, per token: 2 tensors x 30 layers x 4,096 hidden dims x 2 bytes at BF16) and activations (~0.80 GB).
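Putting those pieces together for a full 4,096-token context gives a rough total (an estimate only; real allocators add fragmentation and framework overhead):

```python
# Rough total VRAM at BF16 with a full 4,096-token context.
weights_gb     = 14.5                       # BF16 weights
kv_per_token   = 491_520                    # bytes: 2 (K+V) * 30 layers * 4096 hidden * 2 bytes
kv_cache_gb    = 4096 * kv_per_token / 1e9  # ~2.0 GB at full context
activations_gb = 0.80

print(f"~{weights_gb + kv_cache_gb + activations_gb:.1f} GB")  # ~17.3 GB, within an A10G's 24 GB
```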
What is the best GPU for DeepSeek Math 7B?
The top recommended GPU for DeepSeek Math 7B is the A10G using BF16 precision. It achieves approximately 223.7 tokens/sec at an estimated cost of $285/month ($0.48/M tokens). Score: 100/100.
How much does DeepSeek Math 7B inference cost?
DeepSeek Math 7B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.
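For self-hosted setups, the per-million-token figures quoted above follow from the monthly GPU cost divided by the tokens generated at sustained throughput. A back-of-the-envelope version, assuming continuous full utilization (which real traffic rarely sustains), is sketched below.

```python
# Rough cost per million tokens, assuming 24/7 full utilization of one A10G.
monthly_cost_usd = 285.0   # A10G estimate from the recommendation above
throughput_tps   = 223.7   # tokens per second

tokens_per_month = throughput_tps * 60 * 60 * 24 * 30  # ~580M tokens
print(f"${monthly_cost_usd / (tokens_per_month / 1e6):.2f}/M tokens")
# ~$0.49/M, close to the $0.48/M quoted above
```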