Qwen 3 4B
Alibaba · dense · 4B parameters · 131,072-token context
| Spec | Value |
|---|---|
| Parameters | 4B |
| Context Window | 128K tokens |
| Architecture | Dense |
| Best GPU | A4000 |
| Cheapest API | $0.10/M |
| Quality Score | 57/100 |
Intelligence Brief
Qwen 3 4B is a 4B-parameter dense model from Alibaba, featuring Grouped Query Attention (GQA) across 36 layers with a hidden dimension of 2,560. With a 131,072-token context window, it supports tool use, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 64, HumanEval 35, and GSM8K 65. The most cost-effective API deployment is via together at $0.10/M output tokens; for self-hosted inference, an A4000 delivers the best throughput per dollar at $161/month.
Architecture Details
Memory Requirements

| Precision | Weights |
|---|---|
| BF16 | 8.0 GB |
| FP8 | 4.0 GB |
| INT4 | 2.0 GB |
GPU Compatibility Matrix
Qwen 3 4B is compatible with 98% of GPU configurations, spanning 41 GPUs at three precision levels.
GPU Recommendations
| Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| BF16 · 1 GPU · vllm (A4000) | 100/100 | 302.4 tok/s | 3.3 ms | 1 ms | $161 | $0.20 |
| BF16 · 1 GPU · vllm | 100/100 | 483.9 tok/s | 2.1 ms | 0 ms | $304 | $0.24 |
| BF16 · 1 GPU · vllm | 100/100 | 340.2 tok/s | 2.9 ms | 1 ms | $237 | $0.27 |
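The throughput and cost columns are mutually consistent: single-stream decode throughput is the reciprocal of inter-token latency, and cost per million tokens divides the monthly GPU cost by tokens generated. A sketch, assuming full utilization and a 30-day month:

```python
def throughput_from_itl(itl_ms: float) -> float:
    """Single-stream decode throughput implied by inter-token latency."""
    return 1000.0 / itl_ms

def cost_per_million_tokens(monthly_cost_usd: float, tok_per_s: float) -> float:
    """$/M tokens at full utilization over a 30-day month."""
    tokens_per_month = tok_per_s * 3600 * 24 * 30
    return monthly_cost_usd / (tokens_per_month / 1e6)

print(round(throughput_from_itl(3.3), 1))             # 303.0 tok/s, matching 302.4
print(round(cost_per_million_tokens(161, 302.4), 2))  # 0.21, close to the listed $0.20
```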
Deployment Options
| Option | Configuration | Performance / Cost |
|---|---|---|
| API | together | $0.10/M output tokens |
| Single GPU | A4000 (min VRAM 4 GB) | $161/mo |
| Multi-GPU | RTX 3070 ×2 (tensor parallel) | 434.9 tok/s · $171/mo |
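For self-hosting with vLLM (the serving framework used in the recommendations above), a launch command might look like the following. The Hugging Face model id `Qwen/Qwen3-4B` and the flag values are assumptions, not taken from this page; adjust for your hardware.

```shell
# Single A4000 at BF16 (model id assumed: Qwen/Qwen3-4B)
vllm serve Qwen/Qwen3-4B --dtype bfloat16 --max-model-len 32768

# RTX 3070 x2 with tensor parallelism (the "TP" multi-GPU option above)
vllm serve Qwen/Qwen3-4B --dtype bfloat16 --tensor-parallel-size 2
```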
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $0.10 | $0.10 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $0.10 | $0.10 | $1 |
Cost per 1,000 Requests
| Request size | Cost per 1,000 requests | Provider |
|---|---|---|
| Short (500 tok) | $0.07 | together |
| Medium (2K tok) | $0.28 | together |
| Long (8K tok) | $1.00 | together |
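These per-request figures are token totals multiplied by the per-million prices. A sketch of the arithmetic; the 800-input/2,000-output split for a "medium" request is an illustrative assumption (the page does not state its token mix), chosen so it reproduces the $0.28 figure:

```python
def cost_per_1k_requests(in_tok: int, out_tok: int,
                         in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost of 1,000 requests: per-request token counts x per-million prices."""
    return 1000 * (in_tok * in_price_per_m + out_tok * out_price_per_m) / 1e6

# together prices: $0.10/M input, $0.10/M output.
# Assumed medium request: 800 input + 2,000 output tokens.
print(cost_per_1k_requests(800, 2000, 0.10, 0.10))  # 0.28
```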
Performance Estimates
[Charts: throughput by GPU · VRAM breakdown (A4000, BF16)]
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 8.0 GB | ~302.4 tok/s |
| FP8 | 4.0 GB | — |
| INT4 | 2.0 GB | — |
Similar Models
| Model | Params | Architecture | Quality | API from |
|---|---|---|---|---|
| Qwen 3 1.7B | 1.7B | dense | 50 | — |
| Qwen 3 8B | 8.2B | dense | 70 | $0.20/M |
| Minitron 4B | 4B | dense | 50 | $0.06/M |
| Nemotron Mini 4B | 4B | dense | 48 | $0.06/M |
| Phi 3 Mini 3.8B | 3.8B | dense | 64 | — |
Frequently Asked Questions
How much VRAM does Qwen 3 4B need for inference?
Qwen 3 4B requires approximately 8.0 GB of VRAM for weights at BF16 precision, 4.0 GB at FP8, or 2.0 GB at INT4 quantization. Additional VRAM is needed for the KV cache (147,456 bytes per token) and activations (~0.50 GB).
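The KV-cache figure follows from the GQA geometry: two tensors (K and V) per layer, each of size KV heads × head dim, at the cache dtype's width. A sketch of a total-VRAM estimate; the 8-KV-head × 128-head-dim split is an assumption (not stated on this page) chosen because it reproduces the quoted 147,456 bytes/token with 36 layers at BF16:

```python
def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """K and V caches: 2 tensors x layers x KV heads x head dim x bytes/elem."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

def total_vram_gb(weights_gb: float, ctx_tokens: int,
                  kv_bytes_per_token: int, activations_gb: float = 0.5) -> float:
    """Rough serving footprint: weights + KV cache for ctx_tokens + activations."""
    return weights_gb + ctx_tokens * kv_bytes_per_token / 1e9 + activations_gb

kv = kv_cache_bytes_per_token(36, 8, 128)          # assumed GQA split
print(kv)                                          # 147456
print(round(total_vram_gb(8.0, 32_000, kv), 1))    # 13.2 GB for 32K cached tokens
```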
What is the best GPU for Qwen 3 4B?
The top recommended GPU for Qwen 3 4B is the A4000 at BF16 precision. It achieves approximately 302.4 tokens/sec at an estimated $161/month ($0.20/M tokens), with a recommendation score of 100/100.
How much does Qwen 3 4B inference cost?
Qwen 3 4B API inference starts from $0.10/M input tokens and $0.10/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.