Qwen 3 Coder 8B
Alibaba · dense · 8.2B parameters · 131,072-token context
Parameters
8.2B
Context Window
128K tokens
Architecture
Dense
Best GPU
A30
Cheapest API
$0.15/M
Quality Score
74/100
Intelligence Brief
Qwen 3 Coder 8B is an 8.2B-parameter dense model from Alibaba, featuring Grouped Query Attention (GQA) with 36 layers and a 4,096 hidden dimension. With a 131,072-token context window, it supports tool use, structured output, code, and math. On standardized benchmarks it scores MMLU 72 and HumanEval 78. The most cost-effective API deployment is via Alibaba at $0.15/M output tokens; for self-hosted inference, the A30 delivers the best throughput per dollar at $332/month.
Architecture Details
Memory Requirements
BF16 Weights
16.4 GB
FP8 Weights
8.2 GB
INT4 Weights
4.1 GB
GPU Compatibility Matrix
Qwen 3 Coder 8B runs on 90% of the GPU configurations evaluated (41 GPUs at 3 precision levels).
GPU Recommendations
| Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| BF16 · 1 GPU · vllm | 100/100 | 307.2 tok/s | 3.3 ms | 1 ms | $332 | $0.41 |
| BF16 · 1 GPU · vllm | 100/100 | 331.9 tok/s | 3.0 ms | 1 ms | $370 | $0.42 |
| BF16 · 1 GPU · vllm | 100/100 | 308.2 tok/s | 3.2 ms | 1 ms | $180 | $0.22 |
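The self-hosted $/M-token figures above follow directly from monthly cost and sustained throughput. A minimal sketch of that arithmetic, assuming an average-length month (365.25/12 days) and 100% utilization — the page's exact accounting is not stated, so listed figures may differ by a cent:

```python
SECONDS_PER_MONTH = 365.25 / 12 * 86_400  # average month, an assumption

def cost_per_million_tokens(monthly_usd: float, tokens_per_sec: float,
                            utilization: float = 1.0) -> float:
    """Dollars per million generated tokens for a self-hosted GPU,
    assuming it sustains tokens_per_sec at the given utilization."""
    tokens_per_month = tokens_per_sec * SECONDS_PER_MONTH * utilization
    return monthly_usd / (tokens_per_month / 1e6)

# A30 at BF16: $332/month at ~307.2 tok/s
print(round(cost_per_million_tokens(332, 307.2), 2))
```

Lower utilization raises the effective per-token cost proportionally, which is why a mostly idle GPU is rarely cheaper than an API.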
Deployment Options
- **API Deployment** — alibaba, $0.15/M output tokens
- **Single GPU** — A30, $332/mo (min VRAM: 8 GB)
- **Multi-GPU** — RTX 3060 x2 (TP), 186.2 tok/s, $114/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| alibaba | $0.15 | $0.15 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| alibaba (Best Value) | $0.15 | $0.15 | $2 |
Cost per 1,000 Requests
| Request size | Cost | Provider |
|---|---|---|
| Short (500 tok) | $0.10 | alibaba |
| Medium (2K tok) | $0.42 | alibaba |
| Long (8K tok) | $1.50 | alibaba |
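These per-1,000-request figures combine input and output token charges. A hedged sketch of the calculation — the page does not state the assumed prompt lengths, so the input token counts below are guesses chosen to illustrate the formula:

```python
INPUT_PRICE = 0.15   # $/M input tokens (alibaba)
OUTPUT_PRICE = 0.15  # $/M output tokens (alibaba)

def cost_per_1k_requests(input_tokens: int, output_tokens: int) -> float:
    """USD cost of 1,000 requests with the given per-request token counts."""
    million_in = input_tokens * 1_000 / 1e6
    million_out = output_tokens * 1_000 / 1e6
    return million_in * INPUT_PRICE + million_out * OUTPUT_PRICE

# "Medium" request: assumed 800-token prompt, 2,000-token completion
print(round(cost_per_1k_requests(800, 2_000), 2))
```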
Performance Estimates
Throughput by GPU
VRAM Breakdown (A30, BF16)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 16.4 GB | ~307.2 tok/s |
| FP8 | 8.2 GB | — |
| INT4 | 4.1 GB | — |
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Qwen 3 Coder 8B
Self-Hosted Infrastructure
Similar Models
| Model | Params | Architecture | Quality | API from |
|---|---|---|---|---|
| Qwen 3 8B | 8.2B | dense | 70 | $0.20/M |
| Llama 3.1 8B | 8.03B | dense | 58 | $0.08/M |
| Hermes 3 8B | 8.03B | dense | 50 | $0.18/M |
| Aya 23 8B | 8B | dense | 50 | $0.60/M |
| DeepSeek R1 Distill 8B | 8B | dense | 88 | $0.20/M |
Frequently Asked Questions
How much VRAM does Qwen 3 Coder 8B need for inference?
Qwen 3 Coder 8B requires approximately 16.4 GB of VRAM at BF16 precision, 8.2 GB at FP8, or 4.1 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (147,456 bytes per token) and activations (~0.5 GB).
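The answer above amounts to weights + KV-cache + activation scratch. A rough sketch of that estimate, using the per-token KV-cache figure quoted for this model (the activation allowance is a flat approximation, and real serving stacks add framework overhead on top):

```python
def vram_gb(params_b: float, bytes_per_weight: float,
            context_tokens: int = 0,
            kv_bytes_per_token: int = 147_456,
            activations_gb: float = 0.5) -> float:
    """Rough VRAM estimate in GB: weights + KV-cache + activations.
    params_b is parameter count in billions; bytes_per_weight is
    2 for BF16, 1 for FP8, 0.5 for INT4."""
    weights_gb = params_b * bytes_per_weight
    kv_gb = context_tokens * kv_bytes_per_token / 1e9
    return weights_gb + kv_gb + activations_gb

print(round(vram_gb(8.2, 2), 1))        # BF16 weights, empty cache
print(round(vram_gb(8.2, 2, 8_192), 1)) # BF16 with an 8K-token KV cache
```

Note that the KV cache dominates at long context: filling the full 131,072-token window costs about 19 GB on its own, more than the BF16 weights.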
What is the best GPU for Qwen 3 Coder 8B?
The top recommended GPU for Qwen 3 Coder 8B is the A30 using BF16 precision. It achieves approximately 307.2 tokens/sec at an estimated cost of $332/month ($0.41/M tokens). Score: 100/100.
How much does Qwen 3 Coder 8B inference cost?
Qwen 3 Coder 8B API inference starts from $0.15/M input tokens and $0.15/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.