Qwen 3 235B
Alibaba · MoE · 235B parameters · 131,072-token context
Parameters
235B
Context Window
128K tokens
Architecture
MoE
Best GPU
B200 SXM
Cheapest API
$3.00/M
Quality Score
83/100
Intelligence Brief
Qwen 3 235B is a 235B-parameter Mixture-of-Experts model (128 experts, 8 active per token) from Alibaba, featuring Grouped Query Attention (GQA) across 94 layers with a hidden dimension of 5,120. With a 131,072-token context window, it supports tool use, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 88, HumanEval 62, and GSM8K 94. The most cost-effective API deployment is via together at $3.00/M output tokens. For self-hosted inference, B200 SXM (x2) delivers optimal throughput at $8,522/month.
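The "128 experts, 8 active" figure means a router selects the top-8 expert FFNs per token, so only a fraction of the 235B weights participates in each forward pass. A minimal sketch of that top-k routing step, using the expert count and hidden size from the spec sheet (illustrative shapes only, not Qwen's actual implementation):

```python
import numpy as np

N_EXPERTS, TOP_K, HIDDEN = 128, 8, 5120  # figures from the spec sheet above

def route(token_hidden: np.ndarray, router_w: np.ndarray):
    """Pick the top-k experts for one token and softmax their gate scores."""
    logits = router_w @ token_hidden                  # (128,) one score per expert
    top = np.argsort(logits)[-TOP_K:]                 # indices of the 8 best experts
    gates = np.exp(logits[top] - logits[top].max())   # stable softmax over the top-k
    return top, gates / gates.sum()                   # expert ids, mixture weights

rng = np.random.default_rng(0)
experts, gates = route(rng.standard_normal(HIDDEN),
                       rng.standard_normal((N_EXPERTS, HIDDEN)))
print(experts, gates.round(3))  # 8 of 128 experts handle this token
```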
Architecture Details
Memory Requirements
BF16 Weights
470.0 GB
FP8 Weights
235.0 GB
INT4 Weights
117.5 GB
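These figures follow directly from parameter count × bytes per weight (2 bytes for BF16, 1 for FP8, 0.5 for INT4). A quick sanity check:

```python
PARAMS = 235e9  # 235B parameters

def weight_gb(bytes_per_param: float) -> float:
    """Weight memory in decimal GB, matching the table above."""
    return PARAMS * bytes_per_param / 1e9

for name, b in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{name}: {weight_gb(b):.1f} GB")
# BF16: 470.0 GB, FP8: 235.0 GB, INT4: 117.5 GB
```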
Fits on (single GPU) — most practical first
Fits on (multi-GPU with Tensor Parallelism)
Multi-GPU configurations use Tensor Parallelism (TP) to split model layers across GPUs. Requires NVLink or NVSwitch interconnect for optimal performance.
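As one concrete example, vLLM exposes tensor parallelism as a single constructor argument. A hedged sketch, assuming two NVLink-connected GPUs; the model ID and FP8 support depend on your checkpoint and vLLM version:

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face repo name for this release; adjust to your checkpoint.
llm = LLM(
    model="Qwen/Qwen3-235B-A22B",
    tensor_parallel_size=2,   # shard each layer across both GPUs
    quantization="fp8",       # match the FP8 memory figures above
)
out = llm.generate(["Explain tensor parallelism in one sentence."],
                   SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```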
GPU Compatibility Matrix
Qwen 3 235B is compatible with 8% of the evaluated GPU configurations (41 GPUs × 3 precision levels).
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $8,522 | $11.58 |
| FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $8,541 | $11.61 |
| FP8 · 2 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $12,337 | $16.77 |
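The Cost/M Tokens column is consistent with monthly cost divided by tokens generated at the quoted throughput, assuming ~730 hours of full utilization per month (our assumption; the page does not state its utilization model):

```python
HOURS_PER_MONTH = 730  # assumed: 24 h/day x ~30.4 days, 100% utilization

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 3600 * HOURS_PER_MONTH   # ~735.8M at 280 tok/s
    return monthly_usd / (tokens_per_month / 1e6)

for monthly in (8522, 8541, 12337):
    print(f"${monthly}/mo -> ${cost_per_m_tokens(monthly, 280.0):.2f}/M tokens")
# $11.58, $11.61, $16.77 -- matching the table above
```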
Deployment Options
API Deployment
together
$3.00/M
output tokens
Single GPU
B200 NVL (pair)
$9,965/mo
Min VRAM: 235 GB
Multi-GPU
B200 SXM x2
280.0 tok/s
TP · $8,522/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $1.50 | $3.00 | Cheapest |
| fireworks | $1.80 | $3.50 | — |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $1.50 | $3.00 | $23 |
| fireworks | $1.80 | $3.50 | $27 |
Cost per 1,000 Requests
Short (500 tok)
$1.35
via together
Medium (2K tok)
$5.40
via together
Long (8K tok)
$18.00
via together
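These per-request figures are consistent with together's $1.50/$3.00 pricing under roughly an 80% output / 20% input token split for the short and medium scenarios; the long scenario appears to assume a more input-heavy 50/50 mix. Both splits are our inference, not stated on the page. A hedged sketch:

```python
IN_PRICE, OUT_PRICE = 1.50, 3.00  # together, $/M input and output tokens

def cost_per_1k_requests(tokens_per_request: int, output_frac: float) -> float:
    total_m = 1000 * tokens_per_request / 1e6   # total tokens, in millions
    return total_m * (output_frac * OUT_PRICE + (1 - output_frac) * IN_PRICE)

print(cost_per_1k_requests(500, 0.8))   # 1.35  (short, assumed 80% output)
print(cost_per_1k_requests(2000, 0.8))  # 5.40  (medium, same assumption)
print(cost_per_1k_requests(8000, 0.5))  # 18.00 (long, assumed 50/50 split)
```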
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, FP8)
Precision Impact
| Precision | Weights/GPU (2× TP) | Est. Throughput |
|---|---|---|
| bf16 | 235.0 GB | — |
| fp8 | 117.5 GB | ~280.0 tok/s |
| int4 | 58.8 GB | — |
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Qwen 3 235B
Self-Hosted Infrastructure
Similar Models
DeepSeek Coder V2 236B
236B params · moe
Quality: 50
from $0.28/M
DeepSeek V2.5
236B params · moe
Quality: 78
from $0.28/M
Nemotron Ultra 253B
253B params · dense
Quality: 86
from $6.00/M
Claude Opus 4
200B params · dense
Quality: 90
from $75.00/M
GPT-4o
200B params · moe
Quality: 85
from $10.00/M
Frequently Asked Questions
How much VRAM does Qwen 3 235B need for inference?
Qwen 3 235B requires approximately 470.0 GB of VRAM at BF16 precision, 235.0 GB at FP8, or 117.5 GB at INT4 quantization. Additional VRAM is needed for KV-cache (192,512 bytes per token) and activations (~3.00 GB).
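The 192,512 bytes/token figure is consistent with BF16 K and V caches across the model's 94 layers with 4 KV heads of dimension 128; the KV-head count and head dimension are our inference from the GQA spec, not stated above:

```python
LAYERS, KV_HEADS, HEAD_DIM, DTYPE_BYTES = 94, 4, 128, 2  # BF16; heads/dim assumed

kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * DTYPE_BYTES  # K and V caches
print(kv_per_token)                  # 192512 bytes, matching the FAQ figure
print(kv_per_token * 131_072 / 1e9)  # ~25.2 GB for a full 131,072-token context
```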
What is the best GPU for Qwen 3 235B?
The top recommended GPU for Qwen 3 235B is the B200 SXM (x2) using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $8,522/month ($11.58/M tokens). Score: 100/100.
How much does Qwen 3 235B inference cost?
Qwen 3 235B API inference starts from $1.50/M input tokens and $3.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
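As a rough break-even sketch using only the figures on this page (output-token pricing only, full utilization assumed; a real comparison should also account for input tokens and idle time):

```python
API_OUT = 3.00       # together, $/M output tokens
SELF_HOST = 8522     # B200 SXM x2, $/month
CAPACITY_M = 280.0 * 3600 * 730 / 1e6   # ~736M tokens/month at 280 tok/s

breakeven_m = SELF_HOST / API_OUT        # ~2,841M tokens/month to match API cost
print(f"break-even: {breakeven_m:.0f}M tokens/month")
print(f"one 2-GPU node caps out at {CAPACITY_M:.0f}M tokens/month")
# At these numbers a single node cannot reach break-even against the $3.00/M API;
# self-hosting pays off mainly with higher batch throughput or multiple replicas.
```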