
Qwen 3 30B-A3B vs Qwen 2.5 7B

Qwen 3 30B-A3B (Alibaba · 30.5B params · Quality: 70)

Qwen 2.5 7B (Alibaba · 7.6B params · Quality: 70)

Architecture Comparison

| Spec | Qwen 3 30B-A3B | Qwen 2.5 7B |
|---|---|---|
| Type | MoE | Dense |
| Total Parameters | 30.5B | 7.6B |
| Active Parameters | 3.3B | 7.6B |
| Layers | 48 | 28 |
| Hidden Dimension | 2,048 | 3,584 |
| Attention Heads | 32 | 28 |
| KV Heads | 4 | 4 |
| Context Length | 131,072 | 131,072 |
| Precision (default) | BF16 | BF16 |
| Total Experts | 128 | N/A |
| Active Experts | 8 | N/A |
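The per-token KV-cache size in the Memory Requirements section follows from the layer count and KV-head count above. A minimal sketch of the standard formula, assuming a head dimension of 128 (not listed in the spec table) and BF16 cache precision:

```python
def kv_cache_bytes_per_token(layers, kv_heads, head_dim=128, dtype_bytes=2):
    """Per-token KV-cache size: keys + values, across all layers.

    head_dim=128 is an assumed value (not listed in the spec table);
    dtype_bytes=2 corresponds to BF16.
    """
    return 2 * layers * kv_heads * head_dim * dtype_bytes  # 2 = keys + values

# Qwen 2.5 7B: 28 layers, 4 KV heads
print(kv_cache_bytes_per_token(28, 4))  # 57344
```

Under these assumptions the result reproduces the 57,344 B figure listed for Qwen 2.5 7B; actual serving frameworks may differ in cache dtype or head dimension.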

Memory Requirements

| Precision | Qwen 3 30B-A3B | Qwen 2.5 7B |
|---|---|---|
| BF16 Weights | 61.0 GB | 15.2 GB |
| FP8 Weights | 30.5 GB | 7.6 GB |
| INT4 Weights | 15.3 GB | 3.8 GB |
| KV-Cache / Token | 24,576 B | 57,344 B |
| Activation Estimate | 0.50 GB | 1.00 GB |
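The weight rows above are simply parameter count times storage width. A quick sketch of that arithmetic:

```python
def weight_memory_gb(params_billion, bits_per_param):
    """Approximate weight memory: parameter count times storage width per parameter."""
    return params_billion * bits_per_param / 8  # GB, taking 1B params * 1 byte = 1 GB

# Qwen 3 30B-A3B in BF16 (16 bits/param)
print(weight_memory_gb(30.5, 16))  # 61.0
# Qwen 2.5 7B in INT4 (4 bits/param)
print(weight_memory_gb(7.6, 4))    # 3.8
```

Note that quantized checkpoints carry some extra overhead (scales, zero points), so real INT4 files run slightly larger than this estimate.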

Minimum GPUs Needed (BF16)

| GPU | Qwen 3 30B-A3B | Qwen 2.5 7B |
|---|---|---|
| H100 SXM | 1 GPU | 1 GPU |
| L40S | 2 GPUs | 1 GPU |
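The GPU counts above can be derived from the memory figures. A minimal sketch, assuming the standard memory sizes of 80 GB for H100 SXM and 48 GB for L40S (these capacities are general GPU specs, not taken from this page):

```python
import math

def gpus_needed(weight_gb, activation_gb, gpu_memory_gb):
    """Minimum GPUs required to hold weights plus the activation estimate.

    Ignores KV-cache growth and framework overhead, so treat this as a floor.
    """
    return math.ceil((weight_gb + activation_gb) / gpu_memory_gb)

# Qwen 3 30B-A3B in BF16 on L40S (48 GB): ceil(61.5 / 48)
print(gpus_needed(61.0, 0.5, 48))  # 2
# Qwen 2.5 7B in BF16 on H100 SXM (80 GB)
print(gpus_needed(15.2, 1.0, 80))  # 1
```

In practice you would also budget for the KV cache at your target batch size and context length, which can exceed the weights at long contexts.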

Quality Benchmarks

| Benchmark | Qwen 3 30B-A3B | Qwen 2.5 7B |
|---|---|---|
| Overall | 70 | 70 |
| MMLU | 75.0 | 74.2 |
| HumanEval | 48.0 | 42.8 |
| GSM8K | 80.0 | 82.0 |
| MT-Bench | 78.0 | 79.0 |


Capabilities

| Feature | Qwen 3 30B-A3B | Qwen 2.5 7B |
|---|---|---|
| Tool Use | ✓ Yes | ✓ Yes |
| Vision | ✗ No | ✗ No |
| Code | ✓ Yes | ✓ Yes |
| Math | ✓ Yes | ✓ Yes |
| Reasoning | ✓ Yes | ✗ No |
| Multilingual | ✓ Yes | ✓ Yes |
| Structured Output | ✓ Yes | ✓ Yes |

API Pricing Comparison

Cheapest Output (Qwen 3 30B-A3B): N/A

Cheapest Output (Qwen 2.5 7B): $0.20/M (Input: $0.20/M)

| Provider | Qwen 3 30B-A3B In $/M | Qwen 3 30B-A3B Out $/M | Qwen 2.5 7B In $/M | Qwen 2.5 7B Out $/M |
|---|---|---|---|---|
| together | N/A | N/A | $0.20 | $0.20 |
| fireworks | N/A | N/A | $0.20 | $0.20 |
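Per-million-token prices translate to per-request cost in the obvious way. A small sketch (the token counts in the example are hypothetical, chosen only for illustration):

```python
def request_cost_usd(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost of one request given per-million-token input/output prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Qwen 2.5 7B at $0.20/M input and $0.20/M output,
# for a hypothetical request with 10k prompt tokens and 2k completion tokens:
cost = request_cost_usd(10_000, 2_000, 0.20, 0.20)
print(f"${cost:.4f}")  # $0.0024
```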

Recommendation Summary

  • Qwen 2.5 7B has a smaller memory footprint (15.2 GB vs 61.0 GB in BF16), making it easier to deploy on fewer GPUs.
  • Qwen 3 30B-A3B uses a Mixture-of-Experts (MoE) architecture while Qwen 2.5 7B is dense; MoE models activate only a fraction of their parameters per token (3.3B of 30.5B here), improving inference efficiency.
  • Qwen 3 30B-A3B is stronger at code generation (HumanEval: 48.0 vs 42.8).
  • Qwen 2.5 7B scores higher on math reasoning (GSM8K: 82.0 vs 80.0).
