
DeepSeek V3 vs Qwen 2.5 72B

DeepSeek V3: DeepSeek · 671B params · Quality: 86

Qwen 2.5 72B: Alibaba · 72.7B params · Quality: 84

Architecture Comparison

| Spec | DeepSeek V3 | Qwen 2.5 72B |
| --- | --- | --- |
| Type | MoE | Dense |
| Total Parameters | 671B | 72.7B |
| Active Parameters | 37B | 72.7B |
| Layers | 61 | 80 |
| Hidden Dimension | 7,168 | 8,192 |
| Attention Heads | 128 | 64 |
| KV Heads | 1 | 8 |
| Context Length | 131,072 | 131,072 |
| Precision (default) | BF16 | BF16 |
| Total Experts | 256 | N/A |
| Active Experts | 8 | N/A |
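As a sanity check on the per-token KV-cache figures, the cache size for a standard grouped-query-attention (GQA) transformer follows directly from the specs above. This is a minimal sketch: the formula applies to Qwen 2.5 72B's GQA layout, while DeepSeek V3 uses multi-head latent attention (MLA), which compresses the KV cache well below what this formula would predict.

```python
# Per-token KV-cache size for a standard GQA transformer:
# 2 (K and V) x layers x kv_heads x head_dim x bytes per value.
def kv_cache_bytes_per_token(layers, kv_heads, head_dim, bytes_per_value=2):
    return 2 * layers * kv_heads * head_dim * bytes_per_value

# Qwen 2.5 72B: head_dim = hidden dim / attention heads = 8192 / 64 = 128
qwen = kv_cache_bytes_per_token(layers=80, kv_heads=8, head_dim=8192 // 64)
print(qwen)  # 327680, matching the memory table's 327,680 B
```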

Memory Requirements

| Metric | DeepSeek V3 | Qwen 2.5 72B |
| --- | --- | --- |
| BF16 Weights | 1,342.0 GB | 145.4 GB |
| FP8 Weights | 671.0 GB | 72.7 GB |
| INT4 Weights | 335.5 GB | 36.4 GB |
| KV-Cache / Token | 31,232 B | 327,680 B |
| Activation Estimate | 3.00 GB | 2.50 GB |
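The weight rows scale linearly with parameter count and precision, so they can be reproduced with one line of arithmetic. A minimal sketch, using GB = params (billions) × bytes per parameter:

```python
# Bytes per parameter at each precision.
BYTES = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_memory_gb(params_b, precision):
    """Weight memory in GB for a model with params_b billion parameters."""
    return params_b * BYTES[precision]

print(weight_memory_gb(671.0, "bf16"))  # 1342.0
print(weight_memory_gb(72.7, "bf16"))   # 145.4
print(weight_memory_gb(671.0, "int4"))  # 335.5
```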

Minimum GPUs Needed (BF16)

| GPU | DeepSeek V3 | Qwen 2.5 72B |
| --- | --- | --- |
| H100 SXM (80 GB) | N/A | 3 GPUs |
| L40S (48 GB) | N/A | 4 GPUs |
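The GPU counts above are consistent with fitting the BF16 weights while leaving headroom for KV cache, activations, and framework overhead. A sketch of that estimate, assuming roughly 80% of VRAM is usable for weights (the 80% factor is an assumption for illustration, not a vendor figure):

```python
import math

def min_gpus(weights_gb, gpu_vram_gb, usable_fraction=0.8):
    """Smallest GPU count whose usable VRAM holds the model weights."""
    return math.ceil(weights_gb / (gpu_vram_gb * usable_fraction))

print(min_gpus(145.4, 80))  # 3  (Qwen 2.5 72B on H100 SXM, 80 GB)
print(min_gpus(145.4, 48))  # 4  (Qwen 2.5 72B on L40S, 48 GB)
```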

Quality Benchmarks

| Benchmark | DeepSeek V3 | Qwen 2.5 72B |
| --- | --- | --- |
| Overall | 86 | 84 |
| MMLU | 87.1 | 85.3 |
| HumanEval | 65.0 | 56.0 |
| GSM8K | 89.3 | 91.6 |
| MT-Bench | 87.0 | 86.0 |

Capabilities

| Feature | DeepSeek V3 | Qwen 2.5 72B |
| --- | --- | --- |
| Tool Use | ✓ Yes | ✓ Yes |
| Vision | ✗ No | ✗ No |
| Code | ✓ Yes | ✓ Yes |
| Math | ✓ Yes | ✓ Yes |
| Reasoning | ✗ No | ✗ No |
| Multilingual | ✓ Yes | ✓ Yes |
| Structured Output | ✓ Yes | ✓ Yes |

API Pricing Comparison

Cheapest Output (DeepSeek V3): $0.42/M (Input: $0.28/M)

Cheapest Output (Qwen 2.5 72B): $0.90/M (Input: $0.90/M)

| Provider | DeepSeek V3 In $/M | DeepSeek V3 Out $/M | Qwen 2.5 72B In $/M | Qwen 2.5 72B Out $/M |
| --- | --- | --- | --- | --- |
| deepseek | $0.28 | $0.42 | N/A | N/A |
| together | $0.50 | $2.80 | $0.90 | $0.90 |
| fireworks | N/A | N/A | $0.90 | $0.90 |
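Per-million-token prices multiply out directly for any workload, so the cheapest provider depends on the input/output mix. A sketch using an arbitrary example workload of 10M input and 2M output tokens (the workload size is a hypothetical, not from the source):

```python
def cost_usd(in_tokens_m, out_tokens_m, in_price, out_price):
    """API cost in USD given millions of tokens and $/M prices."""
    return in_tokens_m * in_price + out_tokens_m * out_price

# Cheapest listed prices for each model.
deepseek_v3 = cost_usd(10, 2, 0.28, 0.42)  # deepseek API
qwen_72b = cost_usd(10, 2, 0.90, 0.90)     # together / fireworks
print(round(deepseek_v3, 2), round(qwen_72b, 2))  # 3.64 10.8
```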

Recommendation Summary

  • DeepSeek V3 scores higher on overall quality (86 vs 84).
  • DeepSeek V3 is cheaper per output token ($0.42/M vs $0.90/M).
  • Qwen 2.5 72B has a smaller memory footprint (145.4 GB vs 1342.0 GB BF16), making it easier to deploy on fewer GPUs.
  • DeepSeek V3 uses a Mixture-of-Experts (MoE) architecture while Qwen 2.5 72B is dense. MoE models activate only a subset of parameters per token (37B of 671B here), reducing per-token inference compute, though all 671B parameters must still be held in memory.
  • DeepSeek V3 is stronger at code generation (HumanEval: 65.0 vs 56.0).
  • Qwen 2.5 72B is better at math reasoning (GSM8K: 91.6 vs 89.3).
