
Mistral Large 2 vs Llama 3.1 70B

Mistral Large 2 (Mistral AI) · 123B params · Quality: 82

Llama 3.1 70B (Meta) · 70.6B params · Quality: 82

Architecture Comparison

| Spec | Mistral Large 2 | Llama 3.1 70B |
|---|---|---|
| Type | Dense | Dense |
| Total Parameters | 123B | 70.6B |
| Active Parameters | 123B | 70.6B |
| Layers | 88 | 80 |
| Hidden Dimension | 12,288 | 8,192 |
| Attention Heads | 96 | 64 |
| KV Heads | 8 | 8 |
| Context Length | 131,072 | 131,072 |
| Precision (default) | BF16 | BF16 |

Memory Requirements

| Precision | Mistral Large 2 | Llama 3.1 70B |
|---|---|---|
| BF16 Weights | 246.0 GB | 141.2 GB |
| FP8 Weights | 123.0 GB | 70.6 GB |
| INT4 Weights | 61.5 GB | 35.3 GB |
| KV-Cache / Token | 360,448 B | 327,680 B |
| Activation Estimate | 3.50 GB | 2.50 GB |
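The weight and KV-cache figures above follow directly from the architecture table. A minimal sketch of the arithmetic, assuming 1 GB = 10^9 bytes, BF16 values (2 bytes each), and head_dim = hidden_dimension / attention_heads = 128 for both models:

```python
def weight_memory_gb(params_billions, bytes_per_param):
    """Weight memory in GB (1 GB = 1e9 bytes), ignoring runtime overhead."""
    return params_billions * bytes_per_param

def kv_cache_bytes_per_token(layers, kv_heads, head_dim, bytes_per_value=2):
    """Per-token KV-cache: one K and one V vector per layer, BF16 by default."""
    return 2 * layers * kv_heads * head_dim * bytes_per_value

# Mistral Large 2: 123B params, 88 layers, 8 KV heads, head_dim = 12288 / 96 = 128
print(weight_memory_gb(123, 2))              # BF16 -> 246.0 GB
print(kv_cache_bytes_per_token(88, 8, 128))  # -> 360448 B

# Llama 3.1 70B: 70.6B params, 80 layers, 8 KV heads, head_dim = 8192 / 64 = 128
print(weight_memory_gb(70.6, 2))             # BF16 -> 141.2 GB
print(kv_cache_bytes_per_token(80, 8, 128))  # -> 327680 B
```

The FP8 and INT4 rows are the same calculation with 1 and 0.5 bytes per parameter, respectively.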

Minimum GPUs Needed (BF16)

| GPU | Mistral Large 2 | Llama 3.1 70B |
|---|---|---|
| H100 SXM (80 GB) | 4 GPUs | 3 GPUs |
| L40S (48 GB) | 7 GPUs | 4 GPUs |
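The GPU counts can be approximated by dividing total memory (BF16 weights plus the activation estimate) by usable memory per device. A rough sketch, assuming 80 GB per H100 SXM, 48 GB per L40S, and ~85% usable memory per GPU; the 85% headroom factor is an assumption (reserve for KV-cache, fragmentation, and framework overhead), not a figure from this page:

```python
import math

def min_gpus(weights_gb, activations_gb, gpu_mem_gb, usable_fraction=0.85):
    """Rough minimum GPU count for serving.

    usable_fraction is an assumed headroom factor; real counts depend on the
    serving stack, parallelism strategy, and target context length.
    """
    total_gb = weights_gb + activations_gb
    return math.ceil(total_gb / (gpu_mem_gb * usable_fraction))

print(min_gpus(246.0, 3.5, 80))  # Mistral Large 2 on H100 SXM -> 4
print(min_gpus(141.2, 2.5, 80))  # Llama 3.1 70B on H100 SXM -> 3
print(min_gpus(246.0, 3.5, 48))  # Mistral Large 2 on L40S -> 7
print(min_gpus(141.2, 2.5, 48))  # Llama 3.1 70B on L40S -> 4
```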

Quality Benchmarks

| Benchmark | Mistral Large 2 | Llama 3.1 70B |
|---|---|---|
| Overall | 82 | 82 |
| MMLU | 84.0 | 83.6 |
| HumanEval | 53.0 | 58.5 |
| GSM8K | 91.2 | 93.0 |
| MT-Bench | 84.0 | 85.0 |


Capabilities

| Feature | Mistral Large 2 | Llama 3.1 70B |
|---|---|---|
| Tool Use | ✓ Yes | ✓ Yes |
| Vision | ✗ No | ✗ No |
| Code | ✓ Yes | ✓ Yes |
| Math | ✓ Yes | ✓ Yes |
| Reasoning | ✗ No | ✗ No |
| Multilingual | ✓ Yes | ✓ Yes |
| Structured Output | ✓ Yes | ✓ Yes |

API Pricing Comparison

Cheapest output, Mistral Large 2: $2.50/M (input: $2.50/M)

Cheapest output, Llama 3.1 70B: $0.79/M (input: $0.59/M)

| Provider | Mistral Large 2 In $/M | Mistral Large 2 Out $/M | Llama 3.1 70B In $/M | Llama 3.1 70B Out $/M |
|---|---|---|---|---|
| groq | n/a | n/a | $0.59 | $0.79 |
| together | $2.50 | $2.50 | $0.88 | $0.88 |
| fireworks | n/a | n/a | $0.90 | $0.90 |
| mistral | $2.00 | $6.00 | n/a | n/a |
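Per-token prices translate into workload cost by simple multiplication. A small sketch comparing the mistral and groq rates above; the 10M-input / 2M-output workload is an illustrative assumption:

```python
def job_cost_usd(input_mtok, output_mtok, in_price, out_price):
    """Cost in dollars for a workload, given token counts in millions."""
    return input_mtok * in_price + output_mtok * out_price

# Example workload: 10M input tokens, 2M output tokens
print(job_cost_usd(10, 2, 2.00, 6.00))           # Mistral Large 2 via mistral -> 32.0
print(round(job_cost_usd(10, 2, 0.59, 0.79), 2)) # Llama 3.1 70B via groq -> 7.48
```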

Recommendation Summary

  • Llama 3.1 70B is cheaper per output token ($0.79/M vs $2.50/M).
  • Llama 3.1 70B has a smaller memory footprint (141.2 GB vs 246.0 GB BF16), making it easier to deploy on fewer GPUs.
  • Llama 3.1 70B is stronger at code generation (HumanEval: 58.5 vs 53.0).
  • Llama 3.1 70B is better at math reasoning (GSM8K: 93.0 vs 91.2).
