Mistral Large 2 vs Llama 3.1 8B

Mistral Large 2

Mistral AI · 123B params · Quality: 82

Llama 3.1 8B

Meta · 8.03B params · Quality: 65

Architecture Comparison

| Spec | Mistral Large 2 | Llama 3.1 8B |
|---|---|---|
| Type | Dense | Dense |
| Total Parameters | 123B | 8.03B |
| Active Parameters | 123B | 8.03B |
| Layers | 88 | 32 |
| Hidden Dimension | 12,288 | 4,096 |
| Attention Heads | 96 | 32 |
| KV Heads | 8 | 8 |
| Context Length | 131,072 | 131,072 |
| Precision (default) | BF16 | BF16 |
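The weight-memory figures in the next section follow directly from parameter count times bytes per parameter. A minimal sketch in decimal gigabytes (the function name is illustrative):

```python
# Bytes stored per weight at each precision.
BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def weight_memory_gb(params_billion: float, precision: str) -> float:
    """Weights-only footprint in decimal GB (excludes KV cache and activations)."""
    return params_billion * BYTES_PER_PARAM[precision]

print(weight_memory_gb(123, "BF16"))   # 246.0 GB for Mistral Large 2
print(weight_memory_gb(8.03, "BF16"))  # 16.06 GB (~16.1) for Llama 3.1 8B
```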

Memory Requirements

| Precision | Mistral Large 2 | Llama 3.1 8B |
|---|---|---|
| BF16 Weights | 246.0 GB | 16.1 GB |
| FP8 Weights | 123.0 GB | 8.0 GB |
| INT4 Weights | 61.5 GB | 4.0 GB |
| KV-Cache / Token | 360,448 B (352 KB) | 131,072 B (128 KB) |
| Activation Estimate | 3.50 GB | 1.00 GB |
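The per-token KV-cache figures above can be reproduced from the architecture table: 2 tensors (K and V) × layers × KV heads × head dimension × 2 bytes for a BF16 cache, assuming head dimension is hidden dimension divided by attention heads. A sketch (function name is illustrative):

```python
def kv_cache_bytes_per_token(layers: int, kv_heads: int, hidden_dim: int,
                             attn_heads: int, bytes_per_elem: int = 2) -> int:
    """Per-token KV-cache size; assumes head_dim = hidden_dim / attn_heads
    and a BF16 (2-byte) cache."""
    head_dim = hidden_dim // attn_heads
    return 2 * layers * kv_heads * head_dim * bytes_per_elem  # 2 = K and V

print(kv_cache_bytes_per_token(88, 8, 12288, 96))  # 360448 (Mistral Large 2)
print(kv_cache_bytes_per_token(32, 8, 4096, 32))   # 131072 (Llama 3.1 8B)
```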

Minimum GPUs Needed (BF16)

| GPU | Mistral Large 2 | Llama 3.1 8B |
|---|---|---|
| H100 SXM (80 GB) | 4 GPUs | 1 GPU |
| L40S (48 GB) | 7 GPUs | 1 GPU |
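These GPU counts are consistent with a simple heuristic: divide BF16 weights plus the activation estimate by roughly 80% of each card's VRAM (leaving headroom for KV cache and fragmentation), then round up. This reproduces the table's numbers, though the site's exact method is not stated:

```python
import math

def min_gpus(model_gb: float, gpu_gb: float, usable_frac: float = 0.8) -> int:
    """Round up after reserving ~20% of VRAM for KV cache and overhead
    (the 0.8 headroom factor is an assumption, not a published figure)."""
    return math.ceil(model_gb / (gpu_gb * usable_frac))

# Mistral Large 2: 246.0 GB weights + 3.5 GB activations = 249.5 GB
print(min_gpus(249.5, 80))  # 4 on H100 SXM (80 GB)
print(min_gpus(249.5, 48))  # 7 on L40S (48 GB)
# Llama 3.1 8B: 16.1 + 1.0 = 17.1 GB
print(min_gpus(17.1, 80), min_gpus(17.1, 48))  # 1 1
```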

Quality Benchmarks

| Benchmark | Mistral Large 2 | Llama 3.1 8B |
|---|---|---|
| Overall | 82 | 65 |
| MMLU | 84.0 | 69.4 |
| HumanEval | 53.0 | 40.2 |
| GSM8K | 91.2 | 79.6 |
| MT-Bench | 84.0 | 78.0 |

Capabilities

| Feature | Mistral Large 2 | Llama 3.1 8B |
|---|---|---|
| Tool Use | ✓ Yes | ✓ Yes |
| Vision | ✗ No | ✗ No |
| Code | ✓ Yes | ✓ Yes |
| Math | ✓ Yes | ✓ Yes |
| Reasoning | ✗ No | ✗ No |
| Multilingual | ✓ Yes | ✓ Yes |
| Structured Output | ✓ Yes | ✓ Yes |

API Pricing Comparison

Cheapest output, Mistral Large 2: $2.50/M (input: $2.50/M, via together)

Cheapest output, Llama 3.1 8B: $0.08/M (input: $0.05/M, via groq)

| Provider | Mistral Large 2 In $/M | Mistral Large 2 Out $/M | Llama 3.1 8B In $/M | Llama 3.1 8B Out $/M |
|---|---|---|---|---|
| groq | n/a | n/a | $0.05 | $0.08 |
| together | $2.50 | $2.50 | $0.18 | $0.18 |
| fireworks | n/a | n/a | $0.20 | $0.20 |
| mistral | $2.00 | $6.00 | n/a | n/a |
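To compare real workload costs across providers, multiply token counts by the per-million rates. A quick sketch for a hypothetical request with 1,000 input and 500 output tokens:

```python
def request_cost(in_tokens: int, out_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    """Cost in dollars; prices are quoted in $ per million tokens."""
    return (in_tokens * in_per_m + out_tokens * out_per_m) / 1_000_000

print(request_cost(1000, 500, 2.00, 6.00))  # 0.005 (Mistral Large 2 via mistral)
print(request_cost(1000, 500, 0.05, 0.08))  # 9e-05 (Llama 3.1 8B via groq)
```

At these rates the gap compounds quickly: the same request is roughly 55x cheaper on Llama 3.1 8B via groq than on Mistral Large 2 via mistral.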

Recommendation Summary

  • Mistral Large 2 scores higher on overall quality (82 vs 65).
  • Llama 3.1 8B is cheaper per output token ($0.08/M vs $2.50/M).
  • Llama 3.1 8B has a smaller memory footprint (16.1 GB vs 246.0 GB BF16), making it easier to deploy on fewer GPUs.
  • Mistral Large 2 is stronger at code generation (HumanEval: 53.0 vs 40.2).
  • Mistral Large 2 is better at math reasoning (GSM8K: 91.2 vs 79.6).
