
Mixtral 8x7B Instruct

Mistral AI · MoE · 46.7B parameters · 32,768 context

Quality: 69.0

Architecture Details

Type: MoE
Total Parameters: 46.7B
Active Parameters: 12.9B
Layers: 32
Hidden Dimension: 4,096
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
Vocab Size: 32,000
Total Experts: 8
Active Experts: 2
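
These details map directly onto a config object when wiring the model into serving code. A minimal sketch in Python (the class and field names are illustrative, not taken from any particular library):

```python
from dataclasses import dataclass

@dataclass
class MixtralConfig:
    """Architecture details from the table above; names are illustrative."""
    total_params: float = 46.7e9
    active_params: float = 12.9e9   # parameters used per token (2 of 8 experts)
    num_layers: int = 32
    hidden_dim: int = 4096
    num_attention_heads: int = 32
    num_kv_heads: int = 8           # grouped-query attention
    head_dim: int = 128
    vocab_size: int = 32_000
    num_experts: int = 8
    experts_per_token: int = 2
    max_context: int = 32_768
```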

Memory Requirements

BF16 Weights: 93.4 GB
FP8 Weights: 46.7 GB
INT4 Weights: 23.4 GB
KV-Cache per Token: 131,072 bytes (128 KiB)
Activation Estimate: 1.50 GB
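
All of these figures are simple arithmetic over the architecture table: weights cost params × bytes-per-param, and the per-token KV cache holds one key and one value vector (KV heads × head dim) per layer. A short sketch that reproduces the numbers above:

```python
def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Weight footprint in decimal gigabytes."""
    return params * bytes_per_param / 1e9

def kv_cache_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                             dtype_bytes: int = 2) -> int:
    """Per-token KV cache: a key and a value vector per layer."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes

print(weight_memory_gb(46.7e9, 2.0))  # BF16 -> 93.4 GB
print(weight_memory_gb(46.7e9, 1.0))  # FP8  -> 46.7 GB
print(weight_memory_gb(46.7e9, 0.5))  # INT4 -> 23.35 GB (table rounds to 23.4)
print(kv_cache_bytes_per_token(32, 8, 128))  # -> 131072 bytes, matching the table
```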

Fits on (single-node)

B200 SXM · BF16
B100 SXM · BF16
GB200 NVL72 (per GPU) · BF16
GB300 NVL72 (per GPU) · BF16
H200 SXM · BF16
H100 SXM · FP8
H100 PCIe · FP8
H100 NVL · FP8
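
The fit check behind this list is a budget comparison: weights at the chosen precision, plus the activation estimate, plus headroom for KV caches, against a single GPU's memory. A rough sketch (the GPU memory sizes and the 10 GB KV budget are my assumptions, not values from this page):

```python
def fits_single_gpu(gpu_mem_gb: float, weights_gb: float,
                    activations_gb: float = 1.5,
                    kv_budget_gb: float = 10.0) -> bool:
    """Crude fit check: weights + activations + an assumed KV-cache
    budget for concurrent requests must fit in one GPU's memory."""
    return weights_gb + activations_gb + kv_budget_gb <= gpu_mem_gb

# Nominal memory sizes (assumed): H100 SXM ~80 GB, B200 SXM ~192 GB.
print(fits_single_gpu(80, 93.4))   # BF16 on H100 SXM -> False
print(fits_single_gpu(80, 46.7))   # FP8 on H100 SXM  -> True, as listed
print(fits_single_gpu(192, 93.4))  # BF16 on B200 SXM -> True, as listed
```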

GPU Recommendations

B200 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $4,261
Cost/M Tokens: $1.54

B100 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $4,271
Cost/M Tokens: $1.55

GB200 NVL72 (per GPU) (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Cost/Month: $6,169
Cost/M Tokens: $2.24
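
Cost per million tokens here is just monthly cost divided by tokens generated per month at the quoted throughput. A sketch of the arithmetic (the 730-hour month and full utilization are assumptions; since the page rounds throughput to 1.1K tok/s, the result lands near, rather than exactly on, the quoted $1.54):

```python
def cost_per_million_tokens(monthly_cost_usd: float, tokens_per_second: float,
                            hours_per_month: float = 730.0) -> float:
    """USD per 1M generated tokens at full utilization (assumed 730-hour month)."""
    tokens_per_month = tokens_per_second * hours_per_month * 3600
    return monthly_cost_usd / (tokens_per_month / 1e6)

# With the page's rounded 1.1K tok/s and the B200 SXM monthly cost:
print(round(cost_per_million_tokens(4261, 1100), 2))  # ~1.47 (page quotes $1.54)
```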

API Pricing Comparison

together: $0.60/M input · $0.60/M output (Cheapest)

Quality Benchmarks

MMLU: 72.6
HumanEval: 42.0
GSM8K: 76.0
MT-Bench: 78.0

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm · ollama

Supported Precisions

BF16 (default) · FP8 · INT4
