Mixtral 8x22B

Mistral AI · MoE · 141B parameters · 65,536-token context

Quality: 65.0
Parameters: 141B
Context Window: 64K tokens
Architecture: MoE
Best GPU: B100 SXM
Cheapest API: $1.20/M
Quality Score: 65/100

Intelligence Brief

Mixtral 8x22B is a 141B-parameter Mixture-of-Experts model (8 experts, 2 active per token) from Mistral AI, featuring Grouped Query Attention (GQA) with 56 layers and a 6,144 hidden dimension. With a 65,536-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it achieves MMLU 77.8, HumanEval 46.0, and GSM8K 78.4. The most cost-effective API deployment is via Together at $1.20/M output tokens. For self-hosted inference, the B100 SXM delivers optimal throughput at an estimated $4,271/month.

Architecture Details

Type: MoE
Total Parameters: 141B
Active Parameters: 39B
Layers: 56
Hidden Dimension: 6,144
Attention Heads: 48
KV Heads: 8
Head Dimension: 128
Vocab Size: 32,768
Total Experts: 8
Active Experts: 2
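
The architecture figures above fully determine the per-token KV-cache size quoted in the memory section. A minimal sketch of that arithmetic, assuming a BF16 KV-cache (2 bytes per element):

```python
# Per-token KV-cache = 2 (K and V) x layers x kv_heads x head_dim x bytes_per_element
layers = 56
kv_heads = 8        # GQA: 8 KV heads serve 48 query heads
head_dim = 128
bytes_per_elem = 2  # BF16 cache; an FP8 cache would halve this

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(kv_bytes_per_token)  # 229376, matching the "KV-Cache per Token" figure below
```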

Memory Requirements

BF16 Weights: 282.0 GB
FP8 Weights: 141.0 GB
INT4 Weights: 70.5 GB
KV-Cache per Token: 229,376 bytes
Activation Estimate: 2.50 GB
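
The weight figures follow directly from the parameter count: 141B parameters at 2 bytes each is roughly 282 GB at BF16, halved at FP8 and quartered at INT4. A rough total-VRAM estimate combines weights, KV-cache, and activations as sketched below; the 8K-token cache size is an illustrative assumption, not a figure from this page:

```python
def estimate_vram_gb(params_b=141, bytes_per_param=1, kv_bytes_per_token=229_376,
                     cached_tokens=8_192, activations_gb=2.5):
    """Back-of-the-envelope serving VRAM: weights + KV-cache + activations (FP8 defaults)."""
    weights_gb = params_b * bytes_per_param            # 141B params at 1 byte/param (FP8)
    kv_gb = kv_bytes_per_token * cached_tokens / 1e9   # tokens resident in the KV-cache
    return weights_gb + kv_gb + activations_gb

print(f"{estimate_vram_gb():.1f} GB")  # ~145.4 GB before framework overhead
```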

GPU Compatibility Matrix

Mixtral 8x22B is compatible with 20% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs):
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs):
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs):
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs):
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
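
One way to read the matrix is to compare the weight footprint at each precision against a GPU's VRAM while reserving headroom for KV-cache and activations. A minimal sketch; the GPU subset and the 10% headroom margin are illustrative assumptions:

```python
WEIGHTS_GB = {"bf16": 282.0, "fp8": 141.0, "int4": 70.5}   # from the memory table above
GPU_VRAM_GB = {"B100 SXM": 192, "H200 SXM": 141, "A100 80GB SXM": 80}

def fits(precision: str, gpu: str, num_gpus: int = 1, headroom: float = 0.10) -> bool:
    """True if the weights fit across num_gpus with `headroom` reserved for cache/activations."""
    usable_gb = GPU_VRAM_GB[gpu] * num_gpus * (1 - headroom)
    return WEIGHTS_GB[precision] <= usable_gb

print(fits("fp8", "B100 SXM"))        # True: 141 GB into ~173 GB usable
print(fits("fp8", "H200 SXM"))        # False on one GPU; needs 2, as in the scale option below
print(fits("int4", "A100 80GB SXM"))  # True, but with very little cache headroom
```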

GPU Recommendations

B100 SXM · optimal
FP8 · 1 GPU · tensorrt-llm · Score: 100/100
Throughput: 280.0 tok/s · Latency (ITL): 3.6 ms · Est. TTFT: 1 ms
Cost/Month: $4,271 · Cost/M Tokens: $5.80

GB200 NVL72 (per GPU) · optimal
FP8 · 1 GPU · tensorrt-llm · Score: 100/100
Throughput: 280.0 tok/s · Latency (ITL): 3.6 ms · Est. TTFT: 1 ms
Cost/Month: $6,169 · Cost/M Tokens: $8.38

GB300 NVL72 (per GPU) · optimal
FP8 · 1 GPU · tensorrt-llm · Score: 100/100
Throughput: 280.0 tok/s · Latency (ITL): 3.6 ms · Est. TTFT: 1 ms
Cost/Month: $7,118 · Cost/M Tokens: $9.67
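
The Cost/M Tokens figures above are consistent with dividing the monthly cost by the tokens produced at the quoted throughput. A quick sanity check, assuming the GPU runs fully utilized around the clock:

```python
def self_hosted_cost_per_m(monthly_usd: float, tok_per_s: float, days: float = 30) -> float:
    """Self-hosted $/M output tokens, assuming 24/7 full utilization."""
    tokens_per_month = tok_per_s * 3600 * 24 * days
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${self_hosted_cost_per_m(4271, 280.0):.2f}")  # ~$5.89/M for the B100 SXM estimate
```

This lands close to the $5.80/M shown above; the small gap presumably comes from a slightly different month length or utilization assumption in the page's calculator.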

Deployment Options

API Deployment: together · $1.20/M output tokens

Self-Hosted (Single GPU): B100 SXM · $4,271/mo · Min VRAM: 141 GB

Scale (Multi-GPU): H200 SXM ×2 · 280.0 tok/s · tensor parallel (TP) · $5,106/mo
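
For the multi-GPU option, tensor parallelism shards the weights across both H200s. A minimal sketch using vLLM's offline Python API; the Hugging Face model ID, FP8 setting, and sampling parameters are illustrative assumptions and depend on your vLLM build and hardware:

```python
from vllm import LLM, SamplingParams

# Illustrative: shard Mixtral 8x22B across 2 GPUs with tensor parallelism (TP=2).
llm = LLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # assumed model ID
    tensor_parallel_size=2,                          # e.g. H200 SXM x2
    quantization="fp8",                              # requires FP8-capable GPUs and build
)

outputs = llm.generate(
    ["Explain grouped-query attention in one paragraph."],
    SamplingParams(max_tokens=256, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```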

API Pricing Comparison

Provider · Input $/M · Output $/M · Badges
together · $1.20 · $1.20 · Cheapest
mistral · $2.00 · $6.00

Cost Analysis

Provider · Input $/M · Output $/M · ~Monthly Cost
together (Best Value) · $1.20 · $1.20 · $12
mistral · $2.00 · $6.00 · $40

Cost per 1,000 Requests

Short (500 tok): $0.84 via together
Medium (2K tok): $3.36 via together
Long (8K tok): $12.00 via together
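
Per-request API cost is just the token counts multiplied by the per-million rates. A minimal sketch; the 800-input / 2,000-output split per request is an assumption (the page does not state the mix), chosen because it happens to reproduce the medium-request figure above:

```python
def api_cost_usd(n_requests: int, in_tokens: int, out_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Total API cost in USD for n_requests with the given token counts per request."""
    per_request = (in_tokens * in_price_per_m + out_tokens * out_price_per_m) / 1e6
    return n_requests * per_request

# 1,000 medium requests on together's $1.20/M input and output pricing:
print(f"${api_cost_usd(1_000, 800, 2_000, 1.20, 1.20):.2f}")  # $3.36
```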

Performance Estimates

Throughput by GPU

B100 SXM: 280.0 tok/s
GB200 NVL72 (per GPU): 280.0 tok/s
GB300 NVL72 (per GPU): 280.0 tok/s

VRAM Breakdown (B100 SXM, FP8)

Weights: 141.0 GB · KV-Cache: 1.9 GB · Activations: 20.0 GB · Overhead: 7.1 GB
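
Summing these components shows why FP8 on a single 192 GB B100 SXM is rated a comfortable fit:

```python
components_gb = {"weights": 141.0, "kv_cache": 1.9, "activations": 20.0, "overhead": 7.1}
total_gb = sum(components_gb.values())
print(f"{total_gb:.1f} GB of 192 GB")  # 170.0 GB, leaving ~22 GB of headroom
```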

Precision Impact

BF16: 282.0 GB weights/GPU
FP8: 141.0 GB weights/GPU · ~280.0 tok/s
INT4: 70.5 GB weights/GPU

Quality Benchmarks

Average: 67th percentile across all models
MMLU: 77.8 · Below Average (49th percentile)
HumanEval: 46.0 · Below Average (40th percentile)
GSM8K: 78.4 · Below Average (45th percentile)
MT-Bench: 80.0 · Bottom 25% (0th percentile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · SGLang · TGI · TensorRT-LLM

Supported Precisions

BF16 (default) · FP8 · INT4

Similar Models

Mixtral 8x7B
46.7B params · MoE · Quality: 67 · from $0.50/M
Cheaper, smaller model

DBRX Base
132B params · MoE · Quality: 50 · from $2.25/M
Lower quality, more expensive

DBRX Instruct
132B params · MoE · Quality: 50 · from $1.20/M
Lower quality

Frequently Asked Questions

How much VRAM does Mixtral 8x22B need for inference?

Mixtral 8x22B requires approximately 282.0 GB of VRAM at BF16 precision, 141.0 GB at FP8, or 70.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (229,376 bytes per token) and activations (~2.50 GB).

What is the best GPU for Mixtral 8x22B?

The top recommended GPU for Mixtral 8x22B is the B100 SXM using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $4,271/month ($5.80/M tokens). Score: 100/100.

How much does Mixtral 8x22B inference cost?

Mixtral 8x22B API inference starts from $1.20/M input tokens and $1.20/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.