Mixtral 8x22B
Mistral AI · moe · 141B parameters · 65,536 context
Parameters
141B
Context Window
64K tokens
Architecture
MoE
Best GPU
B100 SXM
Cheapest API
$1.20/M
Quality Score
65/100
Intelligence Brief
Mixtral 8x22B is a 141B-parameter Mixture-of-Experts model (8 experts, 2 active per token) from Mistral AI, using Grouped Query Attention (GQA) across 56 layers with a 6,144 hidden dimension. With a 65,536-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 77.8, HumanEval 46, and GSM8K 78.4. The most cost-effective API deployment is via together at $1.20/M output tokens. For self-hosted inference, the B100 SXM delivers optimal throughput at an estimated $4271/month.
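For a quick self-hosted experiment, the sketch below launches the model with vLLM for offline inference. The Hugging Face model ID, the tensor-parallel degree (matching the 2x H200 SXM option under Deployment Options), and vLLM's FP8 quantization flag are assumptions to adapt to your setup; the GPU recommendations below assume tensorrt-llm rather than vLLM.

```python
# Minimal vLLM offline-inference sketch for Mixtral 8x22B.
# Assumptions: HF model ID, tensor_parallel_size=2 (e.g. 2x H200 SXM), vLLM's fp8 option.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # assumed model identifier
    tensor_parallel_size=2,   # shard the ~141 GB of FP8 weights across two GPUs
    quantization="fp8",       # BF16 would need ~282 GB of weight memory instead
    max_model_len=65536,      # full 64K context window
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain expert routing in a Mixture-of-Experts model."], params)
print(outputs[0].outputs[0].text)
```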
Architecture Details
Memory Requirements
BF16 Weights
282.0 GB
FP8 Weights
141.0 GB
INT4 Weights
70.5 GB
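These figures are straight arithmetic on the 141B parameter count (decimal GB, bytes per parameter by precision). The KV-cache line reproduces the 229,376 bytes/token quoted in the FAQ; the 8 KV heads and 128-dim heads used there are assumptions consistent with the 56-layer GQA configuration, not values stated on this page.

```python
# Weight memory by precision, from the 141B parameter count.
PARAMS = 141e9
for precision, nbytes in {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}.items():
    print(f"{precision}: {PARAMS * nbytes / 1e9:.1f} GB")  # 282.0 / 141.0 / 70.5

# FP16 KV-cache per token; matches the 229,376 bytes/token figure in the FAQ.
layers, kv_heads, head_dim, bytes_per_value = 56, 8, 128, 2  # kv_heads/head_dim assumed
kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # K and V
print(kv_per_token)  # 229376
```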
GPU Compatibility Matrix
Mixtral 8x22B is compatible with 20% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $4271 | $5.80 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $6169 | $8.38 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $7118 | $9.67 |
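The cost-per-million figures in these recommendations appear to follow from the monthly GPU cost divided by sustained output, assuming full utilization over a 730-hour month; the sketch below reproduces all three under that assumption.

```python
# Derive $/M output tokens from monthly GPU cost and sustained throughput.
# Assumption: 100% utilization over a 730-hour month.
HOURS_PER_MONTH = 730
THROUGHPUT_TOK_S = 280.0
tokens_per_month = THROUGHPUT_TOK_S * 3600 * HOURS_PER_MONTH  # ~735.8M tokens

for monthly_cost in (4271, 6169, 7118):
    print(f"${monthly_cost}/mo -> ${monthly_cost / (tokens_per_month / 1e6):.2f}/M")
    # -> $5.80/M, $8.38/M, $9.67/M
```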
Deployment Options
API Deployment
together
$1.20/M
output tokens
Single GPU
B100 SXM
$4271/mo
Min VRAM: 141 GB
Multi-GPU
H200 SXM x2
280.0 tok/s
TP (tensor parallel) · $5106/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $1.20 | $1.20 | Cheapest |
| mistral | $2.00 | $6.00 | |
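together exposes an OpenAI-compatible endpoint, so the cheapest hosted option can be exercised with the standard openai client. The model identifier string and the TOGETHER_API_KEY variable below are assumptions to check against together's model catalog.

```python
# Minimal chat completion against together's OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],  # assumed env var name
)
resp = client.chat.completions.create(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # assumed model ID on together
    messages=[{"role": "user", "content": "Summarize Grouped Query Attention in one sentence."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```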
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $1.20 | $1.20 | $12 |
| mistral | $2.00 | $6.00 | $40 |
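The monthly estimates above are consistent with a volume of about 10M tokens per month split evenly between input and output; that volume and split are inferred here, not stated by either provider.

```python
# Reproduce the ~monthly-cost column under an assumed 10M tokens/month, 50/50 in/out.
providers = {"together": (1.20, 1.20), "mistral": (2.00, 6.00)}  # ($/M in, $/M out)
input_m, output_m = 5.0, 5.0  # millions of tokens per month (assumed)

for name, (p_in, p_out) in providers.items():
    print(f"{name}: ${input_m * p_in + output_m * p_out:.0f}/month")  # $12, $40
```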
Cost per 1,000 Requests
Short (500 tok)
$0.84
via together
Medium (2K tok)
$3.36
via together
Long (8K tok)
$12.00
via together
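Since together charges the same rate for input and output, each figure above maps to a total (prompt plus completion) token budget per request of roughly 700, 2,800, and 10,000 tokens; those budgets are backed out of the published prices rather than stated.

```python
# Back out the per-1,000-request costs from together's flat $1.20/M rate.
PRICE_PER_TOKEN = 1.20 / 1e6  # same rate for input and output tokens

for label, total_tokens in [("short", 700), ("medium", 2_800), ("long", 10_000)]:
    print(f"{label}: ${1_000 * total_tokens * PRICE_PER_TOKEN:.2f}")  # $0.84, $3.36, $12.00
```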
Performance Estimates
Throughput by GPU
VRAM Breakdown (B100 SXM, FP8)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| bf16 | 282.0 GB | |
| fp8 | 141.0 GB | ~280.0 tok/s |
| int4 | 70.5 GB | |
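One practical use of these per-precision sizes is a headroom check: whatever VRAM remains after weights and activations bounds how many KV-cache tokens (batch size times sequence length) a single GPU can hold. The sketch assumes a 192 GB card for the B100 SXM class (a figure not quoted on this page) and reuses the KV-cache and activation estimates from the FAQ below.

```python
# Rough KV-cache headroom for a single-GPU FP8 deployment.
VRAM_GB = 192.0            # assumed usable memory for a B100 SXM class GPU
WEIGHTS_GB = 141.0         # FP8 weights (table above)
ACTIVATIONS_GB = 2.5       # estimate from the FAQ
KV_BYTES_PER_TOKEN = 229_376

free_gb = VRAM_GB - WEIGHTS_GB - ACTIVATIONS_GB
print(f"~{int(free_gb * 1e9 / KV_BYTES_PER_TOKEN):,} cacheable tokens")  # roughly 211k
```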
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Mixtral 8x22B
Self-Hosted Infrastructure
Similar Models
Mixtral 8x7B
46.7B params · moe
Quality: 67
from $0.50/M
Mixtral 8x7B Instruct
46.7B params · moe
Quality: 69
from $0.60/M
DBRX Base
132B params · moe
Quality: 50
from $2.25/M
DBRX Instruct
132B params · moe
Quality: 50
from $1.20/M
Mistral Large 2411
123B params · dense
Quality: 75
from $6.00/M
Frequently Asked Questions
How much VRAM does Mixtral 8x22B need for inference?
Mixtral 8x22B requires approximately 282.0 GB of VRAM at BF16 precision, 141.0 GB at FP8, or 70.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (229,376 bytes per token) and activations (~2.50 GB).
What is the best GPU for Mixtral 8x22B?
The top recommended GPU for Mixtral 8x22B is the B100 SXM using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $4271/month ($5.80/M tokens). Score: 100/100.
How much does Mixtral 8x22B inference cost?
Mixtral 8x22B API inference starts from $1.20/M input tokens and $1.20/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
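A rough way to decide between the two routes is the break-even volume: the monthly token count at which a dedicated GPU undercuts per-token pricing. Using the figures above, with together's flat $1.20/M as the comparison rate, a minimal sketch:

```python
# Break-even between a self-hosted B100 SXM and together's per-token pricing.
SELF_HOSTED_MONTHLY = 4271                  # $/month, from the recommendation above
API_PRICE_PER_M = 1.20                      # $/M tokens via together
GPU_CEILING_M = 280.0 * 3600 * 730 / 1e6    # ~736M tokens/month at full utilization

print(f"break-even: ~{SELF_HOSTED_MONTHLY / API_PRICE_PER_M:,.0f}M tokens/month")  # ~3,559M
print(f"single-GPU ceiling: ~{GPU_CEILING_M:,.0f}M tokens/month")
# One B100 tops out well below break-even, so on raw $/token the hosted API is cheaper;
# self-hosting is usually justified by latency or data-control requirements instead.
```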