Mixtral 8x7B
Mistral AI · moe · 46.7B parameters · 32,768 context
Parameters
46.7B
Context Window
32K tokens
Architecture
MoE
Best GPU
B200 SXM
Cheapest API
$0.50/M
Quality Score
67/100
Intelligence Brief
Mixtral 8x7B is a 46.7B-parameter Mixture-of-Experts (8 experts, 2 active per token) model from Mistral AI, featuring Grouped Query Attention (GQA) with 32 layers and a hidden dimension of 4,096. With a 32,768-token context window, it supports structured output, code, math, and multilingual tasks. On standardized benchmarks it achieves MMLU 70.6, HumanEval 40.2, and GSM8K 74.4. The most cost-effective API deployment is via fireworks at $0.50/M output tokens. For self-hosted inference, the B200 SXM delivers the best throughput at an estimated $4261/month.
Architecture Details
Memory Requirements
BF16 Weights
93.4 GB
FP8 Weights
46.7 GB
INT4 Weights
23.4 GB
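As a quick sanity check, the weight footprints above follow directly from the 46.7B parameter count and the bytes stored per parameter; the sketch below reproduces them (decimal GB, weights only, with KV-cache and activations extra, as noted in the FAQ).

```python
# Weight memory = parameter count x bytes per parameter (decimal GB, weights only;
# KV-cache and activations are extra -- see the FAQ below).
PARAMS = 46.7e9

for precision, bytes_per_param in {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}.items():
    print(f"{precision}: {PARAMS * bytes_per_param / 1e9:.1f} GB")
# BF16: 93.4 GB, FP8: 46.7 GB, INT4: ~23.4 GB
```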
GPU Compatibility Matrix
Mixtral 8x7B is compatible with 52% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 1 GPU · tensorrt-llm (B200 SXM) | 100/100 | 1.1K tok/s | 1.0ms | 0ms | $4261 | $1.54 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | 1.0ms | 0ms | $4271 | $1.55 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | 1.0ms | 0ms | $6169 | $2.24 |
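The cost-per-million-token figures above are roughly monthly GPU cost divided by monthly token output; a minimal sketch, assuming 100% utilization and the rounded 1.1K tok/s throughput:

```python
# Sanity check on the cost-per-million-token figures above: monthly GPU cost
# divided by tokens generated per month at the listed throughput.
# Assumes 100% utilization and the rounded 1.1K tok/s number.
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59M seconds

def cost_per_million_tokens(monthly_cost_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month_m = tokens_per_sec * SECONDS_PER_MONTH / 1e6
    return monthly_cost_usd / tokens_per_month_m

print(f"${cost_per_million_tokens(4261, 1100):.2f}/M")  # ~= $1.49/M, close to the $1.54 listed
```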
Deployment Options
| Option | Configuration | Performance / Requirements | Cost |
|---|---|---|---|
| API Deployment | fireworks | — | $0.50/M output tokens |
| Single GPU | B200 SXM | Min VRAM: 47 GB | $4261/mo |
| Multi-GPU | A100 80GB SXM ×2 (tensor parallel) | 834.1 tok/s | $2259/mo |
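For a self-hosted starting point, the sketch below uses vLLM's offline Python API rather than the TensorRT-LLM stack listed in the recommendations (vLLM's API is more compact to show). The Hugging Face model id and the 2-GPU tensor-parallel setting are assumptions that mirror the multi-GPU option above; adjust them to your hardware.

```python
# Minimal self-hosted sketch with vLLM's offline Python API (shown instead of the
# TensorRT-LLM stack from the recommendations because it is more compact).
# The model id and 2-GPU tensor-parallel setting are assumptions matching the
# multi-GPU option above (2x A100 80GB); adjust to your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # assumed Hugging Face checkpoint
    tensor_parallel_size=2,                        # split the 93.4 GB of BF16 weights across 2 GPUs
    dtype="bfloat16",
)

params = SamplingParams(max_tokens=256, temperature=0.7)
out = llm.generate(["Explain Mixture-of-Experts routing in one paragraph."], params)
print(out[0].outputs[0].text)
```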
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| fireworks | $0.50 | $0.50 | Cheapest |
| together | $0.60 | $0.60 | — |
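Both providers expose OpenAI-compatible endpoints, so a standard client works; the base URL and model id below are assumptions to confirm against fireworks' current documentation.

```python
# Calling Mixtral 8x7B through fireworks' OpenAI-compatible endpoint (the cheapest
# provider above). Base URL and model id are assumptions -- confirm against the
# provider's documentation; together works the same way with its own base URL.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_FIREWORKS_API_KEY",
)

resp = client.chat.completions.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Summarize Mixtral 8x7B in two sentences."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```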
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| fireworks (Best Value) | $0.50 | $0.50 | $5 |
| together | $0.60 | $0.60 | $6 |
Cost per 1,000 Requests
| Request size | Cost per 1,000 requests | Provider |
|---|---|---|
| Short (500 tok) | $0.35 | fireworks |
| Medium (2K tok) | $1.40 | fireworks |
| Long (8K tok) | $5.00 | fireworks |
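These per-request figures follow from the per-token price; the sketch below shows the arithmetic under the assumption that the listed sizes count completion tokens only (the table's slightly higher numbers likely include prompt tokens as well).

```python
# Per-request cost = tokens per request x price per token, scaled to 1,000 requests.
# Assumes fireworks' $0.50/M rate and that the listed sizes are completion tokens only.
def cost_per_1k_requests(tokens_per_request: int, price_per_m_usd: float = 0.50) -> float:
    return tokens_per_request * 1_000 * price_per_m_usd / 1e6

for label, tokens in [("Short", 500), ("Medium", 2_000), ("Long", 8_000)]:
    print(f"{label} ({tokens} tok): ${cost_per_1k_requests(tokens):.2f} per 1,000 requests")
# 500 -> $0.25, 2K -> $1.00, 8K -> $4.00; the table's higher figures likely add prompt tokens.
```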
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, FP8)
Precision Impact
| Precision | Weights/GPU | Est. Throughput (B200 SXM) |
|---|---|---|
| BF16 | 93.4 GB | — |
| FP8 | 46.7 GB | ~1.1K tok/s |
| INT4 | 23.4 GB | — |
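One common way to reach the INT4-class footprint is 4-bit loading with bitsandbytes through Transformers; a minimal sketch, assuming the mistralai/Mixtral-8x7B-Instruct-v0.1 checkpoint and leaving headroom for KV-cache and activations on top of the ~23.4 GB of weights.

```python
# 4-bit (INT4-class) loading with bitsandbytes via Transformers, one way to reach
# the ~23.4 GB weight footprint above. Model id is an assumption; budget extra
# VRAM for KV-cache and activations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb,
    device_map="auto",  # spread layers across available GPUs if needed
)
```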
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Mixtral 8x7B
Self-Hosted Infrastructure
Similar Models
| Model | Params | Architecture | Quality | Price |
|---|---|---|---|---|
| Mixtral 8x7B Instruct | 46.7B | MoE | 69 | from $0.60/M |
| Amazon Nova Pro | 50B | dense | 50 | from $3.20/M |
| Gemini 2.0 Flash | 50B | MoE | 80 | from $0.40/M |
| Gemini 1.5 Flash | 50B | MoE | 75 | from $0.30/M |
| Llama 3.1 Nemotron 51B | 51B | dense | 78 | from $0.40/M |
Frequently Asked Questions
How much VRAM does Mixtral 8x7B need for inference?
Mixtral 8x7B requires approximately 93.4 GB of VRAM at BF16 precision, 46.7 GB at FP8, or 23.4 GB at INT4 quantization. Additional VRAM is needed for KV-cache (131072 bytes per token) and activations (~1.50 GB).
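The 131,072 bytes/token figure follows from Mixtral's attention layout (32 layers, 8 KV heads under GQA, head dimension 128, 16-bit cache); a worked check:

```python
# KV-cache bytes per token = 2 (K and V) x layers x KV heads x head dim x bytes per value.
LAYERS, KV_HEADS, HEAD_DIM, BYTES_PER_VALUE = 32, 8, 128, 2  # GQA, bf16/fp16 cache

kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
print(kv_per_token)                                  # 131072 bytes (~128 KiB)
print(f"{kv_per_token * 32_768 / 1e9:.1f} GB")       # ~4.3 GB for a full 32K-token context
```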
What is the best GPU for Mixtral 8x7B?
The top recommended GPU for Mixtral 8x7B is the B200 SXM using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $4261/month ($1.54/M tokens). Score: 100/100.
How much does Mixtral 8x7B inference cost?
Mixtral 8x7B API inference starts from $0.50/M input tokens and $0.50/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
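A minimal sketch of that comparison, using the numbers from this page (fireworks at $0.50/M versus a single B200 SXM at $4261/month and ~1.1K tok/s, assuming full utilization):

```python
# A minimal sketch of the API-vs-self-hosted trade-off using this page's numbers.
# Self-hosting is a fixed monthly cost with a throughput ceiling; the API is
# pay-per-token. Assumes full utilization of each GPU.
import math

SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_cost(tokens_millions: float) -> dict:
    api = tokens_millions * 0.50                           # fireworks: $0.50/M tokens
    gpu_capacity_m = 1100 * SECONDS_PER_MONTH / 1e6        # ~2,851M tokens/month per B200 SXM
    gpus_needed = max(1, math.ceil(tokens_millions / gpu_capacity_m))
    self_hosted = gpus_needed * 4261                       # $4,261/month per B200 SXM
    return {"api_usd": round(api), "self_hosted_usd": self_hosted}

print(monthly_cost(100))    # low volume: API is far cheaper
print(monthly_cost(2500))   # near one GPU's capacity: API still wins at $0.50/M ($1,250 vs $4,261)
```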