
Mixtral 8x7B

Mistral AI · MoE · 46.7B parameters · 32,768-token context

Quality
67.0

Parameters

46.7B

Context Window

32K tokens

Architecture

MoE

Best GPU

B200 SXM

Cheapest API

$0.50/M

Quality Score

67/100

Intelligence Brief

Mixtral 8x7B is a 46.7B-parameter Mixture-of-Experts model (8 experts, 2 active per token) from Mistral AI, featuring Grouped Query Attention (GQA) across 32 layers with a 4,096-dimensional hidden state. With a 32,768-token context window, it supports structured output, code, math, and multilingual use. On standardized benchmarks it achieves MMLU 70.6, HumanEval 40.2, and GSM8K 74.4. The most cost-effective API deployment is via fireworks at $0.50/M output tokens. For self-hosted inference, the B200 SXM delivers optimal throughput at $4261/month.

Architecture Details

Type: MoE
Total Parameters: 46.7B
Active Parameters: 12.9B
Layers: 32
Hidden Dimension: 4,096
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
Vocab Size: 32,000
Total Experts: 8
Active Experts: 2
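
The split between shared and per-expert weights is not listed above; the back-of-envelope sketch below derives it from the total and active parameter counts, so the per-expert and shared figures are approximations, not published numbers.

```python
# Back-of-envelope split of Mixtral 8x7B's weights into shared and per-expert
# parameters, derived from the total/active counts in the table above.
TOTAL_PARAMS = 46.7e9     # all 8 experts + attention/embeddings
ACTIVE_PARAMS = 12.9e9    # 2 experts + attention/embeddings per token
NUM_EXPERTS, ACTIVE_EXPERTS = 8, 2

# total  = shared + NUM_EXPERTS    * per_expert
# active = shared + ACTIVE_EXPERTS * per_expert
per_expert = (TOTAL_PARAMS - ACTIVE_PARAMS) / (NUM_EXPERTS - ACTIVE_EXPERTS)
shared = ACTIVE_PARAMS - ACTIVE_EXPERTS * per_expert

print(f"per-expert FFN weights: {per_expert / 1e9:.2f}B")  # ~5.63B
print(f"shared weights:         {shared / 1e9:.2f}B")      # ~1.63B
```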

Memory Requirements

BF16 Weights

93.4 GB

FP8 Weights

46.7 GB

INT4 Weights

23.4 GB

KV-Cache per Token: 131,072 bytes
Activation Estimate: 1.50 GB
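
The figures above follow directly from the Architecture Details table. A minimal sketch, assuming the KV-cache is held in 16-bit precision (the page does not state the cache dtype):

```python
# Reproduce the memory figures above from the Architecture Details table.
PARAMS = 46.7e9
LAYERS, KV_HEADS, HEAD_DIM = 32, 8, 128

bytes_per_param = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}
for precision, nbytes in bytes_per_param.items():
    print(f"{precision} weights: {PARAMS * nbytes / 1e9:.2f} GB")
# BF16 93.40 GB, FP8 46.70 GB, INT4 23.35 GB (~23.4 GB in the table)

# K and V per layer hold KV_HEADS * HEAD_DIM values each, 2 bytes at 16-bit.
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV-cache per token: {kv_bytes_per_token} bytes")  # 131072
```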

GPU Compatibility Matrix

Mixtral 8x7B fits on 52% of the evaluated GPU configurations (41 GPUs at 3 precision levels each).

Precision levels evaluated: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
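
A simplified version of the fit check behind this matrix: weights at the chosen precision, plus a full 32K-token KV-cache, the activation estimate, and some runtime overhead must fit in the GPU's VRAM. The 2 GB overhead figure and the single-sequence assumption are illustrative, not the site's exact methodology; the legend's tiers presumably grade the remaining headroom.

```python
# Simplified fit check: total required memory vs. GPU VRAM, per precision.
KV_BYTES_PER_TOKEN = 131_072
ACTIVATIONS_GB = 1.5
OVERHEAD_GB = 2.0                       # assumption, not from the page

WEIGHTS_GB = {"BF16": 93.4, "FP8": 46.7, "INT4": 23.4}
GPU_VRAM_GB = {"B100 SXM": 192, "H200 SXM": 141, "A100 80GB SXM": 80, "L40S": 48}

def fits(precision: str, gpu: str, context_tokens: int = 32_768) -> bool:
    required = (WEIGHTS_GB[precision]
                + context_tokens * KV_BYTES_PER_TOKEN / 1e9
                + ACTIVATIONS_GB
                + OVERHEAD_GB)
    return required <= GPU_VRAM_GB[gpu]

for gpu in GPU_VRAM_GB:
    print(gpu, {p: fits(p, gpu) for p in WEIGHTS_GB})
```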

GPU Recommendations

B200 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

1.1K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$4261

Cost/M Tokens

$1.54

B100 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

1.1K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$4271

Cost/M Tokens

$1.55

GB200 NVL72 (per GPU) · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

1.1K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$6169

Cost/M Tokens

$2.24

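The cost-per-million-token figures in these cards follow from the monthly GPU price divided by the tokens the GPU can produce in a month at full utilization. Using the rounded 1.1K tok/s shown above lands slightly under the quoted $1.54/M, so the site presumably uses an unrounded throughput figure; the sketch below is illustrative only.

```python
# $/M tokens from a monthly GPU price and a sustained decode rate
# (100% utilization assumed).
MONTHLY_COST_USD = 4261          # B200 SXM
THROUGHPUT_TOK_S = 1100          # rounded "1.1K tok/s"
SECONDS_PER_MONTH = 30 * 24 * 3600

tokens_per_month_m = THROUGHPUT_TOK_S * SECONDS_PER_MONTH / 1e6  # ~2,851M
print(f"${MONTHLY_COST_USD / tokens_per_month_m:.2f} per M tokens")  # ~$1.49
```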

Deployment Options

API

API Deployment

fireworks

$0.50/M

output tokens

Self-Hosted

Single GPU

B200 SXM

$4261/mo

Min VRAM: 47 GB

Scale

Multi-GPU

A100 80GB SXM x2

834.1 tok/s

TP · $2259/mo
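
As a sketch of the multi-GPU "Scale" option, here is how a two-way tensor-parallel deployment might be launched with vLLM, one of the supported frameworks listed further down; two A100 80GB GPUs give enough combined VRAM for the BF16 weights. The Hugging Face model ID, prompt, and sampling settings are illustrative assumptions.

```python
# Two-way tensor-parallel serving of Mixtral 8x7B with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # assumed HF checkpoint
    tensor_parallel_size=2,                        # shard across 2 GPUs
    dtype="bfloat16",
)
outputs = llm.generate(
    ["Summarise mixture-of-experts routing in two sentences."],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```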

API Pricing Comparison

Provider | Input $/M | Output $/M | Badges
fireworks | $0.50 | $0.50 | Cheapest
together | $0.60 | $0.60 |

Cost Analysis

Provider | Input $/M | Output $/M | ~Monthly Cost
fireworks (Best Value) | $0.50 | $0.50 | $5
together | $0.60 | $0.60 | $6

Cost per 1,000 Requests

Short (500 tok)

$0.35

via fireworks

Medium (2K tok)

$1.40

via fireworks

Long (8K tok)

$5.00

via fireworks
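
These costs are consistent with billing every token, input and output alike, at fireworks' flat $0.50/M rate. The exact input/output split per request is not stated, so the per-request totals in the sketch below are back-solved from the quoted costs and are approximations.

```python
# Cost per 1,000 requests at a flat per-token price (input == output rate).
PRICE_PER_TOKEN = 0.50 / 1e6          # $/token

def cost_per_1000_requests(total_tokens_per_request: int) -> float:
    return 1000 * total_tokens_per_request * PRICE_PER_TOKEN

# Per-request token totals back-solved from the quoted figures (assumed).
for label, total_tokens in [("short", 700), ("medium", 2_800), ("long", 10_000)]:
    print(f"{label:6s}: ${cost_per_1000_requests(total_tokens):.2f}")
# short: $0.35, medium: $1.40, long: $5.00
```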

Performance Estimates

Throughput by GPU

B200 SXM
1.1K tok/s
B100 SXM
1.1K tok/s
GB200 NVL72 (per GPU)
1.1K tok/s

VRAM Breakdown (B200 SXM, FP8)

Weights 46.7 GB · KV-Cache 1.1 GB · Activations 12.0 GB · Overhead 2.3 GB
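
Summing the breakdown gives the full serving budget. The 12.0 GB activation figure here is larger than the 1.50 GB estimate in the Memory Requirements section, presumably because it reflects batched serving rather than a single request.

```python
# Sanity-check the FP8 serving budget above for a single B200 SXM.
budget_gb = {"weights": 46.7, "kv_cache": 1.1, "activations": 12.0, "overhead": 2.3}
print(f"total: {sum(budget_gb.values()):.1f} GB")  # 62.1 GB
```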

Precision Impact

bf16

93.4 GB

weights/GPU

fp8

46.7 GB

weights/GPU

~1.1K tok/s

int4

23.4 GB

weights/GPU

Quality Benchmarks

Average
70th percentile across all models
MMLU
70.6
Below Average (33rd pctile)
HumanEval
40.2
Below Average (29th pctile)
GSM8K
74.4
Below Average (35th pctile)
MT-Bench
76.0
Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm · ollama

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

Similar specs

Amazon Nova Pro

50B params · dense

Quality: 50

from $3.20/M

Larger context, Lower quality, More expensive

Gemini 2.0 Flash

50B params · moe

Quality: 80

from $0.40/M

Larger context, Higher quality

Gemini 1.5 Flash

50B params · moe

Quality: 75

from $0.30/M

Larger context, Higher quality, Cheaper

Frequently Asked Questions

How much VRAM does Mixtral 8x7B need for inference?

Mixtral 8x7B requires approximately 93.4 GB of VRAM at BF16 precision, 46.7 GB at FP8, or 23.4 GB at INT4 quantization. Additional VRAM is needed for KV-cache (131072 bytes per token) and activations (~1.50 GB).

What is the best GPU for Mixtral 8x7B?

The top recommended GPU for Mixtral 8x7B is the B200 SXM using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $4261/month ($1.54/M tokens). Score: 100/100.

How much does Mixtral 8x7B inference cost?

Mixtral 8x7B API inference starts from $0.50/M input tokens and $0.50/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
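
As a rough illustration of the trade-off a fuller ROI analysis would cover, the sketch below compares the flat API rate against the fixed monthly cost of the recommended single-GPU deployment, using only figures quoted on this page; at these list prices the API stays cheaper per token even at full GPU utilization (~$0.50/M vs ~$1.49-1.54/M).

```python
# API vs self-hosted monthly cost at several token volumes (figures from this page).
API_PRICE_PER_M = 0.50                                # fireworks, $/M tokens
SELF_HOST_MONTHLY = 4261                              # B200 SXM, $/month
SELF_HOST_CAPACITY_M = 1100 * 30 * 24 * 3600 / 1e6    # ~2,851M tokens/month max

for tokens_m in (100, 1_000, SELF_HOST_CAPACITY_M):
    api_cost = tokens_m * API_PRICE_PER_M
    print(f"{tokens_m:7.0f}M tokens/month: API ${api_cost:,.0f} "
          f"vs self-host ${SELF_HOST_MONTHLY:,}")
```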