Mixtral 8x7B Instruct
Mistral AI · MoE · 46.7B parameters · 32,768-token context
Quality: 69.0
Architecture Details

| Property | Value |
|---|---|
| Type | MoE |
| Total Parameters | 46.7B |
| Active Parameters | 12.9B |
| Layers | 32 |
| Hidden Dimension | 4,096 |
| Attention Heads | 32 |
| KV Heads | 8 |
| Head Dimension | 128 |
| Vocab Size | 32,000 |
| Total Experts | 8 |
| Active Experts | 2 |
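The total/active split follows from the MoE layout: attention and embeddings run for every token, while only 2 of the 8 expert FFNs are routed per token. A minimal sketch of that arithmetic, assuming Mixtral's published FFN intermediate size of 14,336 (not listed in the table above):

```python
# Sketch: reconstruct total vs. active parameter counts from the table.
# ffn_dim = 14336 is an assumption (Mixtral's public config value), not
# part of the spec above. Router and norm weights are omitted (negligible).

layers = 32
hidden = 4096
heads, kv_heads, head_dim = 32, 8, 128
vocab = 32_000
experts_total, experts_active = 8, 2
ffn_dim = 14_336  # assumption

# Attention: Q and O projections plus the smaller grouped K/V projections.
attn = layers * (2 * hidden * heads * head_dim + 2 * hidden * kv_heads * head_dim)

# Each expert is a SwiGLU FFN with three hidden x ffn_dim projections.
per_expert = layers * 3 * hidden * ffn_dim

# Input embedding plus untied LM head.
embed = 2 * vocab * hidden

total = attn + embed + experts_total * per_expert
active = attn + embed + experts_active * per_expert
print(f"total  ≈ {total / 1e9:.1f}B")   # ≈ 46.7B
print(f"active ≈ {active / 1e9:.1f}B")  # ≈ 12.9B
```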
Memory Requirements

| Precision | Weights |
|---|---|
| BF16 | 93.4 GB |
| FP8 | 46.7 GB |
| INT4 | 23.4 GB |

KV-Cache per Token: 131,072 bytes (128 KiB)
Activation Estimate: 1.50 GB
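The weight and KV-cache figures follow directly from the parameter count and attention shapes above; a quick sanity check, with no inputs beyond the tables:

```python
# Sketch: derive the memory figures above from the architecture table.

params = 46.7e9
bytes_per_param = {"BF16": 2, "FP8": 1, "INT4": 0.5}
for prec, b in bytes_per_param.items():
    # Matches the 93.4 / 46.7 / 23.4 GB figures above (within rounding).
    print(f"{prec}: {params * b / 1e9:.1f} GB")

# KV cache per token: K and V, per layer, per KV head, at BF16 (2 bytes).
layers, kv_heads, head_dim, dtype_bytes = 32, 8, 128, 2
kv_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes
print(kv_per_token)  # 131072 bytes = 128 KiB
```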
Fits on (single-node)

- B200 SXM · BF16
- B100 SXM · BF16
- GB200 NVL72 (per GPU) · BF16
- GB300 NVL72 (per GPU) · BF16
- H200 SXM · BF16
- H100 SXM · FP8
- H100 PCIe · FP8
- H100 NVL · FP8
GPU Recommendations

| GPU | Rating | Config | Score | Throughput | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| B200 SXM | optimal | FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | $4,261 | $1.54 |
| B100 SXM | optimal | FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | $4,271 | $1.55 |
| GB200 NVL72 (per GPU) | optimal | FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | $6,169 | $2.24 |
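Cost/M Tokens appears to be monthly cost divided by monthly token output; a rough check under that assumption (sustained full utilization, 30-day month — neither is stated on this page):

```python
# Sketch: relate Cost/Month to Cost/M Tokens, assuming 100% utilization
# over a 30-day month. Both are assumptions, not stated in the table.

def cost_per_million(cost_per_month: float, tokens_per_sec: float) -> float:
    seconds_per_month = 30 * 24 * 3600
    million_tokens = tokens_per_sec * seconds_per_month / 1e6
    return cost_per_month / million_tokens

# B200 SXM row: $4,261/month at ~1.1K tok/s.
# Prints ≈ $1.49/M vs. the listed $1.54/M; the gap suggests the displayed
# 1.1K tok/s is rounded (the listed price implies ~1.07K tok/s).
print(f"${cost_per_million(4261, 1100):.2f}/M tokens")
```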
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $0.60 | $0.60 | Cheapest |
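For volume planning, a back-of-envelope monthly bill at the same token volume, taking the self-hosted $/M figure from the GPU table above and noting that input and output are priced identically here, so the traffic split doesn't matter:

```python
# Sketch: monthly API spend vs. self-hosted spend at the same volume.
# tokens_m ≈ one month of the table's 1.1K tok/s, in millions of tokens.

api_price = 0.60   # $/M tokens, together row above
self_host = 1.54   # $/M tokens, B200 SXM row above (full utilization)
tokens_m = 2851    # ≈ 1100 tok/s * 30 days

print(f"API:       ${api_price * tokens_m:,.0f}")  # ≈ $1,711
print(f"Self-host: ${self_host * tokens_m:,.0f}")  # ≈ $4,391 (~the $4,261 card, within rounding)
```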
Quality Benchmarks

| Benchmark | Score |
|---|---|
| MMLU | 72.6 |
| HumanEval | 42.0 |
| GSM8K | 76.0 |
| MT-Bench | 78.0 |
Capabilities

Features: ✓ Tool Use · ✗ Vision · ✓ Code · ✓ Math · ✗ Reasoning · ✓ Multilingual · ✓ Structured Output

Supported Frameworks: vllm · sglang · tgi · tensorrt-llm · ollama

Supported Precisions: BF16 (default) · FP8 · INT4
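For the frameworks listed, a minimal vLLM launch in Python might look like the following sketch; the Hugging Face model id and FP8 support on your GPU are assumptions, not part of this page:

```python
# Sketch: serve Mixtral 8x7B Instruct with vLLM's offline API.
# Assumes the Hugging Face id "mistralai/Mixtral-8x7B-Instruct-v0.1" and a
# GPU with enough memory for the chosen precision (see the tables above).
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization="fp8",   # or omit for the BF16 default
    max_model_len=32768,  # the full context window from the spec
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Explain mixture-of-experts routing in two sentences."], params
)
print(outputs[0].outputs[0].text)
```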