Llama 4 Maverick
Meta · moe · 400B parameters · 1,048,576 context
Parameters
400B
Context Window
1024K tokens
Architecture
MoE
Best GPU
B200 SXM
Cheapest API
$1.80/M
Quality Score
84/100
Intelligence Brief
Llama 4 Maverick is a 400B-parameter Mixture-of-Experts model (128 experts, 1 active per token) from Meta, featuring Grouped Query Attention (GQA) with 96 layers and a hidden dimension of 5,120. With a 1,048,576-token context window, it supports tool use, vision, structured output, code, math, and multilingual tasks. On standardized benchmarks it achieves MMLU 89, HumanEval 63, and GSM8K 95. The most cost-effective API deployment is via together at $1.80/M output tokens. For self-hosted inference, a 4x B200 SXM configuration delivers optimal throughput at $17044/month.
Architecture Details
Memory Requirements
BF16 Weights
800.0 GB
FP8 Weights
400.0 GB
INT4 Weights
200.0 GB
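The weight figures above follow directly from parameter count times bytes per parameter. A minimal sketch, using the 400B parameter count from the spec above and decimal gigabytes as the table does:

```python
# Weight memory = parameter count x bytes per parameter.
# 400B parameters is taken from the spec above.
PARAMS = 400e9

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9  # decimal GB, matching the table above
    print(f"{precision}: {gb:.1f} GB")  # bf16 -> 800.0, fp8 -> 400.0, int4 -> 200.0
```

Note these are weights only; KV-cache and activation memory (covered in the FAQ below) come on top.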
Fits on (single GPU) — most practical first
Fits on (multi-GPU with Tensor Parallelism)
Multi-GPU configurations use Tensor Parallelism (TP) to split model layers across GPUs. Requires NVLink or NVSwitch interconnect for optimal performance.
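As a rough sketch of how TP changes the per-GPU memory requirement: an even split of the weights across the TP group is assumed here, ignoring the small amount of replicated state (e.g. embeddings) that real deployments carry on every GPU.

```python
# Per-GPU weight memory under N-way tensor parallelism.
# Even split assumed; real deployments replicate some small tensors,
# so treat this as a lower bound.
def weights_per_gpu_gb(total_weights_gb: float, tp_degree: int) -> float:
    return total_weights_gb / tp_degree

# FP8 weights (400 GB, from the memory table above) split across 4 GPUs:
print(weights_per_gpu_gb(400.0, 4))  # 100.0 GB per GPU
```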
GPU Compatibility Matrix
Llama 4 Maverick fits on roughly 2% of the evaluated GPU configurations (41 GPUs at 3 precision levels each, 123 configurations in total).
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 4 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $17044 | $23.16 |
| FP8 · 4 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $17082 | $23.21 |
| FP8 · 4 GPUs · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $10211 | $13.88 |
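The $/M-token figures above can be reproduced from monthly cost and sustained throughput. A sketch assuming roughly 730 serving hours per month at full utilization (the utilization assumption is ours, not stated on this page):

```python
# Cost per million tokens = monthly cost / millions of tokens generated per month.
# Assumes ~730 hours/month of fully saturated serving (hypothetical utilization).
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float, hours: float = 730.0) -> float:
    tokens_per_month = tok_per_s * 3600 * hours
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_m_tokens(17044, 280.0), 2))  # 23.16, matching the first row
print(round(cost_per_m_tokens(10211, 280.0), 2))  # 13.88, matching the third row
```

Real utilization is rarely 100%, so self-hosted $/M-token figures like these are best-case numbers.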
Deployment Options
API Deployment
together
$1.80/M
output tokens
Single GPU
Requires a multi-GPU setup (400 GB VRAM needed at FP8)
Multi-GPU
B200 SXM x4
280.0 tok/s
TP · $17044/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $1.20 | $1.80 | Cheapest |
| fireworks | $1.50 | $2.00 | |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $1.20 | $1.80 | $15 |
| fireworks | $1.50 | $2.00 | $18 |
Cost per 1,000 Requests
| Request size | Cost / 1,000 requests | Provider |
|---|---|---|
| Short (500 tok) | $0.96 | together |
| Medium (2K tok) | $3.84 | together |
| Long (8K tok) | $13.20 | together |
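These per-1,000-request costs follow from the together pricing above once an input/output token split is fixed. The splits used below (e.g. 500 input + 200 output for a short request) are assumptions chosen to reproduce the figures shown; the page itself does not publish them.

```python
# Cost for 1,000 requests = 1000 * (in_tok * in_price + out_tok * out_price) / 1e6.
# Prices are together's rates from the table above ($/M tokens).
# The input/output splits below are assumed, not published by the page.
IN_PRICE, OUT_PRICE = 1.20, 1.80

def cost_per_1k_requests(in_tok: int, out_tok: int) -> float:
    per_request = (in_tok * IN_PRICE + out_tok * OUT_PRICE) / 1e6
    return 1000 * per_request

print(round(cost_per_1k_requests(500, 200), 2))    # 0.96  (short)
print(round(cost_per_1k_requests(2000, 800), 2))   # 3.84  (medium)
print(round(cost_per_1k_requests(8000, 2000), 2))  # 13.2  (long)
```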
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, FP8)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| bf16 | 200.0 GB | |
| fp8 | 100.0 GB | ~280.0 tok/s |
| int4 | 50.0 GB | |
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Llama 4 Maverick
Self-Hosted Infrastructure
Similar Models
Jamba 1.5 Large
398B params · hybrid
Quality: 50
from $8.00/M
Llama 3.1 405B
405B params · dense
Quality: 81
from $3.00/M
Snowflake Arctic 128x3B
395B params · moe
Quality: 50
MiniMax-Text-01
456B params · moe
Quality: 50
from $5.00/M
MiniMax M2.7
456B params · moe
Quality: 82
from $2.80/M
Frequently Asked Questions
How much VRAM does Llama 4 Maverick need for inference?
Llama 4 Maverick requires approximately 800.0 GB of VRAM at BF16 precision, 400.0 GB at FP8, or 200.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (393,216 bytes, roughly 384 KB, per token) and activations (~3.00 GB).
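The components in this answer combine as weights + KV-cache + activations. A sketch using the per-token KV-cache size and activation estimate above; the 131,072-token context length is just an example input, not a figure from this page:

```python
# Total serving VRAM ~= weights + KV-cache + activations.
# 393,216 bytes/token and ~3.00 GB activations come from the answer above;
# the 131,072-token context is an example value.
KV_BYTES_PER_TOKEN = 393_216
ACTIVATIONS_GB = 3.00

def total_vram_gb(weights_gb: float, context_tokens: int) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens / 1e9  # decimal GB
    return weights_gb + kv_gb + ACTIVATIONS_GB

# FP8 weights (400 GB) plus a 131,072-token KV-cache:
print(round(total_vram_gb(400.0, 131_072), 1))  # ~454.5 GB
```

At the full 1,048,576-token context the KV-cache alone grows to roughly 412 GB, which is why long-context serving dominates the memory budget.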
What is the best GPU for Llama 4 Maverick?
The top recommended GPU for Llama 4 Maverick is the B200 SXM (x4) using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $17044/month ($23.16/M tokens). Score: 100/100.
How much does Llama 4 Maverick inference cost?
Llama 4 Maverick API inference starts from $1.20/M input tokens and $1.80/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.