
Llama 4 Behemoth

Meta · MoE · 2000B parameters · 1,048,576-token context

Quality
93.0

Parameters

2.0T

Context Window

1024K tokens

Architecture

MoE

Best GPU

B200 SXM

Cheapest API

$16.00/M

Quality Score

93/100

Intelligence Brief

Llama 4 Behemoth is a 2000B-parameter Mixture-of-Experts model (256 experts, 16 active) from Meta, featuring Grouped Query Attention (GQA) with 128 layers and a hidden dimension of 16,384. With a 1,048,576-token context window, it supports tool use, vision, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it achieves MMLU 92, HumanEval 74, and GSM8K 97. The most cost-effective API deployment is via together at $16.00/M output tokens. For self-hosted inference, the B200 SXM delivers optimal throughput at $136,352/month.

Architecture Details

Type: MoE
Total Parameters: 2000B
Active Parameters: 400B
Layers: 128
Hidden Dimension: 16,384
Attention Heads: 128
KV Heads: 16
Head Dimension: 128
Vocab Size: 202,400
Total Experts: 256
Active Experts: 16

Memory Requirements

BF16 Weights

4000.0 GB

FP8 Weights

2000.0 GB

INT4 Weights

1000.0 GB

KV-Cache per Token: 4,194,304 bytes (4 MB)
Activation Estimate: 25.00 GB

This model requires multi-GPU deployment. Minimum: 4× B200 NVL (pair) at 360 GB each, with tensor parallelism.
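The weight totals in this section are just parameter count times bytes per parameter; a minimal sketch, assuming the page's convention of 1 GB = 10^9 bytes (its round 4000/2000/1000 figures imply it):

```python
# Weight memory at each precision (sketch; treats 1 GB as 1e9 bytes,
# matching the page's round 4000/2000/1000 GB figures).
PARAMS = 2000e9  # 2000B total parameters; all experts stay resident in VRAM

def weight_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

for name, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{name}: {weight_gb(PARAMS, bytes_per_param):.1f} GB")
```

Although only 400B parameters are active per token, all 2000B must be resident in memory, which is why the MoE design reduces compute per token but not the VRAM footprint.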

GPU Compatibility Matrix

Across 41 GPUs at 3 precision levels, Llama 4 Behemoth fits 0% of single-GPU configurations: no single card can hold even its INT4 weights, so every deployment requires multi-GPU sharding.

Precisions evaluated: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
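The matrix's verdicts reduce to comparing required memory against aggregate usable VRAM. A hypothetical helper (the 0.9 usable-VRAM fraction is an illustrative assumption, not the rubric behind the matrix's "tight"/"good" labels):

```python
# Rough fit check: does the required memory fit across n GPUs?
# (Illustrative; the 0.9 usable-VRAM fraction is an assumption, not
# the site's actual thresholds.)
def fits(required_gb: float, gpu_vram_gb: float, n_gpus: int = 1,
         usable_fraction: float = 0.9) -> bool:
    return required_gb <= n_gpus * gpu_vram_gb * usable_fraction

# No single GPU holds even the 1000 GB of INT4 weights:
print(fits(1000, 192))             # False, e.g. one 192 GB B100 SXM
# 32x 192 GB cards cover the 4000 GB of BF16 weights:
print(fits(4000, 192, n_gpus=32))  # True
```

This is why the matrix shows "No fit" everywhere at the single-GPU level: the largest card listed (B200 NVL pair, 360 GB) is still well short of the 1000 GB INT4 floor.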

GPU Recommendations

B200 SXM · good

BF16 · 32 GPUs · tensorrt-llm

63/100

score

Throughput

140.0 tok/s

Latency (ITL)

7.1ms

Est. TTFT

1ms

Cost/Month

$136,352

Cost/M Tokens

$370.60

B100 SXM · good

BF16 · 32 GPUs · tensorrt-llm

63/100

score

Throughput

140.0 tok/s

Latency (ITL)

7.1ms

Est. TTFT

1ms

Cost/Month

$136,656

Cost/M Tokens

$371.43

GB200 NVL72 (per GPU) · good

BF16 · 32 GPUs · tensorrt-llm

63/100

score

Throughput

140.0 tok/s

Latency (ITL)

7.1ms

Est. TTFT

1ms

Cost/Month

$197,392

Cost/M Tokens

$536.51

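The $/M-token figures in these cards appear to follow from the monthly price divided by monthly token capacity at the quoted throughput. A sketch under the assumption of a 730-hour month (8,760 h/yr ÷ 12), which reproduces the B200 SXM number:

```python
# Cost per million tokens from a fixed monthly price and sustained
# throughput (sketch; the 730-hour month is an assumption, chosen
# because it reproduces the $370.60/M quoted for the B200 SXM config).
HOURS_PER_MONTH = 730  # 8760 hours per year / 12

def cost_per_million(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * HOURS_PER_MONTH * 3600
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_million(136_352, 140.0), 2))  # 370.6
```

The same formula with the GB200 NVL72 figure ($197,392/month at the same 140 tok/s) lands on its higher $536.51/M: identical throughput, pricier hardware.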

Deployment Options

API

API Deployment

together

$16.00/M

output tokens

Self-Hosted

Single GPU

Requires multi-GPU setup (2000 GB VRAM needed)

Scale

Multi-GPU

B200 SXM x32

140.0 tok/s

TP · $136,352/mo

API Pricing Comparison

Provider: together (Cheapest) · Input: $5.00/M · Output: $16.00/M

Cost Analysis

Provider: together (Best Value) · Input: $5.00/M · Output: $16.00/M · ~$105/month

Cost per 1,000 Requests

Short (500 tok)

$5.70

via together

Medium (2K tok)

$22.80

via together

Long (8K tok)

$72.00

via together
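Per-request costs are token counts times the provider's per-million-token rates. A generic sketch (the 2,000-input/500-output split in the example is illustrative; the page does not state the input/output mix behind its request-size buckets):

```python
# Cost of a batch of API requests at per-million-token rates (sketch;
# the example token split is illustrative, not the page's exact mix).
def batch_cost(n_requests: int, in_tokens: int, out_tokens: int,
               in_per_m: float = 5.00,       # together's input rate
               out_per_m: float = 16.00) -> float:  # together's output rate
    return n_requests * (in_tokens * in_per_m + out_tokens * out_per_m) / 1e6

# 1,000 requests, each with 2,000 input + 500 output tokens:
print(batch_cost(1000, 2000, 500))  # 18.0 dollars
```

Because output tokens cost 3.2× input tokens here, generation-heavy workloads (long completions, chain-of-thought) dominate the bill even when prompts are large.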

Performance Estimates

Throughput by GPU

B200 SXM
140.0 tok/s
B100 SXM
140.0 tok/s
GB200 NVL72 (per GPU)
140.0 tok/s

VRAM Breakdown (B200 SXM, BF16)

Weights: 125.0 GB · KV-Cache: 17.2 GB · Activations: 200.0 GB · Overhead: 6.3 GB (per GPU, 32-way tensor parallelism)

Quality Benchmarks

Top 10%
99th percentile across all models
MMLU: 92.0 · Top 10% (98th percentile)
HumanEval: 74.0 · Top 10% (92nd percentile)
GSM8K: 97.0 · Top 10% (94th percentile)
MT-Bench: 92.0 · Bottom 25% (0th percentile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

Supported Precisions

BF16 (default)

Where to Deploy Llama 4 Behemoth

Similar Models

GPT-4.5 Preview

1500B params · moe

Quality: 93

from $150.00/M

Smaller context, more expensive

Kimi K2.5

1000B params · moe

Quality: 50

from $2.40/M

Smaller context, lower quality, cheaper

DeepSeek V3-0324

685B params · moe

Quality: 81

from $0.42/M

Smaller context, lower quality, cheaper, smaller model

DeepSeek R1

671B params · moe

Quality: 88

from $2.19/M

Smaller context, cheaper, smaller model

DeepSeek V3

671B params · moe

Quality: 81

from $0.42/M

Smaller context, lower quality, cheaper, smaller model

Frequently Asked Questions

How much VRAM does Llama 4 Behemoth need for inference?

Llama 4 Behemoth requires approximately 4000.0 GB of VRAM at BF16 precision, 2000.0 GB at FP8, or 1000.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (4,194,304 bytes per token) and activations (~25.00 GB).

What is the best GPU for Llama 4 Behemoth?

The top recommended GPU for Llama 4 Behemoth is the B200 SXM (x32) using BF16 precision. It achieves approximately 140.0 tokens/sec at an estimated cost of $136,352/month ($370.60/M tokens). Score: 63/100.

How much does Llama 4 Behemoth inference cost?

Llama 4 Behemoth API inference starts from $5.00/M input tokens and $16.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
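Those two cost models can be compared directly: self-hosting is a fixed monthly price, the API is purely per-token. A hedged sketch under the figures above (output tokens only; it ignores the cluster's throughput ceiling, utilization, and the API's input-token charges):

```python
# Break-even volume between a fixed-cost cluster and per-token API
# pricing (sketch; counts output tokens only, and ignores the cluster's
# actual throughput ceiling and the API's input-token charges).
def breakeven_million_tokens(monthly_cluster_usd: float,
                             api_usd_per_million: float) -> float:
    return monthly_cluster_usd / api_usd_per_million

be = breakeven_million_tokens(136_352, 16.00)
print(be)  # 8522.0 -> API stays cheaper below ~8.5B output tokens/month
```

At the quoted 140 tok/s, the 32-GPU B200 SXM config produces only about 368M tokens per 730-hour month, far below that break-even volume, which is consistent with the large gap between $16.00/M (API) and $370.60/M (self-hosted) elsewhere on this page.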