
Grok-2

xAI · MoE · 314B parameters · 131,072-token context

Quality
78.0

Parameters

314B

Context Window

128K tokens

Architecture

MoE

Best GPU

H200 SXM

Cheapest API

$10.00/M

Quality Score

78/100

Intelligence Brief

Grok-2 is a 314B-parameter Mixture-of-Experts model (8 experts, 2 active per token) from xAI, featuring Grouped Query Attention (GQA) across 64 layers with an 8,192-dimensional hidden state. It has a 131,072-token context window and supports tool use, vision, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores 87.5 on MMLU, 64.0 on HumanEval, and 93.0 on GSM8K. The most cost-effective API deployment is via xAI at $10.00/M output tokens. For self-hosted inference, a 4x H200 SXM configuration delivers optimal throughput at an estimated $10,211/month.

Architecture Details

Type: MoE
Total Parameters: 314B
Active Parameters: 50B
Layers: 64
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 131,072
Total Experts: 8
Active Experts: 2

Memory Requirements

BF16 Weights

628.0 GB

FP8 Weights

314.0 GB

INT4 Weights

157.0 GB

KV-Cache per Token: 262,144 bytes (256 KB)
Activation Estimate: 3.00 GB
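
Both the weight and KV-cache figures follow directly from the Architecture Details above; a minimal sketch of the arithmetic, using only numbers published on this page:

```python
# Grok-2 memory arithmetic from the Architecture Details above.
TOTAL_PARAMS = 314e9   # total parameters
LAYERS = 64
KV_HEADS = 8
HEAD_DIM = 128

# Weight memory scales with bytes per parameter.
for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision}: {TOTAL_PARAMS * bytes_per_param / 1e9:.1f} GB")
# BF16: 628.0 GB / FP8: 314.0 GB / INT4: 157.0 GB

# KV-cache per token at BF16: one K and one V vector per layer,
# each KV_HEADS * HEAD_DIM elements at 2 bytes per element.
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(kv_bytes_per_token)  # 262144 bytes, matching the figure above
```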

Fits on (multi-GPU with Tensor Parallelism)

Multi-GPU configurations use Tensor Parallelism (TP), which shards each layer's weight matrices across GPUs; NVLink or NVSwitch interconnect is required for optimal performance.
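
With vLLM (the framework listed under Supported Frameworks below), a tensor-parallel deployment might look like the sketch below. The model identifier is an assumption, not a verified value; substitute your actual checkpoint path:

```python
# Sketch: tensor-parallel serving of Grok-2 with vLLM on a 4-GPU node.
# The model identifier is hypothetical -- point it at your local weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="xai-org/grok-2",   # assumed identifier, not verified
    tensor_parallel_size=4,   # shard each layer's matrices across 4 GPUs
    quantization="fp8",       # FP8 weights: 314 GB total, 78.5 GB per GPU
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize tensor parallelism in one sentence."], params)
print(outputs[0].outputs[0].text)
```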

GPU Compatibility Matrix

Grok-2 is compatible with 7% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB
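
A rough way to read the matrix: a configuration fits when the quantized weights plus a cache/activation budget stay under the pooled VRAM. The check below is an assumed heuristic, not the site's exact scoring rubric (which also grades headroom from "Very tight" to "Excellent"):

```python
# Assumed fit heuristic for a multi-GPU configuration.
def fits(weights_gb: float, num_gpus: int, vram_per_gpu_gb: float,
         cache_budget_gb: float = 30.0) -> bool:
    """True if weights plus a KV-cache/activation budget fit in pooled VRAM."""
    return weights_gb + cache_budget_gb <= num_gpus * vram_per_gpu_gb

print(fits(314.0, 4, 141.0))  # FP8 on 4x H200 SXM -> True
print(fits(628.0, 4, 141.0))  # BF16 on 4x H200 SXM -> False
```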

GPU Recommendations

H200 SXM · optimal

FP8 · 4 GPUs · tensorrt-llm

Score

95/100

Throughput

280.0 tok/s

Latency (ITL)

3.6ms

Est. TTFT

1ms

Cost/Month

$10,211

Cost/M Tokens

$13.88

B100 SXM · optimal

FP8 · 2 GPUs · tensorrt-llm

Score

93/100

Throughput

280.0 tok/s

Latency (ITL)

3.6ms

Est. TTFT

1ms

Cost/Month

$8,541

Cost/M Tokens

$11.61

GB200 NVL72 (per GPU) · optimal

FP8 · 2 GPUs · tensorrt-llm

Score

93/100

Throughput

280.0 tok/s

Latency (ITL)

3.6ms

Est. TTFT

1ms

Cost/Month

$12,337

Cost/M Tokens

$16.77

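
The Cost/M Tokens figures above are consistent with dividing monthly cost by tokens generated over 730 hours per month of fully utilized serving. The 730 h/month utilization is an assumption, but it reproduces all three numbers:

```python
# Derive $/M tokens from throughput and monthly cost, assuming 730 h/mo
# of fully utilized serving.
def cost_per_m_tokens(cost_per_month: float, tok_per_s: float,
                      hours_per_month: float = 730) -> float:
    tokens_per_month = tok_per_s * 3600 * hours_per_month
    return cost_per_month / (tokens_per_month / 1e6)

for gpu, cost in [("H200 SXM x4", 10211), ("B100 SXM x2", 8541),
                  ("GB200 NVL72 x2", 12337)]:
    print(f"{gpu}: ${cost_per_m_tokens(cost, 280.0):.2f}/M tokens")
# H200 SXM x4: $13.88 / B100 SXM x2: $11.61 / GB200 NVL72 x2: $16.77
```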

Deployment Options

API

API Deployment

xAI

$10.00/M

output tokens

Self-Hosted

Single GPU

Not feasible: requires a multi-GPU setup (314 GB of VRAM needed at FP8)

Scale

Multi-GPU

H200 SXM x4

280.0 tok/s

TP · $10,211/mo

API Pricing Comparison

Provider  Input $/M  Output $/M  Badges
xAI       $2.00      $10.00      Cheapest

Cost Analysis

Provider          Input $/M  Output $/M  ~Monthly Cost
xAI (Best Value)  $2.00      $10.00      $60

Cost per 1,000 Requests

Short (500 tok)

$3.00

via xAI

Medium (2K tok)

$12.00

via xAI

Long (8K tok)

$36.00

via xAI
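
For the short and medium tiers, these figures match a 50/50 input/output token split at xAI's rates; the long tier implies a more input-heavy mix. A sketch under that 50/50 assumption:

```python
# Cost per 1,000 requests at xAI rates ($2.00/M input, $10.00/M output).
# The 50/50 input/output split is an assumption; it reproduces the short
# and medium tiers, while the long tier implies more input-heavy traffic.
def cost_per_1k_requests(tokens_per_request: int, input_frac: float = 0.5,
                         in_rate: float = 2.00, out_rate: float = 10.00) -> float:
    tokens = tokens_per_request * 1000  # 1,000 requests
    in_cost = tokens * input_frac * in_rate / 1e6
    out_cost = tokens * (1 - input_frac) * out_rate / 1e6
    return in_cost + out_cost

print(cost_per_1k_requests(500))    # 3.0  -> $3.00  (short)
print(cost_per_1k_requests(2000))   # 12.0 -> $12.00 (medium)
```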

Performance Estimates

Throughput by GPU

H200 SXM
280.0 tok/s
B100 SXM
280.0 tok/s
GB200 NVL72 (per GPU)
280.0 tok/s

VRAM Breakdown (H200 SXM, FP8)

Weights: 78.5 GB · KV-Cache: 2.1 GB · Activations: 24.0 GB · Overhead: 3.9 GB
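
Per GPU, this is the 314 GB of FP8 weights sharded four ways plus runtime buffers; a quick sanity check against the H200's 141 GB (the utilization percentage is derived here, not a published figure):

```python
# Per-GPU VRAM accounting for 4x H200 SXM at FP8 (141 GB per GPU).
weights = 314.0 / 4        # FP8 weights sharded 4-way -> 78.5 GB
kv_cache = 2.1             # figures from the breakdown above
activations = 24.0
overhead = 3.9
total = weights + kv_cache + activations + overhead
print(f"{total:.1f} GB of 141 GB ({total / 141:.0%} utilized)")
# 108.5 GB of 141 GB (77% utilized)
```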

Precision Impact

BF16: 157.0 GB weights/GPU

FP8: 78.5 GB weights/GPU · ~280.0 tok/s

Quality Benchmarks

Above Average
84th percentile across all models
MMLU
87.5
Above Average (86th pctile)
HumanEval
64.0
Above Average (76th pctile)
GSM8K
93.0
Above Average (81st pctile)
MT-Bench
88.0
Bottom 25% (0th pctile)

Capabilities

Features

Tool Use Vision Code Math Reasoning Multilingual Structured Output

Supported Frameworks

vllm

Supported Precisions

BF16 (default) · FP8


Similar Models

Grok-3

314B params · dense

Quality: 91

from $15.00/M

Higher quality

Grok 3

600B params · moe

Quality: 90

from $15.00/M

Higher quality · Larger model

Nemotron 340B

340B params · dense

Quality: 85

from $4.20/M

Higher quality · Cheaper

Frequently Asked Questions

How much VRAM does Grok-2 need for inference?

Grok-2 requires approximately 628.0 GB of VRAM at BF16 precision, 314.0 GB at FP8, or 157.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (262,144 bytes, i.e. 256 KB, per token) and activations (~3.00 GB).

What is the best GPU for Grok-2?

The top recommended configuration for Grok-2 is 4x H200 SXM running FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $10,211/month ($13.88/M tokens), with a recommendation score of 95/100.

How much does Grok-2 inference cost?

Grok-2 API inference starts from $2.00/M input tokens and $10.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.