
Grok-3 Mini

xAI · dense · 33B parameters · 131,072-token context

Quality: 78.0
Parameters: 33B
Context Window: 128K tokens
Architecture: Dense
Best GPU: H20
Cheapest API: $0.50/M
Quality Score: 78/100

Intelligence Brief

Grok-3 Mini is a 33B-parameter dense model from xAI, using Grouped Query Attention (GQA) across 64 layers with a 5,120-dimension hidden state. With a 131,072-token context window, it supports tool use, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 85, HumanEval 72, and GSM8K 88. The most cost-effective API deployment is via xAI at $0.50/M output tokens; for self-hosted inference, the H20 delivers optimal throughput at $940/month.

Architecture Details

Type: Dense
Total Parameters: 33B
Active Parameters: 33B
Layers: 64
Hidden Dimension: 5,120
Attention Heads: 40
KV Heads: 8
Head Dimension: 128
Vocab Size: 131,072
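As a sanity check, the weight footprints in the memory table below follow directly from the parameter count. A minimal sketch, assuming plain bytes-per-parameter storage (real quantized checkpoints carry small extra scale/zero-point metadata not modeled here):

```python
# Derive the weight-memory figures from the 33B parameter count.
PARAMS = 33e9  # total parameters

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{precision}: {gb:.1f} GB")
# BF16: 66.0 GB, FP8: 33.0 GB, INT4: 16.5 GB, matching the table below.
```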

Memory Requirements

BF16 Weights: 66.0 GB
FP8 Weights: 33.0 GB
INT4 Weights: 16.5 GB
KV-Cache per Token: 262,144 bytes
Activation Estimate: 2.00 GB
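The per-token KV-cache figure follows from the GQA geometry above. A minimal sketch, assuming a BF16 cache (2 bytes per element) and that both K and V are stored for every layer; with GQA, only the 8 KV heads are cached, not all 40 attention heads:

```python
# Reproduce the 262,144-byte KV-cache-per-token figure.
layers, kv_heads, head_dim = 64, 8, 128
bytes_per_elem = 2  # BF16

kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
print(kv_per_token)  # 262144 bytes (256 KB) per token

# At the full 131,072-token context, the cache for one sequence is large:
print(f"{kv_per_token * 131_072 / 1e9:.1f} GB")  # 34.4 GB
```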

GPU Compatibility Matrix

Grok-3 Mini fits in 57% of the evaluated configurations (41 GPUs at 3 precision levels each).

Precision levels: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94 GB (188 GB per pair) · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
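A rough sketch of the fit test behind a matrix like this, using the page's own memory figures. The 8K-token default context and the simple pass/fail threshold are assumptions, not the site's exact scoring rules:

```python
# Estimate whether the model fits a GPU at a given precision:
# weights + KV-cache + activations + framework overhead vs. VRAM.
WEIGHTS_GB = {"BF16": 66.0, "FP8": 33.0, "INT4": 16.5}
KV_GB_PER_1K_TOKENS = 0.262  # 262,144 bytes/token
ACTIVATIONS_GB = 2.0         # the page's activation estimate
OVERHEAD_GB = 1.7            # from the VRAM breakdown below

def fits(vram_gb: float, precision: str, context_tokens: int = 8192) -> bool:
    need = (WEIGHTS_GB[precision]
            + KV_GB_PER_1K_TOKENS * context_tokens / 1000
            + ACTIVATIONS_GB
            + OVERHEAD_GB)
    return need <= vram_gb

print(fits(96, "FP8"))   # H20:  True
print(fits(48, "INT4"))  # L40S: True
print(fits(48, "BF16"))  # L40S: False, BF16 needs a second GPU
```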

GPU Recommendations

H20 (optimal): FP8 · 1 GPU · tensorrt-llm · score 100/100
Throughput: 1.0K tok/s · Latency (ITL): 1.0 ms · Est. TTFT: 0 ms
Cost: $940/month · $0.35/M tokens

H200 SXM (optimal): FP8 · 1 GPU · tensorrt-llm · score 95/100
Throughput: 1.1K tok/s · Latency (ITL): 1.0 ms · Est. TTFT: 0 ms
Cost: $2,553/month · $0.93/M tokens

H100 SXM (optimal): FP8 · 1 GPU · tensorrt-llm · score 95/100
Throughput: 849.3 tok/s · Latency (ITL): 1.2 ms · Est. TTFT: 0 ms
Cost: $1,794/month · $0.80/M tokens
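The $/M-token figures above are essentially monthly GPU cost divided by monthly token throughput. A sketch under a 100%-utilization, 30-day-month assumption; the small gaps to the listed numbers suggest the page uses slightly different utilization or hours-per-month accounting:

```python
# Convert a monthly GPU price and sustained throughput into $/M tokens.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"{cost_per_m_tokens(940, 1000):.2f}")    # H20:      0.36 (listed 0.35)
print(f"{cost_per_m_tokens(2553, 1100):.2f}")   # H200 SXM: 0.90 (listed 0.93)
print(f"{cost_per_m_tokens(1794, 849.3):.2f}")  # H100 SXM: 0.81 (listed 0.80)
```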

Deployment Options

API Deployment
xAI: $0.50/M output tokens
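For API access, xAI exposes an OpenAI-compatible endpoint. A minimal sketch using the openai client; the `grok-3-mini` model id is an assumption to verify against xAI's current model listing:

```python
# Call Grok-3 Mini via xAI's OpenAI-compatible API.
# Requires `pip install openai` and an XAI_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)

response = client.chat.completions.create(
    model="grok-3-mini",  # assumed model id
    messages=[{"role": "user", "content": "Summarize GQA in one sentence."}],
)
print(response.choices[0].message.content)
```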

Self-Hosted (Single GPU)
H20: $940/mo · Min VRAM: 33 GB

Scale (Multi-GPU)
RTX A6000 ×2: 110.6 tok/s · tensor parallel (TP) · $930/mo
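For self-hosting with vLLM (the framework listed under Supported Frameworks below), a sketch of the two-GPU FP8 configuration from the Scale row. The checkpoint path is hypothetical, since weight availability isn't covered on this page, and the flags are illustrative rather than verified settings for this model:

```python
# Serve the model with vLLM across two 48 GB RTX A6000s via tensor parallelism.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/models/grok-3-mini",  # hypothetical local checkpoint path
    quantization="fp8",           # FP8 weights: ~33 GB instead of 66 GB
    tensor_parallel_size=2,       # split layers across the two GPUs
)

outputs = llm.generate(
    ["Explain tensor parallelism in two sentences."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```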

API Pricing Comparison

Provider | Input $/M | Output $/M | Badges
xAI | $0.30 | $0.50 | Cheapest

Cost Analysis

Provider | Input $/M | Output $/M | ~Monthly Cost
xAI (Best Value) | $0.30 | $0.50 | $4

Cost per 1,000 Requests

Short (500 tok): $0.25 via xAI
Medium (2K tok): $1.00 via xAI
Long (8K tok): $3.40 via xAI
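These per-1,000-request figures follow from the listed rates. A sketch where the input/output split is an assumption, since the page doesn't state it: the short and medium figures match output-only billing, and the long figure only works out if some tokens are billed at the cheaper input rate:

```python
# Reproduce the per-1,000-request costs from xAI's listed rates.
INPUT_USD_PER_M, OUTPUT_USD_PER_M = 0.30, 0.50

def cost_per_1k_requests(input_tok: int, output_tok: int) -> float:
    """USD for 1,000 requests with the given per-request token counts."""
    per_request = (input_tok * INPUT_USD_PER_M
                   + output_tok * OUTPUT_USD_PER_M) / 1e6
    return per_request * 1000

print(f"{cost_per_1k_requests(0, 500):.2f}")      # 0.25 -> "Short"
print(f"{cost_per_1k_requests(0, 2000):.2f}")     # 1.00 -> "Medium"
print(f"{cost_per_1k_requests(3000, 5000):.2f}")  # 3.40 -> "Long" (assumed split)
```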

Performance Estimates

Throughput by GPU

H20: 1.0K tok/s
H200 SXM: 1.1K tok/s
H100 SXM: 849.3 tok/s

VRAM Breakdown (H20, FP8)

Weights: 33.0 GB · KV-Cache: 2.1 GB · Activations: 16.0 GB · Overhead: 1.7 GB

Precision Impact

BF16: 66.0 GB weights/GPU
FP8: 33.0 GB weights/GPU · ~1.0K tok/s

Quality Benchmarks

Above Average: 84th percentile across all models
MMLU: 85.0 · Above Average (75th percentile)
HumanEval: 72.0 · Top 10% (91st percentile)
GSM8K: 88.0 · Average (63rd percentile)

Capabilities

Features: Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks: vLLM

Supported Precisions: BF16 (default) · FP8


Similar Models


Qwen 3 32B: 32.8B params · dense · Quality: 74 · from $0.80/M · More expensive

Frequently Asked Questions

How much VRAM does Grok-3 Mini need for inference?

Grok-3 Mini requires approximately 66.0 GB of VRAM at BF16 precision, 33.0 GB at FP8, or 16.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (262,144 bytes, or 256 KB, per token) and activations (~2.00 GB).

What is the best GPU for Grok-3 Mini?

The top recommended GPU for Grok-3 Mini is the H20 using FP8 precision. It achieves approximately 1.0K tokens/sec at an estimated cost of $940/month ($0.35/M tokens). Score: 100/100.

How much does Grok-3 Mini inference cost?

Grok-3 Mini API inference starts from $0.30/M input tokens and $0.50/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.