MiniMax

MiniMax-Text-01

MiniMax · MoE · 456B parameters · 1,048,576-token context

Quality: 50.0

MiniMax-Text-01 is a 456B-parameter Mixture-of-Experts (MoE) model from MiniMax with 45.9B active parameters per forward pass and a 1,048,576-token context window. With 32 experts and 2 active per token, it achieves strong parameter efficiency while maintaining competitive quality scores. Based on InferenceBench analysis, the optimal deployment configuration is the B200 NVL (pair) (x2) at FP8 precision, achieving approximately 280.0 tokens/second at $27.08 per million tokens.

Architecture Details

Type: MoE
Total Parameters: 456B
Active Parameters: 45.9B
Layers: 80
Hidden Dimension: 6,144
Attention Heads: 48
KV Heads: 8
Head Dimension: 128
Vocab Size: 200,064
Total Experts: 32
Active Experts: 2
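The page's per-token KV-cache figure follows from these shape parameters. A minimal sketch, assuming a standard attention KV layout and a 1-byte (FP8) cache element; MiniMax-Text-01's hybrid attention may cache differently in practice, so treat this as an approximation:

```python
# Rough KV-cache estimate per token from the shape parameters above.
# Assumes every layer caches K and V at 1 byte per element (FP8);
# an approximation, not the model's exact runtime layout.
layers = 80
kv_heads = 8
head_dim = 128
bytes_per_elem = 1  # FP8 cache; a BF16 cache would double this

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(kv_bytes_per_token)  # 163840, the per-token figure quoted on this page
```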

Memory Requirements

BF16 Weights

912.0 GB

FP8 Weights

456.0 GB

INT4 Weights

228.0 GB

KV-Cache per Token: 163,840 bytes
Activation Estimate: 3.00 GB
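The weight figures above are bytes-per-parameter arithmetic; a sketch that also extrapolates the per-token KV-cache to the full 1,048,576-token context (decimal gigabytes are assumed, matching this page's units):

```python
# Weight memory at each precision: parameters x bytes per parameter.
TOTAL_PARAMS = 456e9
GB = 1e9  # decimal gigabytes, as used on this page

for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision}: {TOTAL_PARAMS * bytes_per_param / GB:.1f} GB")
# BF16: 912.0 GB, FP8: 456.0 GB, INT4: 228.0 GB

# KV-cache at the full context length, from the per-token figure above.
kv_full_context_gb = 163_840 * 1_048_576 / GB
print(f"{kv_full_context_gb:.1f} GB")  # ~171.8 GB on top of the weights
```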

Fits on (single-node)

At INT4: B200 NVL (pair) · B300 · B200 SXM x2 · B100 SXM x2 · GB200 NVL72 (per GPU) x2 · GB300 NVL72 (per GPU) x2 · H200 SXM x2 · H100 NVL 94GB (per GPU pair) x2

GPU Recommendations

B200 NVL (pair) (optimal)

FP8 · 2 GPUs · tensorrt-llm

Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $19,929
Cost/M Tokens: $27.08
B200 SXM (optimal)

FP8 · 4 GPUs · tensorrt-llm

Score: 98/100
Throughput: 280.0 tok/s
Cost/Month: $17,044
Cost/M Tokens: $23.16
B100 SXM (optimal)

FP8 · 4 GPUs · tensorrt-llm

Score: 98/100
Throughput: 280.0 tok/s
Cost/Month: $17,082
Cost/M Tokens: $23.21
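The Cost/M Tokens figures above are consistent with monthly cost divided by the tokens generated in an average month (365/12 days) at sustained throughput. A sketch of that arithmetic, assuming full utilization:

```python
# Derive $/M tokens from monthly cost and sustained throughput.
# Assumes 100% utilization and an average month of 365/12 days.
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 86_400 * (365 / 12)
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_m_tokens(19_929, 280.0), 2))  # 27.08 (B200 NVL pair)
print(round(cost_per_m_tokens(17_044, 280.0), 2))  # 23.16 (B200 SXM)
```

Real deployments rarely sustain peak throughput around the clock, so effective cost per token scales up inversely with utilization.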

API Pricing Comparison

Provider: minimax · Input $1.00/M · Output $5.00/M · Cheapest
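At these rates, per-request API cost is linear in token counts; a sketch, with the token counts as illustrative inputs:

```python
# Per-request API cost at the listed minimax rates:
# $1.00 per million input tokens, $5.00 per million output tokens.
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float = 1.00, out_price: float = 5.00) -> float:
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

print(request_cost(100_000, 10_000))  # 0.15 dollars for 100K in / 10K out
```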

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang

Supported Precisions

BF16 · FP8 (default) · INT4

Frequently Asked Questions

How much VRAM does MiniMax-Text-01 need for inference?

MiniMax-Text-01 requires approximately 912.0 GB of VRAM at BF16 precision, 456.0 GB at FP8, or 228.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (163,840 bytes per token) and activations (~3.00 GB).

What is the best GPU for MiniMax-Text-01?

The top recommended GPU for MiniMax-Text-01 is the B200 NVL (pair) (x2) using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $19,929/month ($27.08/M tokens). Score: 100/100.

How much does MiniMax-Text-01 inference cost?

MiniMax-Text-01 API inference starts from $1.00/M input tokens and $5.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.