
Yi-Lightning

01.AI · moe · 200B parameters · 16,384 context

Quality: 50.0

Yi-Lightning is a 200B-parameter Mixture-of-Experts (MoE) model from 01.AI with 22B active parameters per forward pass and a 16,384-token context window. With 32 experts and 4 active per token, it achieves strong parameter efficiency while maintaining competitive quality scores. Based on InferenceBench analysis, the optimal deployment configuration is the B200 SXM (x2) at FP8 precision, achieving approximately 280.0 tokens/second at $11.58/million tokens.
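The $/M-token figure follows directly from throughput and monthly GPU cost. A minimal sketch, assuming a 730-hour billing month and full sustained utilization (both assumptions, not stated by InferenceBench):

```python
# Rough sanity check of the cost-per-million-tokens figure.
# Assumes sustained 280 tok/s and a 730-hour month (assumptions).
throughput_tok_s = 280.0
cost_per_month_usd = 8522.0
hours_per_month = 730

tokens_per_month = throughput_tok_s * 3600 * hours_per_month  # ~735.8M tokens
cost_per_m_tokens = cost_per_month_usd / (tokens_per_month / 1e6)
print(f"${cost_per_m_tokens:.2f}/M tokens")  # → $11.58/M tokens
```

Any idle time raises the effective $/M-token cost proportionally, since the GPUs bill by the hour regardless of load.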

Architecture Details

Type: MoE
Total Parameters: 200B
Active Parameters: 22B
Layers: 64
Hidden Dimension: 6,144
Attention Heads: 48
KV Heads: 8
Head Dimension: 128
Vocab Size: 64,000
Total Experts: 32
Active Experts: 4
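The KV-cache-per-token figure quoted in the memory section can be reconstructed from these architecture numbers. It only matches at one byte per element (e.g. an FP8 KV cache), which is my inference, not something the page states; a BF16 cache would double it:

```python
# KV cache per token = K and V vectors for every layer:
# 2 (K+V) x layers x kv_heads x head_dim x bytes_per_element
layers, kv_heads, head_dim = 64, 8, 128
bytes_per_element = 1  # assumption: FP8/INT8 KV cache; BF16 would be 2

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_element
print(kv_bytes_per_token)  # → 131072
```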

Memory Requirements

BF16 Weights: 400.0 GB
FP8 Weights: 200.0 GB
INT4 Weights: 100.0 GB
KV-Cache per Token: 131,072 bytes (128 KiB)
Activation Estimate: 2.00 GB
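A back-of-envelope total for serving one full-context request at FP8, combining the figures above. This is a sketch: real engines add allocator overhead, CUDA graphs, and per-request KV growth, so treat it as a floor, not a budget:

```python
# Total VRAM estimate: FP8 weights + full-context KV cache + activations.
weights_gb = 200.0              # FP8 weights, from the table above
kv_per_token_bytes = 131_072    # from the table above
context_tokens = 16_384
activations_gb = 2.0

kv_gb = kv_per_token_bytes * context_tokens / 1e9  # ~2.15 GB per sequence
total_gb = weights_gb + kv_gb + activations_gb
print(f"{total_gb:.1f} GB")  # → 204.1 GB for one max-length sequence
```

Each additional concurrent max-length sequence adds roughly another 2.15 GB of KV cache.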

Fits on (single-node)

B200 SXM (INT4) · B100 SXM (INT4) · GB200 NVL72, per GPU (INT4) · GB300 NVL72, per GPU (INT4) · H200 SXM (INT4) · H100 NVL 94GB, per GPU pair (INT4) · Instinct MI300X (INT4) · Instinct MI325X (FP8)
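The single-node list above reduces to a simple capacity check. The per-GPU VRAM figures below (192 GB for B200, 256 GB for MI325X) and the fixed overhead are my assumptions for illustration, not values from this page:

```python
# Does a given weight footprint fit on a single node of N GPUs?
def fits(weights_gb: float, gpu_vram_gb: float, n_gpus: int,
         overhead_gb: float = 10.0) -> bool:
    """Leave headroom for KV cache and activations (overhead is a guess)."""
    return weights_gb + overhead_gb <= gpu_vram_gb * n_gpus

print(fits(100.0, 192.0, 1))  # INT4 on one B200 (assumed 192 GB) → True
print(fits(200.0, 256.0, 1))  # FP8 on one MI325X (assumed 256 GB) → True
print(fits(200.0, 192.0, 1))  # FP8 on one B200 → False
```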

GPU Recommendations

B200 SXM (optimal)

FP8 · 2 GPUs · tensorrt-llm
Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $8,522
Cost/M Tokens: $11.58

B100 SXM (optimal)

FP8 · 2 GPUs · tensorrt-llm
Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $8,541
Cost/M Tokens: $11.61

GB200 NVL72, per GPU (optimal)

FP8 · 2 GPUs · tensorrt-llm
Score: 100/100
Throughput: 280.0 tok/s
Cost/Month: $12,337
Cost/M Tokens: $16.77

API Pricing Comparison

Provider | Input $/M | Output $/M | Badges
01ai     | $0.99     | $0.99     | Cheapest
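API spend for a concrete workload follows from the prices above. A minimal sketch with a hypothetical monthly volume (the 50M/10M split is an illustration, not a quote):

```python
# Monthly API cost = input_M_tokens * input_price + output_M_tokens * output_price.
input_price_per_m = 0.99   # $/M input tokens, from the table above
output_price_per_m = 0.99  # $/M output tokens

input_m_tokens, output_m_tokens = 50.0, 10.0  # hypothetical monthly volume
cost = input_m_tokens * input_price_per_m + output_m_tokens * output_price_per_m
print(f"${cost:.2f}/month")  # → $59.40/month
```

At these prices the API undercuts the self-hosted $11.58/M-token figure unless you need dedicated capacity or can share the GPUs across workloads.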

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · SGLang

Supported Precisions

BF16 (default) · FP8 · INT4

Frequently Asked Questions

How much VRAM does Yi-Lightning need for inference?

Yi-Lightning requires approximately 400.0 GB of VRAM at BF16 precision, 200.0 GB at FP8, or 100.0 GB at INT4 quantization. Additional VRAM is needed for KV-cache (131,072 bytes per token) and activations (~2.00 GB).

What is the best GPU for Yi-Lightning?

The top recommended GPU for Yi-Lightning is the B200 SXM (x2) using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $8,522/month ($11.58/M tokens). Score: 100/100.

How much does Yi-Lightning inference cost?

Yi-Lightning API inference starts from $0.99/M input tokens and $0.99/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.