Yi-Large
01.AI · MoE · 102.6B parameters · 32,768-token context
Parameters
102.6B
Context Window
32K tokens
Architecture
MoE
Best GPU
B200 SXM
Cheapest API
$3.00/M
Quality Score
74/100
Intelligence Brief
Yi-Large is a 102.6B-parameter Mixture-of-Experts model (32 experts, 4 active) from 01.AI, featuring Grouped Query Attention (GQA) across 64 layers with an 8,192 hidden dimension. With a 32,768-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 78, HumanEval 47, and GSM8K 82. The most cost-effective API deployment is via 01ai at $3.00/M output tokens; for self-hosted inference, the B200 SXM delivers the best throughput at $4261/month.
Architecture Details
Memory Requirements
BF16 Weights
205.2 GB
FP8 Weights
102.6 GB
INT4 Weights
51.3 GB
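The three figures above are simply the parameter count times bytes per parameter; note that an MoE model holds all 32 experts in VRAM even though only 4 are active per token. A minimal sketch:

```python
# Weight memory = parameter count x bytes per parameter. All 32 experts
# stay resident in VRAM even though only 4 are active per token.
PARAMS = 102.6e9

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, bpp in BYTES_PER_PARAM.items():
    gb = PARAMS * bpp / 1e9  # decimal GB, matching the table above
    print(f"{precision}: {gb:.1f} GB")  # BF16: 205.2, FP8: 102.6, INT4: 51.3
```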
GPU Compatibility Matrix
Yi-Large is compatible with 21% of the evaluated GPU configurations (41 GPUs × 3 precision levels).
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 1 GPU · tensorrt-llm (B200 SXM) | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $4261 | $5.79 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $4271 | $5.80 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6ms | 1ms | $6169 | $8.38 |
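The Cost/M Tokens column follows directly from monthly cost and sustained throughput. A minimal sketch, assuming 100% utilization and ~730.5 hours per month (365.25/12 days); neither assumption is stated on this page:

```python
HOURS_PER_MONTH = 730.5  # 365.25 days / 12 -- an assumption, not stated here

def cost_per_million_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * 3600 * HOURS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_million_tokens(4261, 280.0):.2f}")  # $5.79, first row above
```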
Deployment Options
API Deployment
01ai
$3.00/M
output tokens
Single GPU
B200 SXM
$4261/mo
Min VRAM: 103 GB
Multi-GPU
H100 SXM x2
280.0 tok/s
Tensor parallel · $3587/mo
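A quick sanity check on the 2× H100 SXM option: FP8 weights shard across the two tensor-parallel ranks, and the KV cache for the full 32K context fits alongside. A rough sketch; the 80 GB H100 capacity and even KV-cache sharding are assumptions, and the ~2.5 GB activation figure (from the FAQ below) is counted in full per GPU, which is conservative:

```python
WEIGHTS_FP8_GB = 102.6
KV_BYTES_PER_TOKEN = 262_144   # 256 KB/token across all layers (see FAQ)
ACTIVATIONS_GB = 2.5           # per-GPU estimate from the FAQ (conservative)
TP = 2                         # tensor-parallel degree
GPU_VRAM_GB = 80.0             # H100 SXM (assumed 80 GB variant)

def per_gpu_usage_gb(context_tokens: int) -> float:
    kv_gb = context_tokens * KV_BYTES_PER_TOKEN / 1e9
    # weights and KV cache shard across ranks; activations counted in full
    return (WEIGHTS_FP8_GB + kv_gb) / TP + ACTIVATIONS_GB

print(f"{per_gpu_usage_gb(32_768):.1f} GB of {GPU_VRAM_GB:.0f} GB per GPU")  # ~58.1 GB, fits
```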
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| 01ai | $3.00 | $3.00 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| 01ai (Best Value) | $3.00 | $3.00 | $30 |
Cost per 1,000 Requests
Short (500 tok)
$2.10
via 01ai
Medium (2K tok)
$8.40
via 01ai
Long (8K tok)
$30.00
via 01ai
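These figures follow from 01ai's flat $3.00/M rate once an input length is assumed for each tier. A sketch; the input lengths (200 / 800 / 2,000 tokens) are assumptions chosen to reproduce the published totals:

```python
PRICE_IN = PRICE_OUT = 3.00  # $/M tokens via 01ai

def cost_per_1k_requests(input_tok: int, output_tok: int) -> float:
    per_request = (input_tok * PRICE_IN + output_tok * PRICE_OUT) / 1e6
    return per_request * 1000

print(f"${cost_per_1k_requests(200, 500):.2f}")    # $2.10  -> Short
print(f"${cost_per_1k_requests(800, 2000):.2f}")   # $8.40  -> Medium
print(f"${cost_per_1k_requests(2000, 8000):.2f}")  # $30.00 -> Long
```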
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, FP8)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 205.2 GB | — |
| FP8 | 102.6 GB | ~280.0 tok/s |
| INT4 | 51.3 GB | — |
Quality Benchmarks
| Benchmark | Score |
|---|---|
| MMLU | 78 |
| HumanEval | 47 |
| GSM8K | 82 |
Capabilities
Tool use · Structured output · Code · Math · Multilingual
Supported Frameworks
tensorrt-llm
Supported Precisions
BF16 · FP8 · INT4
Similar Models
| Model | Params · Arch | Quality | Cheapest API |
|---|---|---|---|
| Yi-Lightning | 200B · MoE | 50 | from $0.99/M |
| Command R+ | 104B · dense | 68 | from $2.00/M |
| Inflection 3 | 100B · dense | 74 | from $15.00/M |
| YaLM 100B | 100B · dense | 50 | — |
| Llama 4 Scout | 109B · MoE | 73 | from $0.30/M |
Frequently Asked Questions
How much VRAM does Yi-Large need for inference?
Yi-Large requires approximately 205.2 GB of VRAM at BF16 precision, 102.6 GB at FP8, or 51.3 GB at INT4 quantization. Additional VRAM is needed for the KV cache (262,144 bytes, i.e. 256 KB, per token) and activations (~2.5 GB).
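The 262,144-byte figure is consistent with GQA over 64 layers. A minimal sketch; the 8 KV heads and 128-dim heads are assumptions that match the total (the layer count and 8,192 hidden size come from the spec):

```python
LAYERS = 64      # from the spec
KV_HEADS = 8     # assumed GQA key/value head count
HEAD_DIM = 128   # assumed (8,192 hidden / 64 query heads would also give 128)
BYTES = 2        # FP16/BF16 cache

kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES  # 2 = key + value
print(kv_per_token)  # 262144 bytes = 256 KB per token
```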
What is the best GPU for Yi-Large?
The top recommended GPU for Yi-Large is the B200 SXM using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $4261/month ($5.79/M tokens). Score: 100/100.
How much does Yi-Large inference cost?
Yi-Large API inference starts from $3.00/M input tokens and $3.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
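A rough break-even sketch between the two options, assuming the flat $3.00/M rate and ignoring ops overhead:

```python
API_PRICE_PER_M = 3.00      # $/M tokens (input and output priced the same)
SELF_HOSTED_MONTHLY = 4261  # B200 SXM recommendation above

breakeven_m = SELF_HOSTED_MONTHLY / API_PRICE_PER_M
print(f"~{breakeven_m:.0f}M tokens/month")  # ~1420M: below this, the API is cheaper
```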