
o1

OpenAI · MoE · 200B parameters · 200K-token context

Quality
93.0

Parameters

200B

Context Window

200K tokens

Architecture

MoE

Best GPU

B200 SXM

Cheapest API

$60.00/M

Quality Score

93/100

Intelligence Brief

o1 is a 200B-parameter Mixture-of-Experts model (16 experts, 2 active) from OpenAI, using Grouped Query Attention (GQA) across 80 layers with a hidden dimension of 10,240. With a 200,000-token context window, it supports tool use, vision, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores 92.3 on MMLU, 83.4 on HumanEval, and 98.0 on GSM8K. The most cost-effective API deployment is via openai at $60.00/M output tokens; for self-hosted inference, a 4× B200 SXM setup delivers optimal throughput at $17,044/month.

Architecture Details

Type: MoE
Total Parameters: 200B
Active Parameters: 50B
Layers: 80
Hidden Dimension: 10,240
Attention Heads: 80
KV Heads: 10
Head Dimension: 128
Vocab Size: 200,000
Total Experts: 16
Active Experts: 2
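
A quick way to check that these figures hang together is via standard transformer identities: hidden dimension = attention heads × head dimension, and the GQA ratio = query heads per KV head. The sketch below uses only values from the table; note that the active-parameter fraction (25%) exceeds the active-expert ratio (2/16) because attention and embedding weights are always active.

```python
# Sanity-check the o1 architecture figures above (all inputs from the table).

layers = 80
hidden_dim = 10_240
attn_heads = 80
kv_heads = 10
head_dim = 128
total_params, active_params = 200e9, 50e9
total_experts, active_experts = 16, 2

# Hidden dimension equals attention heads times head dimension.
assert attn_heads * head_dim == hidden_dim  # 80 * 128 == 10,240

# GQA: 8 query heads share each KV head, shrinking the KV-cache 8x vs MHA.
print("query heads per KV head:", attn_heads // kv_heads)  # 8

# MoE: only 2 of 16 experts fire per token, yet 25% of parameters stay
# active, since attention and embeddings are shared across experts.
print("active experts:", f"{active_experts}/{total_experts}")
print("active parameter fraction:", active_params / total_params)  # 0.25
```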

Memory Requirements

BF16 Weights

400.0 GB

FP8 Weights

200.0 GB

INT4 Weights

100.0 GB

KV-Cache per Token: 204,800 bytes
Activation Estimate: 4.00 GB
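
These figures follow from the parameter count and attention shape: weight memory is parameters × bytes per weight, and per-token KV-cache is 2 (K and V) × layers × KV heads × head dim × bytes per element. The page does not state the KV-cache precision; as the sketch below shows, the quoted 204,800 bytes/token matches 1-byte (FP8) cache entries.

```python
# Reproduce the memory figures above. Bytes-per-weight and the KV-cache
# element size are the only assumptions.

PARAMS = 200e9
for name, bytes_per_weight in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name} weights: {PARAMS * bytes_per_weight / 1e9:.1f} GB")
# BF16: 400.0 GB, FP8: 200.0 GB, INT4: 100.0 GB -- matches the table.

# Per-token KV-cache: 2 tensors (K and V) per layer, one per KV head.
layers, kv_heads, head_dim = 80, 10, 128
for name, elem_bytes in [("BF16", 2), ("FP8", 1)]:
    per_token = 2 * layers * kv_heads * head_dim * elem_bytes
    print(f"{name} KV-cache: {per_token:,} bytes/token")
# FP8 gives 204,800 bytes/token, the figure quoted above.
```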

GPU Compatibility Matrix

o1 is compatible with 8% of the tested GPU configurations (41 GPUs at 3 precision levels: BF16, FP8, and INT4).

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94 GB (188 GB per pair) · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB
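
A minimal fit check behind a matrix like this: pooled VRAM must cover the weights plus headroom for KV-cache, activations, and runtime overhead. The 1.3× headroom factor below is an assumption; real deployments also tend to round GPU counts up to a power of two for tensor parallelism, which is consistent with the 4× B200 recommendation that follows.

```python
# Minimal "does it fit" sketch; the 1.3x headroom factor is an assumption.
import math

WEIGHTS_GB = {"BF16": 400.0, "FP8": 200.0, "INT4": 100.0}  # from the table above

def min_gpus(precision: str, vram_gb: float, headroom: float = 1.3) -> int:
    """Smallest GPU count whose pooled VRAM covers weights plus headroom."""
    return math.ceil(WEIGHTS_GB[precision] * headroom / vram_gb)

print(min_gpus("BF16", 192))  # 3 on 192 GB cards; deployments round up to 4
print(min_gpus("FP8", 141))   # 2 on H200 SXM (141 GB)
print(min_gpus("INT4", 80))   # 2 on A100 80GB
```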

GPU Recommendations

B200 SXM · optimal
BF16 · 4 GPUs · tensorrt-llm · score 93/100

Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $17,044
Cost/M Tokens: $23.16

B100 SXM · optimal
BF16 · 4 GPUs · tensorrt-llm · score 93/100

Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $17,082
Cost/M Tokens: $23.21

B200 NVL (pair) · optimal
BF16 · 2 GPUs · tensorrt-llm · score 93/100

Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $19,929
Cost/M Tokens: $27.08
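
The Cost/M Tokens figures follow directly from monthly cost and sustained throughput. The sketch below reproduces them under two assumptions the page does not state explicitly: 100% utilization and an average month of 365/12 days.

```python
# Derive Cost/M Tokens from monthly cost and throughput. Assumes full
# utilization and a 365/12-day month (the combination that reproduces
# the page's figures).

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    seconds_per_month = 86_400 * 365 / 12  # ~30.42 days
    tokens_per_month = tok_per_s * seconds_per_month
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_m_tokens(17_044, 280.0), 2))  # 23.16  B200 SXM x4
print(round(cost_per_m_tokens(17_082, 280.0), 2))  # 23.21  B100 SXM x4
print(round(cost_per_m_tokens(19_929, 280.0), 2))  # 27.08  B200 NVL pair
```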

Deployment Options

API Deployment: openai · $60.00/M output tokens

Self-Hosted:
Single GPU: requires a multi-GPU setup (200 GB VRAM needed)
Scale: Multi-GPU · B200 SXM ×4 · 280.0 tok/s · tensor parallel (TP) · $17,044/mo

API Pricing Comparison

Provider | Input $/M | Output $/M | Badges
openai | $15.00 | $60.00 | Cheapest

Cost Analysis

Provider | Input $/M | Output $/M | ~Monthly Cost
openai (Best Value) | $15.00 | $60.00 | $375
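
One number worth deriving from this table: the volume at which self-hosting undercuts the API. Dividing the recommended config's monthly cost by the API output price gives a rough break-even; it ignores input tokens and utilization, so treat it as an illustrative lower bound.

```python
# Rough break-even between the API and the 4x B200 config above. Uses the
# output price only, not a full model of the input/output mix.

API_OUTPUT_PER_M = 60.00     # $/M output tokens (openai)
SELF_HOST_MONTHLY = 17_044   # $/month, B200 SXM x4

breakeven = SELF_HOST_MONTHLY / API_OUTPUT_PER_M
print(f"break-even: ~{breakeven:.0f}M output tokens/month")  # ~284M
```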

Cost per 1,000 Requests

Short (500 tok)

$19.50

via openai

Medium (2K tok)

$78.00

via openai

Long (8K tok)

$240.00

via openai
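
For workloads with a different request shape than these presets, the same arithmetic generalizes. The page does not state its assumed input/output split per request, so the split in the sketch below is an illustrative assumption and will not exactly match the preset figures.

```python
# Cost per 1,000 requests from the openai pricing above. The input/output
# split per request is an assumed, illustrative value.

INPUT_PER_M, OUTPUT_PER_M = 15.00, 60.00  # $/M tokens

def cost_per_1k_requests(input_tok: int, output_tok: int) -> float:
    per_request = (input_tok * INPUT_PER_M + output_tok * OUTPUT_PER_M) / 1e6
    return 1_000 * per_request

# e.g. a medium request assumed to be 1,500 input + 500 output tokens:
print(f"${cost_per_1k_requests(1_500, 500):.2f}")  # $52.50 per 1,000 requests
```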

Performance Estimates

Throughput by GPU

B200 SXM: 280.0 tok/s
B100 SXM: 280.0 tok/s
B200 NVL (pair): 280.0 tok/s

VRAM Breakdown (B200 SXM, BF16)

Weights: 100.0 GB · KV-Cache: 6.7 GB · Activations: 32.0 GB · Overhead: 5.0 GB
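
The breakdown appears to be per GPU: 400 GB of BF16 weights split across the 4-GPU config gives the 100 GB shown. Summed against the B200 SXM's 192 GB capacity (the GPU's published spec), the budget looks like this:

```python
# Per-GPU VRAM budget from the breakdown above, checked against a 192 GB
# B200 SXM. Component figures are taken directly from the page.

components = {"weights": 100.0, "kv_cache": 6.7, "activations": 32.0, "overhead": 5.0}
used = sum(components.values())
capacity_gb = 192.0
print(f"used {used:.1f} GB of {capacity_gb:.0f} GB "
      f"({used / capacity_gb:.0%}), leaving headroom for longer contexts")
# used 143.7 GB of 192 GB (75%)
```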

Quality Benchmarks

Top 10% overall (99th percentile across all models)

MMLU: 92.3 · Top 10% (99th pctile)
HumanEval: 83.4 · Top 10% (98th pctile)
GSM8K: 98.0 · Top 10% (99th pctile)
MT-Bench: 91.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Precisions

BF16 (default)

Similar Models

o1-mini
70B params · dense · Quality: 83 · from $12.00/M
Lower quality · Cheaper · Smaller model

Claude Opus 4
200B params · dense · Quality: 90 · from $75.00/M
Similar specs

GPT-4o
200B params · MoE · Quality: 85 · from $10.00/M
Lower quality · Cheaper

GPT-4 Turbo
200B params · MoE · Quality: 80 · from $30.00/M
Lower quality · Cheaper

GLM-5
200B params · dense · Quality: 50 · from $6.00/M
Lower quality · Cheaper

Frequently Asked Questions

How much VRAM does o1 need for inference?

o1 requires approximately 400.0 GB of VRAM at BF16 precision, 200.0 GB at FP8, or 100.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (204,800 bytes per token) and activations (~4.00 GB).

What is the best GPU for o1?

The top recommended GPU for o1 is the B200 SXM (×4) at BF16 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $17,044/month ($23.16/M tokens). Score: 93/100.

How much does o1 inference cost?

o1 API inference starts from $15.00/M input tokens and $60.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.