
o1-mini

OpenAI · dense · 70B parameters · 128K context window

Quality Score: 83/100
Parameters: 70B
Context Window: 128K tokens
Architecture: Dense
Best GPU: B100 SXM
Cheapest API: $12.00/M

Intelligence Brief

o1-mini is a 70B-parameter dense model from OpenAI, using grouped-query attention (GQA) across 64 layers with a hidden dimension of 8,192. With a 128,000-token context window, it supports structured output, code, math, multilingual use, and reasoning. On standardized benchmarks it achieves MMLU 85.2, HumanEval 78.0, and GSM8K 94.0. The most cost-effective API deployment is via openai at $12.00/M output tokens; for self-hosted inference, the B100 SXM delivers optimal throughput at $4,271/month.

Architecture Details

Type: Dense
Total Parameters: 70B
Active Parameters: 70B
Layers: 64
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 200,000
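
The per-token KV-cache figure quoted under Memory Requirements below follows directly from these numbers. A minimal check in Python (values from the table above; note that the page's 131,072-byte figure corresponds to 1 byte per cache value, i.e. an 8-bit KV-cache, which is an assumption on our part):

    # KV-cache per token = 2 (K and V) x layers x kv_heads x head_dim x bytes_per_value
    layers, kv_heads, head_dim = 64, 8, 128
    kv_8bit = 2 * layers * kv_heads * head_dim * 1   # 131,072 bytes -- matches the page
    kv_bf16 = 2 * layers * kv_heads * head_dim * 2   # 262,144 bytes at BF16
    # GQA effect: 8 KV heads instead of 64 query heads -> 8x smaller cache than full MHA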

Memory Requirements

BF16 Weights: 140.0 GB
FP8 Weights: 70.0 GB
INT4 Weights: 35.0 GB
KV-Cache per Token: 131,072 bytes (128 KB)
Activation Estimate: 3.00 GB
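
The weight figures are simply parameter count times bytes per parameter (using 1 GB = 10^9 bytes); a quick sanity check:

    params = 70e9
    for precision, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
        print(precision, params * bytes_per_param / 1e9, "GB")
    # BF16 140.0 GB, FP8 70.0 GB, INT4 35.0 GB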

GPU Compatibility Matrix

o1-mini is compatible with 38% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions shown: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94 GB (per GPU pair) 188 GB · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

Fit scale: No fit · Very tight · Tight · Moderate · Good · Excellent
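
The page does not define what counts as a compatible configuration. A plausible sketch is checking weights plus a KV-cache and overhead budget against aggregate usable VRAM under tensor parallelism; the function name, the 90% utilization cap, and the budget defaults (taken from the VRAM breakdown further down) are assumptions:

    def fits(weights_gb, vram_gb, num_gpus=1, util=0.90,
             kv_budget_gb=4.3, overhead_gb=7.0):
        """Rough fit check: weights + KV budget + overhead vs usable VRAM."""
        usable_gb = vram_gb * num_gpus * util
        return weights_gb + kv_budget_gb + overhead_gb <= usable_gb

    fits(140.0, 192)      # B100 SXM, BF16, 1 GPU  -> True
    fits(140.0, 96)       # H20, BF16, 1 GPU       -> False
    fits(140.0, 96, 2)    # H20 x2 with TP         -> True, matching the H20 recommendation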

GPU Recommendations

B100 SXM · optimal
BF16 · 1 GPU · tensorrt-llm · Score: 98/100
Throughput: 492.8 tok/s · Latency (ITL): 2.0 ms · Est. TTFT: 0 ms
Cost/Month: $4,271 · Cost/M Tokens: $3.30

GB200 NVL72 (per GPU) · optimal
BF16 · 1 GPU · tensorrt-llm · Score: 98/100
Throughput: 492.8 tok/s · Latency (ITL): 2.0 ms · Est. TTFT: 0 ms
Cost/Month: $6,169 · Cost/M Tokens: $4.76

H20 · optimal
BF16 · 2 GPUs · tensorrt-llm · Score: 95/100
Throughput: 450.7 tok/s · Latency (ITL): 2.2 ms · Est. TTFT: 0 ms
Cost/Month: $1,879 · Cost/M Tokens: $1.59
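
The Cost/M Tokens column is consistent with dividing the monthly GPU cost by tokens generated per month at full utilization. Checking the B100 SXM row:

    tok_per_s = 492.8
    tokens_per_month = tok_per_s * 3600 * 24 * 30   # ~1.28B tokens
    print(4271 / (tokens_per_month / 1e6))          # ~$3.34/M, close to the listed $3.30

The same arithmetic on the H20 row ($1,879/month at 450.7 tok/s) gives roughly $1.61/M against the listed $1.59/M.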

Deployment Options

API

openai · $12.00/M output tokens
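
A minimal call against this deployment using the official OpenAI Python SDK (a sketch; assumes OPENAI_API_KEY is set in your environment and the model is available to your key via the Chat Completions API):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="o1-mini",
        messages=[{"role": "user", "content": "Explain GQA in two sentences."}],
    )
    print(response.choices[0].message.content)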

Self-Hosted

Single GPU: B100 SXM · $4,271/mo · Min VRAM: 70 GB

Scale

Multi-GPU: H20 x2 · 450.7 tok/s · tensor parallel (TP) · $1,879/mo

API Pricing Comparison

Provider: openai · Input: $3.00/M · Output: $12.00/M · Cheapest

Cost Analysis

Provider: openai (Best Value) · Input: $3.00/M · Output: $12.00/M · ~Monthly Cost: $75

Cost per 1,000 Requests

Short (500 tok): $3.90 via openai
Medium (2K tok): $15.60 via openai
Long (8K tok): $48.00 via openai
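
Per-request cost is input tokens times the input rate plus output tokens times the output rate. The page does not state the input/output split behind these three scenarios, so the 50/50 split in the example is an assumption:

    IN_RATE, OUT_RATE = 3.00, 12.00   # $ per million tokens (openai)

    def cost_per_1k_requests(input_tok, output_tok):
        per_request = (input_tok * IN_RATE + output_tok * OUT_RATE) / 1e6
        return per_request * 1000

    cost_per_1k_requests(250, 250)    # $3.75 for a 500-token request, near the listed $3.90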

Performance Estimates

Throughput by GPU

B100 SXM: 492.8 tok/s
GB200 NVL72 (per GPU): 492.8 tok/s
H20: 450.7 tok/s
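
Single-stream decode throughput is the reciprocal of inter-token latency, which lines up with these figures:

    itl_ms = 2.0
    print(1000 / itl_ms)   # 500 tok/s, in line with the 492.8 tok/s estimate for the B100 SXM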

VRAM Breakdown (B100 SXM, BF16)

Weights: 140.0 GB · KV-Cache: 4.3 GB · Activations: 24.0 GB · Overhead: 7.0 GB
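
These components sum comfortably under the B100 SXM's 192 GB:

    total_gb = 140.0 + 4.3 + 24.0 + 7.0   # 175.3 GB
    print(192 - total_gb)                 # ~16.7 GB of headroom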

Quality Benchmarks

Top 10% overall: 91st percentile across all models
MMLU: 85.2 · Above Average (77th pctile)
HumanEval: 78.0 · Top 10% (95th pctile)
GSM8K: 94.0 · Above Average (84th pctile)
MT-Bench: 85.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use Vision Code Math Reasoning Multilingual Structured Output

Supported Precisions

BF16 (default)

Similar Models

o1
200B params · moe · Quality: 93 · from $60.00/M
Larger context, higher quality, more expensive, larger model

Code Llama 70B
70B params · dense · Quality: 60 · from $0.90/M
Smaller context, lower quality, cheaper

Llama 2 70B
70B params · dense · Quality: 62 · from $0.90/M
Smaller context, lower quality, cheaper

Claude Sonnet 4
70B params · dense · Quality: 86 · from $15.00/M
Larger context

Frequently Asked Questions

How much VRAM does o1-mini need for inference?

o1-mini requires approximately 140.0 GB of VRAM at BF16 precision, 70.0 GB at FP8, or 35.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (131,072 bytes per token) and activations (~3.00 GB).

What is the best GPU for o1-mini?

The top recommended GPU for o1-mini is the B100 SXM at BF16 precision: approximately 492.8 tokens/sec at an estimated $4,271/month ($3.30/M tokens), with a configuration score of 98/100.

How much does o1-mini inference cost?

o1-mini API inference starts from $3.00/M input tokens and $12.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
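
As a rough rule of thumb, ignoring input-token costs and assuming the hardware can be kept busy, the self-hosting break-even point is the fixed monthly cost divided by the API output rate. Using the H20 x2 configuration from above:

    api_rate = 12.00          # $/M output tokens via openai
    selfhost_monthly = 1879   # H20 x2, $/month
    print(selfhost_monthly / api_rate)   # ~157M output tokens/month
    # below ~157M tokens/month the API is cheaper; above it, self-hosting wins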