o1-mini
OpenAI · dense · 70B parameters · 128,000 context
Parameters
70B
Context Window
128K tokens
Architecture
Dense
Best GPU
B100 SXM
Cheapest API
$12.00/M
Quality Score
83/100
Intelligence Brief
o1-mini is a 70B-parameter dense model from OpenAI, featuring Grouped Query Attention (GQA) with 64 layers and a hidden dimension of 8,192. With a 128,000-token context window, it supports structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 85.2, HumanEval 78, and GSM8K 94. The most cost-effective API deployment is via openai at $12.00/M output tokens; for self-hosted inference, the B100 SXM delivers optimal throughput at $4271/month.
Architecture Details
Memory Requirements
BF16 Weights
140.0 GB
FP8 Weights
70.0 GB
INT4 Weights
35.0 GB
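The weight figures above follow directly from parameter count × bytes per parameter; a minimal Python sketch, assuming the 70B figure quoted above:

```python
# Approximate weight memory: parameter count x bytes per parameter.
# Assumes the 70B figure quoted above; real checkpoints add small overheads.
PARAMS = 70e9

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9  # decimal GB, matching the table above
    print(f"{precision}: {gb:.1f} GB")
# BF16: 140.0 GB, FP8: 70.0 GB, INT4: 35.0 GB
```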
GPU Compatibility Matrix
o1-mini is compatible with 38% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| BF16 · 1 GPU · tensorrt-llm (B100 SXM) | 98/100 | 492.8 tok/s | 2.0ms | 0ms | $4271 | $3.30 |
| BF16 · 1 GPU · tensorrt-llm | 98/100 | 492.8 tok/s | 2.0ms | 0ms | $6169 | $4.76 |
| BF16 · 2 GPUs · tensorrt-llm (H20 x2) | 95/100 | 450.7 tok/s | 2.2ms | 0ms | $1879 | $1.59 |
Deployment Options
API Deployment
openai
$12.00/M
output tokens
Single GPU
B100 SXM
$4271/mo
Min VRAM: 70 GB
Multi-GPU
H20 x2
450.7 tok/s
TP · $1879/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| openai | $3.00 | $12.00 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| openai (Best Value) | $3.00 | $12.00 | $75 |
Cost per 1,000 Requests
Short (500 tok)
$3.90
via openai
Medium (2K tok)
$15.60
via openai
Long (8K tok)
$48.00
via openai
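The per-request figures above depend on the input/output token split, which is not stated on this page; a hedged sketch at openai's listed rates, with an assumed 1,000-in/1,000-out "medium" request:

```python
# Hedged sketch: API cost per 1,000 requests at openai's listed rates.
# The input/output split behind the figures above is not stated, so the
# split used here is an assumption for illustration only.
INPUT_PER_M = 3.00    # $ per million input tokens
OUTPUT_PER_M = 12.00  # $ per million output tokens

def cost_per_1k_requests(input_tokens: int, output_tokens: int) -> float:
    per_request = (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1e6
    return per_request * 1000

# Assumed "medium" request: 1,000 input + 1,000 output tokens
print(f"${cost_per_1k_requests(1000, 1000):.2f}")  # -> $15.00 per 1,000 requests
```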
Performance Estimates
Throughput by GPU
VRAM Breakdown (B100 SXM, BF16)
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy o1-mini
Self-Hosted Infrastructure
Similar Models
o1
200B params · moe
Quality: 93
from $60.00/M
Code Llama 70B
70B params · dense
Quality: 60
from $0.90/M
Llama 2 70B
70B params · dense
Quality: 62
from $0.90/M
WizardMath 70B
70B params · dense
Quality: 50
Claude Sonnet 4
70B params · dense
Quality: 86
from $15.00/M
Frequently Asked Questions
How much VRAM does o1-mini need for inference?
o1-mini requires approximately 140.0 GB of VRAM at BF16 precision, 70.0 GB at FP8, or 35.0 GB at INT4 quantization. Additional VRAM is needed for KV-cache (131072 bytes per token) and activations (~3.00 GB).
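The answer above combines into a rough serving-VRAM estimate; a minimal sketch using the stated KV-cache and activation figures, with context length and batch size as illustrative inputs:

```python
# Rough serving-VRAM estimate from the figures above:
# weights + KV-cache (131072 bytes/token) + ~3 GB activations.
KV_BYTES_PER_TOKEN = 131072
ACTIVATIONS_GB = 3.0

def vram_gb(weights_gb: float, context_tokens: int, batch: int = 1) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens * batch / 1e9
    return weights_gb + kv_gb + ACTIVATIONS_GB

# FP8 weights (70 GB) with a full 128K-token context, batch of 1:
print(f"{vram_gb(70.0, 128_000):.1f} GB")  # -> 89.8 GB
```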
What is the best GPU for o1-mini?
The top recommended GPU for o1-mini is the B100 SXM using BF16 precision. It achieves approximately 492.8 tokens/sec at an estimated cost of $4271/month ($3.30/M tokens). Score: 98/100.
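The $/M-token figure follows from monthly cost and throughput under sustained full utilization; a sketch of that arithmetic (the small gap versus the quoted $3.30 comes from rounding and utilization assumptions):

```python
# How $/M tokens follows from monthly cost and throughput, assuming
# sustained full utilization (an idealized upper bound on token volume).
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def cost_per_million_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

# B100 SXM at $4271/month and 492.8 tok/s:
print(f"${cost_per_million_tokens(4271, 492.8):.2f}/M")  # lands near the quoted $3.30
```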
How much does o1-mini inference cost?
o1-mini API inference starts from $3.00/M input tokens and $12.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
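One way to frame the API-versus-self-hosted decision is the break-even monthly volume at which the top self-hosted configuration undercuts the API's output rate; a hedged sketch using figures from this page (output tokens only, ignoring input-token and ops costs):

```python
# Hedged break-even sketch: monthly output-token volume above which the
# self-hosted B100 SXM ($4271/mo, per this page) undercuts the $12.00/M
# API output rate. Ignores input-token costs and operational overhead.
GPU_MONTHLY_USD = 4271
API_OUTPUT_PER_M = 12.00

breakeven_m_tokens = GPU_MONTHLY_USD / API_OUTPUT_PER_M
print(f"~{breakeven_m_tokens:.0f}M output tokens/month")  # -> ~356M output tokens/month
```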