Inflection 3
Inflection AI · dense · 100B parameters · 8,192 context
Parameters
100B
Context Window
8K tokens
Architecture
Dense
Best GPU
B200 SXM
Cheapest API
$15.00/M
Quality Score
74/100
Intelligence Brief
Inflection 3 is a 100B-parameter dense model from Inflection AI, featuring Grouped Query Attention (GQA) with 72 layers and a hidden dimension of 10,240. With an 8,192-token context window, it supports structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 78, HumanEval 48, and GSM8K 80. The most cost-effective API deployment is via inflection at $15.00/M output tokens. For self-hosted inference, the B200 SXM delivers the best throughput at $8522/month.
Architecture Details
Memory Requirements
BF16 Weights
200.0 GB
FP8 Weights
100.0 GB
INT4 Weights
50.0 GB
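The weight figures above follow directly from the parameter count: bytes per parameter times 100B parameters. A minimal sketch of that arithmetic (parameter count and precisions are taken from this page; INT4 is 0.5 bytes per parameter):

```python
# Weight memory = parameter count x bytes per parameter.
# Reproduces the table above for Inflection 3's 100B parameters.
PARAMS = 100e9

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def weight_gb(precision: str, params: float = PARAMS) -> float:
    """Weight memory in decimal GB for a given precision."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for prec in ("BF16", "FP8", "INT4"):
    print(f"{prec}: {weight_gb(prec):.1f} GB")
# BF16: 200.0 GB, FP8: 100.0 GB, INT4: 50.0 GB
```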
GPU Compatibility Matrix
Inflection 3 is compatible with 21% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| BF16 · 2 GPUs · tensorrt-llm | 93/100 | 280.0 tok/s | 3.6ms | 1ms | $8522 | $11.58 |
| BF16 · 2 GPUs · tensorrt-llm | 93/100 | 280.0 tok/s | 3.6ms | 1ms | $8541 | $11.61 |
| BF16 · 2 GPUs · tensorrt-llm | 90/100 | 280.0 tok/s | 3.6ms | 1ms | $5106 | $6.94 |
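The Cost/M Tokens figures can be reproduced from monthly cost and throughput, under the assumption of continuous 24/7 utilization at the quoted token rate and an average month of 365/12 ≈ 30.42 days (an assumption, not stated on this page):

```python
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float,
                      days_per_month: float = 365 / 12) -> float:
    """Monthly GPU cost divided by millions of tokens generated per month.
    Assumes the deployment is saturated 24/7 at the quoted throughput."""
    tokens_per_month = tok_per_s * 86_400 * days_per_month
    return monthly_usd / (tokens_per_month / 1e6)

# Top recommendation: $8522/month at 280.0 tok/s
print(round(cost_per_m_tokens(8522, 280.0), 2))  # 11.58
```

The same formula recovers the other rows ($8541 → $11.61, $5106 → $6.94), so the page's per-token costs appear to assume full utilization.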
Deployment Options
API Deployment
inflection
$15.00/M
output tokens
Single GPU
B200 NVL (pair)
$9965/mo
Min VRAM: 100 GB
Multi-GPU
B200 SXM x2
280.0 tok/s
TP· $8522/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| inflection | $5.00 | $15.00 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| inflection (Best Value) | $5.00 | $15.00 | $100 |
Cost per 1,000 Requests

| Request Size | Cost | Provider |
|---|---|---|
| Short (500 tok) | $5.50 | inflection |
| Medium (2K tok) | $22.00 | inflection |
| Long (8K tok) | $70.00 | inflection |
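These per-request figures follow from the inflection pricing ($5.00/M input, $15.00/M output). The page does not state how each request splits between input and output tokens; the 200-in/300-out split below is an assumption chosen to reproduce the short-request figure:

```python
INPUT_USD_PER_M = 5.00    # inflection input price
OUTPUT_USD_PER_M = 15.00  # inflection output price

def cost_per_1k_requests(in_tok: int, out_tok: int) -> float:
    """API cost in USD for 1,000 requests with the given tokens per request."""
    total_in_m = 1_000 * in_tok / 1e6    # million input tokens
    total_out_m = 1_000 * out_tok / 1e6  # million output tokens
    return total_in_m * INPUT_USD_PER_M + total_out_m * OUTPUT_USD_PER_M

# Assumed 200-in / 300-out split for a 500-token request:
print(cost_per_1k_requests(200, 300))  # 5.5
```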
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, BF16)
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Inflection 3
Self-Hosted Infrastructure
Similar Models
YaLM 100B
100B params · dense
Quality: 50
Yi-Large
102.6B params · moe
Quality: 74
from $3.00/M
Command R+
104B params · dense
Quality: 68
from $2.00/M
Llama 4 Scout
109B params · moe
Quality: 73
from $0.30/M
Llama 3.2 90B Vision
90B params · dense
Quality: 84
from $0.90/M
Frequently Asked Questions
How much VRAM does Inflection 3 need for inference?
Inflection 3 requires approximately 200.0 GB of VRAM at BF16 precision, 100.0 GB at FP8, or 50.0 GB at INT4 quantization. Additional VRAM is needed for the KV cache (184,320 bytes per token) and activations (~3.50 GB).
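The figures in this answer combine into a total VRAM estimate as follows, a sketch assuming BF16 weights and the full 8,192-token context cached for a single sequence:

```python
WEIGHTS_GB = 200.0            # BF16 weights (from above)
KV_BYTES_PER_TOKEN = 184_320  # KV-cache footprint per token
ACTIVATIONS_GB = 3.5          # estimated activation overhead

def total_vram_gb(context_tokens: int = 8_192) -> float:
    """Weights + KV cache for the given context + activations, in GB."""
    kv_gb = context_tokens * KV_BYTES_PER_TOKEN / 1e9
    return WEIGHTS_GB + kv_gb + ACTIVATIONS_GB

print(f"{total_vram_gb():.1f} GB")  # ~205.0 GB for one full-context sequence
```

The KV cache at full context adds only ~1.5 GB per sequence here; concurrent sequences in a serving setup each add that much again.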
What is the best GPU for Inflection 3?
The top recommended GPU for Inflection 3 is the B200 SXM (x2) using BF16 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $8522/month ($11.58/M tokens). Score: 93/100.
How much does Inflection 3 inference cost?
Inflection 3 API inference starts from $5.00/M input tokens and $15.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.