OpenAI

GPT-4.5 Preview

OpenAI · MoE · 1500B parameters · 128,000-token context

Quality
93.0

Parameters

1.5T

Context Window

128K tokens

Architecture

MoE

Best GPU

B200 SXM

Cheapest API

$150.00/M

Quality Score

93/100

Intelligence Brief

GPT-4.5 Preview is a 1500B-parameter Mixture-of-Experts model (16 experts, 2 active) from OpenAI, featuring Grouped Query Attention (GQA) with 120 layers and a 16,384 hidden dimension. With a 128,000-token context window, it supports tool use, vision, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 90, HumanEval 76, and GSM8K 96. The most cost-effective API deployment is via openai at $150.00/M output tokens. For self-hosted inference, the B200 SXM delivers optimal throughput at $136,352/month.

Architecture Details

Type: MoE
Total Parameters: 1500B
Active Parameters: 300B
Layers: 120
Hidden Dimension: 16,384
Attention Heads: 128
KV Heads: 16
Head Dimension: 128
Vocab Size: 200,000
Total Experts: 16
Active Experts: 2

Memory Requirements

BF16 Weights

3000.0 GB

FP8 Weights

1500.0 GB

INT4 Weights

750.0 GB

KV-Cache per Token: 3,932,160 bytes (~3.75 MB)
Activation Estimate: 20.00 GB
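The weight figures above follow directly from the parameter count. A minimal sketch, assuming 2, 1, and 0.5 bytes per parameter for BF16, FP8, and INT4 respectively, and decimal gigabytes (1e9 bytes):

```python
# Weight memory from parameter count and precision.
# Assumes 2 / 1 / 0.5 bytes per parameter and decimal GB (1e9 bytes),
# which reproduces the 3000 / 1500 / 750 GB figures above.

PARAMS = 1500e9  # 1500B total parameters

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def weight_gb(params: float, precision: str) -> float:
    """Weight footprint in GB at the given precision."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for prec in ("BF16", "FP8", "INT4"):
    print(f"{prec}: {weight_gb(PARAMS, prec):.1f} GB")
```

KV-cache and activation memory come on top of these weights, so the usable budget per GPU is smaller than raw capacity suggests.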

This model requires multi-GPU deployment. Minimum: 3x B200 NVL pairs (360 GB each) with tensor parallelism.

GPU Compatibility Matrix

GPT-4.5 Preview fits on none (0%) of the single-GPU configurations surveyed: 41 GPUs at 3 precision levels.

Precisions: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent

GPU Recommendations

B200 SXM · good

BF16 · 32 GPUs · tensorrt-llm

63/100

score

Throughput

140.0 tok/s

Latency (ITL)

7.1ms

Est. TTFT

1ms

Cost/Month

$136,352

Cost/M Tokens

$370.60

Use this config →
B100 SXM · good

BF16 · 32 GPUs · tensorrt-llm

63/100

score

Throughput

140.0 tok/s

Latency (ITL)

7.1ms

Est. TTFT

1ms

Cost/Month

$136,656

Cost/M Tokens

$371.43

Use this config →
GB200 NVL72 (per GPU) · good

BF16 · 32 GPUs · tensorrt-llm

63/100

score

Throughput

140.0 tok/s

Latency (ITL)

7.1ms

Est. TTFT

1ms

Cost/Month

$197,392

Cost/M Tokens

$536.51

Use this config →
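The cost-per-million-token figures in these cards are consistent with dividing monthly cluster cost by sustained token output. A sketch assuming a 730-hour month at full utilization (an assumption on our part, but it reproduces all three cards):

```python
# Cost per million tokens = monthly cost / tokens generated per month.
# Assumes a 730-hour month (8760 h/year / 12) at full sustained
# utilization -- an assumption, but it matches all three cards above.

HOURS_PER_MONTH = 730

def cost_per_million(monthly_usd: float, tok_per_s: float) -> float:
    tokens = tok_per_s * 3600 * HOURS_PER_MONTH
    return monthly_usd / tokens * 1e6

print(f"B200 SXM x32:    ${cost_per_million(136_352, 140.0):.2f}/M")
print(f"B100 SXM x32:    ${cost_per_million(136_656, 140.0):.2f}/M")
print(f"GB200 NVL72 x32: ${cost_per_million(197_392, 140.0):.2f}/M")
```

Real utilization below 100% raises the effective $/M proportionally, so treat these as floor estimates.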

Deployment Options

API

API Deployment

openai

$150.00/M

output tokens

Self-Hosted

Single GPU

Requires multi-GPU setup (1500 GB VRAM needed)

Scale

Multi-GPU

B200 SXM x32

140.0 tok/s

TP · $136,352/mo

API Pricing Comparison

Provider · Input $/M · Output $/M · Badges
openai · $75.00 · $150.00 · Cheapest

Cost Analysis

Provider · Input $/M · Output $/M · ~Monthly Cost
openai (Best Value) · $75.00 · $150.00 · $1,125
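The ~$1,125 monthly figure is consistent with a workload of 5M input plus 5M output tokens per month at openai's listed rates. A sketch under that assumption (the volume is our guess, not stated here):

```python
# Monthly API cost under an assumed volume of 5M input + 5M output
# tokens per month (the volume is our assumption; it reproduces the
# ~$1125 figure in the table above).

def monthly_api_cost(in_per_m: float, out_per_m: float,
                     in_tokens_m: float = 5.0,
                     out_tokens_m: float = 5.0) -> float:
    """Prices in $/M tokens; volumes in millions of tokens."""
    return in_per_m * in_tokens_m + out_per_m * out_tokens_m

print(monthly_api_cost(75.00, 150.00))  # 1125.0
```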

Cost per 1,000 Requests

Short (500 tok)

$67.50

via openai

Medium (2K tok)

$270.00

via openai

Long (8K tok)

$900.00

via openai
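These per-request figures follow from the listed token prices once an input/output split is assumed. A sketch assuming 20% input / 80% output tokens per request (our assumption), which reproduces the short and medium figures above:

```python
# Cost per 1,000 requests at $75/M input and $150/M output.
# Assumes 20% of each request's tokens are input and 80% output --
# an assumed split that matches the short and medium figures above.

IN_PRICE, OUT_PRICE = 75.00, 150.00  # $/M tokens

def cost_per_1k_requests(total_tokens: int, in_frac: float = 0.2) -> float:
    in_tok = total_tokens * in_frac
    out_tok = total_tokens * (1.0 - in_frac)
    per_request = (in_tok * IN_PRICE + out_tok * OUT_PRICE) / 1e6
    return per_request * 1000

print(cost_per_1k_requests(500))   # short (500 tok)
print(cost_per_1k_requests(2000))  # medium (2K tok)
```

Because output tokens cost twice as much as input tokens here, shifting the split toward output raises per-request cost noticeably.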

Performance Estimates

Throughput by GPU

B200 SXM
140.0 tok/s
B100 SXM
140.0 tok/s
GB200 NVL72 (per GPU)
140.0 tok/s

VRAM Breakdown (B200 SXM, BF16)

Weights: 93.8 GB · KV-Cache: 16.1 GB · Activations: 160.0 GB · Overhead: 4.7 GB
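The per-GPU breakdown can be sanity-checked from the totals above. A sketch assuming even 32-way tensor-parallel sharding and a full 131,072-token (128K) KV cache, both assumptions on our part:

```python
# Per-GPU VRAM shard under 32-way tensor parallelism.
# Assumes even sharding and a full 131,072-token KV cache -- both
# assumptions, but they reproduce the 93.8 / 16.1 GB figures above.

TP = 32
TOTAL_WEIGHTS_GB = 3000.0        # BF16 total, from Memory Requirements
KV_BYTES_PER_TOKEN = 3_932_160   # from Memory Requirements
CONTEXT_TOKENS = 131_072

weights_per_gpu = TOTAL_WEIGHTS_GB / TP
kv_per_gpu_gb = KV_BYTES_PER_TOKEN * CONTEXT_TOKENS / TP / 1e9

print(f"weights/GPU: {weights_per_gpu:.1f} GB, "
      f"KV/GPU: {kv_per_gpu_gb:.1f} GB")
```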

Quality Benchmarks

Top 10%
99th percentile across all models
MMLU: 90.0 (Top 10%, 92nd percentile)
HumanEval: 76.0 (Top 10%, 94th percentile)
GSM8K: 96.0 (Top 10%, 92nd percentile)
MT-Bench: 91.0 (Bottom 25%, 0th percentile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

Supported Precisions

BF16 (default)

Where to Deploy GPT-4.5 Preview

Similar Models

Kimi K2.5

1000B params · moe

Quality: 50

from $2.40/M

Lower quality, Cheaper · Compare →

Llama 4 Behemoth

2000B params · moe

Quality: 93

from $16.00/M

Larger context, Cheaper · Compare →

DeepSeek V3-0324

685B params · moe

Quality: 81

from $0.42/M

Lower quality, Cheaper, Smaller model · Compare →

DeepSeek R1

671B params · moe

Quality: 88

from $2.19/M

Cheaper, Smaller model · Compare →

DeepSeek V3

671B params · moe

Quality: 81

from $0.42/M

Lower quality, Cheaper, Smaller model · Compare →

Frequently Asked Questions

How much VRAM does GPT-4.5 Preview need for inference?

GPT-4.5 Preview requires approximately 3000.0 GB of VRAM at BF16 precision, 1500.0 GB at FP8, or 750.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (3,932,160 bytes, ~3.75 MB per token) and activations (~20.00 GB).

What is the best GPU for GPT-4.5 Preview?

The top recommended GPU for GPT-4.5 Preview is the B200 SXM (x32) using BF16 precision. It achieves approximately 140.0 tokens/sec at an estimated cost of $136,352/month ($370.60/M tokens). Score: 63/100.

How much does GPT-4.5 Preview inference cost?

GPT-4.5 Preview API inference starts from $75.00/M input tokens and $150.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
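As a rough comparison between the two options, the B200 SXM x32 configuration above serves about 368M tokens per month at full utilization; at this throughput the API is cheaper per token ($150/M output vs the $370.60/M self-hosted estimate). A sketch assuming a 730-hour month and the figures above:

```python
# API vs self-hosted sanity check: tokens served per month by the
# B200 SXM x32 config vs the API cost of that same output volume.
# Assumes a 730-hour month at full utilization (our assumption).

TOK_PER_S = 140.0
HOURS_PER_MONTH = 730
CLUSTER_MONTHLY_USD = 136_352
API_OUT_PER_M = 150.00  # $/M output tokens (openai)

tokens_millions = TOK_PER_S * 3600 * HOURS_PER_MONTH / 1e6
api_cost = tokens_millions * API_OUT_PER_M

print(f"{tokens_millions:.0f}M tokens/month; "
      f"API: ${api_cost:,.0f} vs self-hosted: ${CLUSTER_MONTHLY_USD:,}")
```

Under these assumptions the API serves the same output volume for roughly $55K/month versus ~$136K self-hosted; self-hosting would need substantially higher throughput per dollar, or non-cost requirements, to win.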