
Whisper Large V3

OpenAI · dense · 1.55B parameters · 448 context

Quality
50.0

Parameters

1.55B

Context Window

448 tokens

Architecture

Dense

Best GPU

A4000

Cheapest API

$0.01/M

Intelligence Brief

Whisper Large V3 is a 1.55B-parameter dense model from OpenAI, featuring multi-head attention (MHA) with 32 layers and a 1,280-dimensional hidden state. With a 448-token context window, it supports multilingual use. The most cost-effective API deployment is via openai at $0.01/M output tokens. For self-hosted inference, the A4000 delivers optimal throughput at $161/month.

Architecture Details

Type: Dense
Total Parameters: 1.55B
Active Parameters: 1.55B
Layers: 32
Hidden Dimension: 1,280
Attention Heads: 20
KV Heads: 20
Head Dimension: 64
Vocab Size: 51,866

Memory Requirements

BF16 Weights

3.1 GB

FP8 Weights

1.6 GB

INT4 Weights

0.8 GB

KV-Cache per Token: 163,840 bytes
Activation Estimate: 0.30 GB
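
These figures can be sanity-checked with a back-of-envelope estimate. The sketch below assumes 2 bytes per weight at BF16 and a KV-cache entry of 2 × layers × hidden dimension values per token (also BF16); it reproduces the weight and KV-cache numbers above and is illustrative only, not a deployment calculator.

```python
# Back-of-envelope memory estimate for Whisper Large V3 (illustrative only).
# Assumptions: BF16 = 2 bytes per weight; the KV cache stores K and V
# (2 tensors) of hidden_dim values per layer per token, also in BF16.

PARAMS = 1.55e9      # total parameters (architecture table)
LAYERS = 32          # layers
HIDDEN_DIM = 1280    # hidden dimension
BYTES_BF16 = 2

weights_gb = PARAMS * BYTES_BF16 / 1e9                 # ~3.1 GB
kv_per_token = 2 * LAYERS * HIDDEN_DIM * BYTES_BF16    # 163,840 bytes

print(f"BF16 weights:       {weights_gb:.1f} GB")
print(f"KV-cache per token: {kv_per_token:,} bytes")
```

FP8 and INT4 follow the same arithmetic at roughly 1 byte and 0.5 bytes per weight, respectively.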

GPU Compatibility Matrix

Whisper Large V3 is compatible with 100% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (Full), FP8 (Half), INT4 (Quarter).

Blackwell (7 GPUs): B200 NVL (pair) 360GB · B300 288GB · B100 SXM 192GB · GB200 NVL72 (per GPU) 192GB
Hopper (7 GPUs): H100 NVL 94GB (per GPU pair) 188GB · H200 SXM 141GB · H20 96GB · GH200 96GB
Ada Lovelace (11 GPUs): L40S 48GB · L40 48GB · RTX 6000 Ada 48GB · L20 48GB
Ampere (16 GPUs): A100 80GB SXM 80GB · A100 80GB PCIe 80GB · A16 64GB · RTX A6000 48GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent

GPU Recommendations

A4000 (optimal)

BF16 · 1 GPU · vllm

90/100

score

Throughput

741.3 tok/s

Latency (ITL)

1.3ms

Est. TTFT

0ms

Cost/Month

$161

Cost/M Tokens

$0.08

Use this config →
RTX 4080 (optimal)

BF16 · 1 GPU · vllm

90/100

score

Throughput

1.2K tok/s

Latency (ITL)

0.8ms

Est. TTFT

0ms

Cost/Month

$304

Cost/M Tokens

$0.10

Use this config →
RTX 4070 Ti (optimal)

BF16 · 1 GPU · vllm

90/100

score

Throughput

834.0 tok/s

Latency (ITL)

1.2ms

Est. TTFT

0ms

Cost/Month

$237

Cost/M Tokens

$0.11

Use this config →
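
All three recommendations assume the vllm serving stack. As a hedged illustration of what a client request against such a deployment could look like, the sketch below uses the openai Python SDK pointed at a locally hosted, OpenAI-compatible endpoint; the base URL, port, API key, and served model id are assumptions for illustration, not values taken from this page.

```python
# Hypothetical client call against a self-hosted, OpenAI-compatible endpoint
# serving Whisper Large V3 (e.g. via vllm). Host, port, and model id are
# assumptions; adjust them to match your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("sample.wav", "rb") as audio_file:
    result = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",  # assumed served model id
        file=audio_file,
    )

print(result.text)
```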

Deployment Options

API

API Deployment

openai

$0.01/M

output tokens

Self-Hosted

Single GPU

A4000

$161/mo

Min VRAM: 2 GB

Scale

Multi-GPU

A4000

741.3 tok/s

Best available config

API Pricing Comparison

Provider    Input $/M    Output $/M    Badges
openai      $0.01        $0.01         Cheapest

Cost Analysis

Provider               Input $/M    Output $/M    ~Monthly Cost
openai (Best Value)    $0.01        $0.01         $0

Cost per 1,000 Requests

Short (500 tok)

$0.00

via openai

Medium (2K tok)

$0.02

via openai

Long (8K tok)

$0.06

via openai
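
The per-request figures above follow from simple token arithmetic. The sketch below reproduces the medium case, assuming every billed token is charged at the flat $0.01/M rate (the input/output split behind the published figures is not specified on this page).

```python
# Cost of 1,000 requests at a flat $0.01 per million tokens (assumption:
# the same rate applies to all billed tokens in a request).
price_per_million_usd = 0.01
tokens_per_request = 2_000   # "Medium (2K tok)" scenario
requests = 1_000

total_tokens = tokens_per_request * requests
cost_usd = total_tokens / 1e6 * price_per_million_usd
print(f"{requests} medium requests: ${cost_usd:.2f}")  # $0.02, matching the figure above
```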

Performance Estimates

Throughput by GPU

A4000
741.3 tok/s
RTX 4080
1.2K tok/s
RTX 4070 Ti
834.0 tok/s

VRAM Breakdown (A4000, BF16)

Weights: 3.1 GB · KV-Cache: 2.7 GB · Activations: 2.4 GB · Overhead: 0.2 GB
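
Summing the breakdown gives the headroom picture for the recommended card. A quick check, assuming the A4000's 16 GB of VRAM (the card's spec, not a figure from this page):

```python
# Total serving footprint from the breakdown above versus an assumed
# 16 GB of VRAM on the A4000.
weights_gb, kv_cache_gb, activations_gb, overhead_gb = 3.1, 2.7, 2.4, 0.2
vram_gb = 16  # A4000 spec (assumption, not from this page)

total_gb = weights_gb + kv_cache_gb + activations_gb + overhead_gb
print(f"Total: {total_gb:.1f} GB of {vram_gb} GB VRAM")  # ~8.4 GB, a comfortable fit
```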

Precision Impact

bf16

3.1 GB

weights/GPU

~741.3 tok/s

fp8

1.6 GB

weights/GPU

int4

0.8 GB

weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4

Frequently Asked Questions

How much VRAM does Whisper Large V3 need for inference?

Whisper Large V3 requires approximately 3.1 GB of VRAM at BF16 precision, 1.6 GB at FP8, or 0.8 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (163,840 bytes per token) and activations (~0.30 GB).

What is the best GPU for Whisper Large V3?

The top recommended GPU for Whisper Large V3 is the A4000 using BF16 precision. It achieves approximately 741.3 tokens/sec at an estimated cost of $161/month ($0.08/M tokens). Score: 90/100.

How much does Whisper Large V3 inference cost?

Whisper Large V3 API inference starts from $0.01/M input tokens and $0.01/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.