
FLUX.2

Black Forest Labs · dense · 12B parameters · 4,096-token context

Quality: 50.0
Parameters: 12B
Context Window: 4K tokens
Architecture: Dense
Best GPU: A100 40GB SXM

Intelligence Brief

FLUX.2 is a 12B-parameter dense model from Black Forest Labs, using multi-head attention (MHA) with 28 layers and a hidden dimension of 3,072. It has a 4,096-token context window and supports vision input. For self-hosted inference, the A100 40GB SXM delivers optimal throughput at $807/month.

Architecture Details

Type: Dense
Total Parameters: 12B
Active Parameters: 12B
Layers: 28
Hidden Dimension: 3,072
Attention Heads: 24
KV Heads: 24
Head Dimension: 128
Vocab Size: 49,408

Memory Requirements

BF16 Weights: 24.0 GB
FP8 Weights: 12.0 GB
INT4 Weights: 6.0 GB
KV-Cache per Token: 196,608 bytes
Activation Estimate: 1.00 GB
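
The weight figures above are straight arithmetic on the parameter count, and KV-cache cost scales with the attention geometry from the Architecture Details table. A minimal sketch of both (the per-token formula is the textbook MHA estimate; published per-token figures can differ depending on engine assumptions such as KV-cache precision):

```python
# Sanity-check FLUX.2's published memory figures from its specs.

PARAMS = 12e9
LAYERS, KV_HEADS, HEAD_DIM = 28, 24, 128

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_gb(precision: str) -> float:
    """Weight memory in GB (decimal) at a given precision."""
    return PARAMS * BYTES_PER_PARAM[precision] / 1e9

for p in ("bf16", "fp8", "int4"):
    print(f"{p}: {weight_gb(p):.1f} GB")  # 24.0 / 12.0 / 6.0 GB

# Textbook per-token KV-cache estimate at BF16 (K and V, every layer).
# Note: published per-token figures vary with engine assumptions
# (e.g. KV-cache precision), so this need not match the quoted value.
kv_bytes = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV-cache/token: {kv_bytes} bytes")
```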

GPU Compatibility Matrix

FLUX.2 is compatible with 82% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
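
A compatibility check like the matrix above reduces to comparing the model's total memory footprint against each card's VRAM. A hypothetical sketch (component sizes mirror the A100 VRAM breakdown later on this page; the rating bands are illustrative, not the site's exact scoring):

```python
# Hypothetical fit check: does FLUX.2 fit on a GPU at a given precision?
# Thresholds below are my own illustration of the legend's bands.

WEIGHT_GB = {"bf16": 24.0, "fp8": 12.0, "int4": 6.0}
KV_CACHE_GB = 5.6      # page's A100 figure for a loaded server
ACTIVATIONS_GB = 8.0
OVERHEAD_GB = 1.9

def fit_rating(vram_gb: float, precision: str) -> str:
    need = WEIGHT_GB[precision] + KV_CACHE_GB + ACTIVATIONS_GB + OVERHEAD_GB
    headroom = vram_gb / need
    if headroom < 1.0:
        return "No fit"
    if headroom < 1.1:
        return "Very tight"
    if headroom < 1.3:
        return "Tight"
    if headroom < 1.6:
        return "Moderate"
    if headroom < 2.0:
        return "Good"
    return "Excellent"

print(fit_rating(48.0, "bf16"))   # RTX A6000 at BF16 -> "Tight"
print(fit_rating(192.0, "bf16"))  # B100 SXM at BF16 -> "Excellent"
```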

GPU Recommendations

A100 40GB SXM (optimal)

BF16 · 1 GPU · vllm
Score: 95/100
Throughput: 349.9 tok/s
Latency (ITL): 2.9 ms
Est. TTFT: 0 ms
Cost/Month: $807
Cost/M Tokens: $0.88
RTX A6000 (optimal)

BF16 · 1 GPU · vllm
Score: 95/100
Throughput: 172.8 tok/s
Latency (ITL): 5.8 ms
Est. TTFT: 1 ms
Cost/Month: $465
Cost/M Tokens: $1.02
A40 (optimal)

BF16 · 1 GPU · vllm
Score: 95/100
Throughput: 156.6 tok/s
Latency (ITL): 6.4 ms
Est. TTFT: 1 ms
Cost/Month: $399
Cost/M Tokens: $0.97
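
The cost-per-million-token figures follow from monthly GPU rental divided by monthly token output at full utilization. A quick check against the A100 row (assuming an average 30.44-day month and sustained throughput, which real deployments rarely achieve):

```python
# Reproduce the A100 40GB SXM cost-per-million-tokens figure.
# Assumes 100% utilization; idle time raises the effective cost.

monthly_cost = 807.0   # USD/month
throughput = 349.9     # tokens/second

seconds_per_month = 30.44 * 24 * 3600
tokens_per_month = throughput * seconds_per_month   # ~920M tokens
cost_per_m = monthly_cost / (tokens_per_month / 1e6)

print(f"${cost_per_m:.2f}/M tokens")  # ~$0.88, matching the card above
```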

Deployment Options

API

API Deployment: no API pricing available.

Self-Hosted

Single GPU: A100 40GB SXM · $807/mo · Min VRAM: 12 GB

Scale

Multi-GPU: RTX 3090 x2 · 343.7 tok/s · tensor parallel (TP) · $361/mo
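
For the multi-GPU row, tensor parallelism shards each layer's weights across both cards. A minimal offline-inference sketch with vLLM, one of the listed frameworks (the Hugging Face repo id is a placeholder, not a confirmed hub path):

```python
# Hypothetical vLLM launch for 2-way tensor parallelism (e.g. RTX 3090 x2).
from vllm import LLM, SamplingParams

llm = LLM(
    model="black-forest-labs/FLUX.2",  # placeholder repo id
    dtype="bfloat16",
    tensor_parallel_size=2,   # shard weights across two GPUs
    max_model_len=4096,       # matches the model's context window
)

out = llm.generate(["Describe this architecture."],
                   SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```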

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

A100 40GB SXM: 349.9 tok/s
RTX A6000: 172.8 tok/s
A40: 156.6 tok/s

VRAM Breakdown (A100 40GB SXM, BF16)

Weights: 24.0 GB · KV-Cache: 5.6 GB · Activations: 8.0 GB · Overhead: 1.9 GB (total ≈ 39.5 GB of 40 GB)

Precision Impact

bf16: 24.0 GB weights/GPU · ~349.9 tok/s
fp8: 12.0 GB weights/GPU
int4: 6.0 GB weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

FLUX.1 Dev

12B params · dense

Quality: 50

from $25.00/M

Smaller context

Gemma 3 12B

12B params · dense

Quality: 71

from $0.10/M

Larger context · Higher quality

Pixtral 12B

12B params · dense

Quality: 50

from $0.15/M

Larger context

Frequently Asked Questions

How much VRAM does FLUX.2 need for inference?

FLUX.2 requires approximately 24.0 GB of VRAM at BF16 precision, 12.0 GB at FP8, or 6.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (196,608 bytes per token) and activations (~1.00 GB).
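
Putting those numbers together, total VRAM scales with context length and batch size. A minimal sketch using only the figures quoted above (single-request case; serving engines reserve more for batching and framework overhead):

```python
# Rough total-VRAM estimate for a FLUX.2 deployment at BF16,
# built from the FAQ's own figures. Engine overhead is extra.

WEIGHTS_GB = 24.0
KV_BYTES_PER_TOKEN = 196_608
ACTIVATIONS_GB = 1.00

def total_vram_gb(context_tokens: int, batch: int = 1) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens * batch / 1e9
    return WEIGHTS_GB + kv_gb + ACTIVATIONS_GB

# One request at the full 4,096-token context:
print(f"{total_vram_gb(4096):.1f} GB")  # ~25.8 GB
```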

What is the best GPU for FLUX.2?

The top recommended GPU for FLUX.2 is the A100 40GB SXM using BF16 precision. It achieves approximately 349.9 tokens/sec at an estimated cost of $807/month ($0.88/M tokens). Score: 95/100.

How much does FLUX.2 inference cost?

FLUX.2 inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.