Falcon 40B

TII · dense · 40B parameters · 2,048-token context

Quality
48.0

Parameters

40B

Context Window

2K tokens

Architecture

Dense

Best GPU

H100 SXM

Cheapest API

$0.80/M

Quality Score

48/100

Intelligence Brief

Falcon 40B is a 40B-parameter dense model from TII, featuring grouped-query attention (GQA) across 60 layers with a hidden dimension of 8,192. Its context window is 2,048 tokens, and it supports code and multilingual tasks. On standardized benchmarks it achieves MMLU 55.4, HumanEval 26.0, and GSM8K 42.0. The most cost-effective API deployment is via tii at $0.80/M output tokens; for self-hosted inference, the H100 SXM delivers optimal throughput at $1794/month.

Architecture Details

Type: Dense
Total Parameters: 40B
Active Parameters: 40B
Layers: 60
Hidden Dimension: 8,192
Attention Heads: 128
KV Heads: 8
Head Dimension: 64
Vocab Size: 65,024

Memory Requirements

BF16 Weights

80.0 GB

FP8 Weights

40.0 GB

INT4 Weights

20.0 GB

KV-Cache per Token: 122,880 bytes
Activation Estimate: 2.00 GB
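
The weight sizes and per-token KV-cache figure follow directly from the architecture table. A minimal sketch of the arithmetic (the bytes-per-parameter values and the standard GQA formula, 2 × layers × KV heads × head dim × bytes per value, are the only assumptions; all model numbers are from this page):

```python
# Back-of-envelope memory math for Falcon 40B, using the numbers
# from the architecture table above.

PARAMS = 40e9      # total parameters (dense, so all are active)
LAYERS = 60
KV_HEADS = 8       # GQA: 128 query heads share 8 KV heads
HEAD_DIM = 64

def weight_gb(bytes_per_param: float) -> float:
    """Weight memory in GB at a given precision."""
    return PARAMS * bytes_per_param / 1e9

# K and V per layer, one head_dim vector per KV head, 2 bytes each (BF16).
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2

print(f"BF16 weights: {weight_gb(2):.1f} GB")    # 80.0 GB
print(f"FP8 weights:  {weight_gb(1):.1f} GB")    # 40.0 GB
print(f"INT4 weights: {weight_gb(0.5):.1f} GB")  # 20.0 GB
print(f"KV-cache per token: {kv_bytes_per_token} bytes")  # 122880
```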

GPU Compatibility Matrix

Falcon 40B is compatible with 52% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions shown: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
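
Underneath, the matrix reduces to a simple test per card: weights at the chosen precision plus working memory against the card's VRAM, bucketed into the legend's bands. A minimal sketch (the overhead figure and band cutoffs here are illustrative assumptions, not this page's exact ones):

```python
# Rough single-GPU fit test behind a matrix like the one above.
# The OVERHEAD_GB value and the utilization bands are illustrative
# assumptions, not the exact cutoffs this page uses.

WEIGHTS_GB = {"BF16": 80.0, "FP8": 40.0, "INT4": 20.0}
OVERHEAD_GB = 4.0  # KV-cache + activations + runtime overhead (assumed)

def fit(vram_gb: float, precision: str) -> str:
    needed = WEIGHTS_GB[precision] + OVERHEAD_GB
    if needed > vram_gb:
        return "No fit"
    used = needed / vram_gb
    return "Very tight" if used > 0.95 else "Tight" if used > 0.85 else "Good"

print(fit(80.0, "BF16"))   # A100 80GB at BF16: No fit (84 GB needed)
print(fit(48.0, "FP8"))    # 48 GB L40S-class card at FP8: Tight (44/48)
print(fit(141.0, "BF16"))  # H200 SXM at BF16: Good
```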

GPU Recommendations

H100 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

665.6 tok/s

Latency (ITL)

1.5ms

Est. TTFT

0ms

Cost/Month

$1794

Cost/M Tokens

$1.03

H100 PCIe · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

397.4 tok/s

Latency (ITL)

2.5ms

Est. TTFT

0ms

Cost/Month

$1794

Cost/M Tokens

$1.72

H20 · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

794.7 tok/s

Latency (ITL)

1.3ms

Est. TTFT

0ms

Cost/Month

$940

Cost/M Tokens

$0.45

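The Cost/M Tokens figures in these cards are derivable from monthly cost and throughput. A minimal sketch (assumes the GPU decodes flat-out around the clock; the 30-day month is an assumption, so outputs differ from the quoted $1.03/$1.72/$0.45 by a couple of percent):

```python
# Reproduce the Cost/M Tokens figures from monthly GPU cost and
# sustained decode throughput. Assumes 24/7 full utilization and a
# 30-day month, so results land within a few percent of the cards above.

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float, days: int = 30) -> float:
    tokens_per_month = tok_per_s * 3600 * 24 * days
    return monthly_usd / (tokens_per_month / 1e6)

print(f"H100 SXM:  ${cost_per_m_tokens(1794, 665.6):.2f}/M")  # ~$1.04
print(f"H100 PCIe: ${cost_per_m_tokens(1794, 397.4):.2f}/M")  # ~$1.74
print(f"H20:       ${cost_per_m_tokens(940, 794.7):.2f}/M")   # ~$0.46
```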

Deployment Options

API

API Deployment

tii

$0.80/M

output tokens

Self-Hosted

Single GPU

H100 SXM

$1794/mo

Min VRAM: 40 GB

Scale

Multi-GPU

A100 80GB SXM x2

258.1 tok/s

Tensor parallel (TP) · $2259/mo
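
A quick way to choose between these options is break-even volume: the monthly token count at which the flat self-hosting cost equals the metered API bill. A minimal sketch (treats all traffic as output tokens at the tii rate; real workloads mix input and output):

```python
# Break-even monthly volume where a flat-cost GPU matches the metered API.
# Treats all billed traffic as output tokens, which is an approximation.

def breakeven_m_tokens(gpu_monthly_usd: float, api_usd_per_m: float) -> float:
    return gpu_monthly_usd / api_usd_per_m

# H100 SXM at $1794/mo vs. the tii API at $0.80/M output tokens.
print(f"~{breakeven_m_tokens(1794, 0.80):,.0f}M tokens/month")  # ~2,242M
```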

API Pricing Comparison

Provider   Input $/M   Output $/M   Badges
tii        $0.80       $0.80        Cheapest

Cost Analysis

Provider           Input $/M   Output $/M   ~Monthly Cost
tii (Best Value)   $0.80       $0.80        $8

Cost per 1,000 Requests

Short (500 tok)

$0.56

via tii

Medium (2K tok)

$2.24

via tii

Long (8K tok)

$8.00

via tii
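
Because tii prices input and output identically, each tier's cost is just total tokens × rate. A minimal sketch (the per-request token totals below are back-solved assumptions that reproduce the quoted figures, since the input/output split per tier is not stated on this page):

```python
# Cost per 1,000 requests at tii's flat $0.80/M rate (input and output
# are priced the same). Total tokens per request are back-solved from
# the quoted costs; the actual input/output split is not stated here.

RATE_PER_TOKEN = 0.80 / 1e6  # dollars per token

def cost_per_1k_requests(tokens_per_request: int) -> float:
    return 1000 * tokens_per_request * RATE_PER_TOKEN

for label, total_tokens in [("Short (500 tok)", 700),
                            ("Medium (2K tok)", 2_800),
                            ("Long (8K tok)", 10_000)]:
    print(f"{label}: ${cost_per_1k_requests(total_tokens):.2f}")
# Short (500 tok): $0.56 / Medium (2K tok): $2.24 / Long (8K tok): $8.00
```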

Performance Estimates

Throughput by GPU

H100 SXM: 665.6 tok/s
H100 PCIe: 397.4 tok/s
H20: 794.7 tok/s

VRAM Breakdown (H100 SXM, FP8)

Weights: 40.0 GB
KV-Cache: 1.0 GB
Activations: 16.0 GB
Overhead: 2.0 GB
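
This breakdown sums to 59.0 GB of the H100 SXM's 80 GB, and the remaining headroom bounds how much further the KV-cache can grow with batch size. A minimal sketch (assumes the BF16 per-token figure from the memory table above; an FP8 KV-cache would roughly double the count):

```python
# Headroom left on an 80 GB H100 SXM after the breakdown above,
# expressed as additional BF16 KV-cache tokens (122,880 bytes each).

GPU_VRAM_GB = 80.0
USED_GB = 40.0 + 1.0 + 16.0 + 2.0  # weights + KV-cache + activations + overhead
KV_BYTES_PER_TOKEN = 122_880       # from the memory table above

headroom_tokens = (GPU_VRAM_GB - USED_GB) * 1e9 / KV_BYTES_PER_TOKEN
print(f"~{headroom_tokens:,.0f} more KV-cache tokens fit")  # ~170,898
```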

Precision Impact

bf16

80.0 GB

weights/GPU

fp8

40.0 GB

weights/GPU

~665.6 tok/s

int4

20.0 GB

weights/GPU

Quality Benchmarks

Overall: Bottom 25% (6th percentile across all models)

MMLU: 55.4 · Bottom 25% (15th pctile)
HumanEval: 26.0 · Bottom 25% (7th pctile)
GSM8K: 42.0 · Bottom 25% (11th pctile)
MT-Bench: 68.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · SGLang · TGI · TensorRT-LLM

Supported Precisions

BF16 (default) · FP8 · INT4

Similar Models

VILA 1.5 40B

40B params · dense

Quality: 73

from $1.00/M

Larger context, higher quality

Aya 23 35B

35B params · dense

Quality: 50

from $1.50/M

Larger context, more expensive

Command R

35B params · dense

Quality: 68

from $0.50/M

Larger context, higher quality, cheaper

Frequently Asked Questions

How much VRAM does Falcon 40B need for inference?

Falcon 40B requires approximately 80.0 GB of VRAM at BF16 precision, 40.0 GB at FP8, or 20.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (122,880 bytes per token) and activations (~2.00 GB).

What is the best GPU for Falcon 40B?

The top recommended GPU for Falcon 40B is the H100 SXM using FP8 precision. It achieves approximately 665.6 tokens/sec at an estimated cost of $1794/month ($1.03/M tokens). Score: 100/100.

How much does Falcon 40B inference cost?

Falcon 40B API inference starts from $0.80/M input tokens and $0.80/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.