Falcon 40B
TII · dense · 40B parameters · 2,048-token context
| Spec | Value |
|---|---|
| Parameters | 40B |
| Context Window | 2K tokens |
| Architecture | Dense |
| Best GPU | H100 SXM |
| Cheapest API | $0.80/M |
| Quality Score | 48/100 |
Intelligence Brief
Falcon 40B is a 40B-parameter dense model from TII, featuring grouped-query attention (GQA) with 60 layers and a hidden dimension of 8,192. With a 2,048-token context window, it supports code and multilingual workloads. On standardized benchmarks, it achieves MMLU 55.4, HumanEval 26, and GSM8K 42. The most cost-effective API deployment is via tii at $0.80/M output tokens. For self-hosted inference, the H100 SXM delivers optimal throughput at $1794/month.
Architecture Details
Memory Requirements
| Precision | Weights |
|---|---|
| BF16 | 80.0 GB |
| FP8 | 40.0 GB |
| INT4 | 20.0 GB |
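The weight footprints above follow directly from the parameter count times bytes per parameter. A minimal sketch (Python; values are decimal GB, matching the table):

```python
PARAMS = 40e9  # Falcon 40B parameter count

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    # 40e9 params x bytes/param, reported in decimal GB
    print(f"{precision}: {PARAMS * nbytes / 1e9:.1f} GB")
# BF16: 80.0 GB, FP8: 40.0 GB, INT4: 20.0 GB
```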
GPU Compatibility Matrix
Falcon 40B is compatible with 52% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| H100 SXM · FP8 · 1 GPU · tensorrt-llm | 100/100 | 665.6 tok/s | 1.5ms | 0ms | $1794 | $1.03 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 397.4 tok/s | 2.5ms | 0ms | $1794 | $1.72 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 794.7 tok/s | 1.3ms | 0ms | $940 | $0.45 |
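The Cost/M Tokens column can be reproduced from monthly GPU cost and sustained throughput, assuming the card runs fully utilized around the clock. A rough sketch:

```python
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    """$/1M tokens at 100% utilization over a 30-day month."""
    tokens_per_month = tok_per_s * 3600 * 24 * 30
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_m_tokens(1794, 665.6):.2f}/M")  # ~$1.04, matching the ~$1.03 above
print(f"${cost_per_m_tokens(940, 794.7):.2f}/M")   # ~$0.46, matching the ~$0.45 above
```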
Deployment Options
- API Deployment: tii · $0.80/M output tokens
- Single GPU: H100 SXM · $1794/mo · min VRAM 40 GB
- Multi-GPU: A100 80GB SXM ×2 (tensor parallel) · 258.1 tok/s · $2259/mo
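The recommendations above assume TensorRT-LLM as the serving framework. As an illustrative alternative, a comparable single-GPU setup can be sketched with vLLM's offline API; treat the FP8 flag and model support as assumptions to verify against your vLLM version:

```python
from vllm import LLM, SamplingParams

# Illustrative single-GPU FP8 deployment; requires an FP8-capable GPU (e.g. H100).
llm = LLM(
    model="tiiuae/falcon-40b",  # Hugging Face model ID for Falcon 40B
    quantization="fp8",         # on-the-fly FP8 weight quantization (assumed supported here)
    max_model_len=2048,         # Falcon 40B's native context window
)

out = llm.generate(
    ["Explain grouped-query attention in one sentence."],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(out[0].outputs[0].text)
```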
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| tii | $0.80 | $0.80 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| tii (Best Value) | $0.80 | $0.80 | $8 |
Cost per 1,000 Requests
| Request Size | Cost / 1K Requests | Provider |
|---|---|---|
| Short (500 tok) | $0.56 | tii |
| Medium (2K tok) | $2.24 | tii |
| Long (8K tok) | $8.00 | tii |
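These figures are consistent with each request carrying input tokens on top of the listed output size, billed at the same $0.80/M rate. The exact prompt-size assumption is not stated on this page, so the reconstruction below is a hedged guess that happens to reproduce the table:

```python
PRICE_PER_M = 0.80  # tii: $0.80/M for both input and output tokens

def cost_per_1k_requests(input_toks: int, output_toks: int) -> float:
    total_tokens = (input_toks + output_toks) * 1000  # 1,000 requests
    return total_tokens * PRICE_PER_M / 1e6

# Implied prompt sizes (assumptions, inferred from the table above):
print(cost_per_1k_requests(200, 500))    # 0.56 -> "Short (500 tok)"
print(cost_per_1k_requests(800, 2000))   # 2.24 -> "Medium (2K tok)"
print(cost_per_1k_requests(2000, 8000))  # 8.00 -> "Long (8K tok)"
```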
Performance Estimates
[Chart: Throughput by GPU]
[Chart: VRAM Breakdown (H100 SXM, FP8)]
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 80.0 GB | — |
| FP8 | 40.0 GB | ~665.6 tok/s |
| INT4 | 20.0 GB | — |
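At INT4 the weights fit on a single 24-48 GB card. A minimal sketch of 4-bit loading with Hugging Face transformers and bitsandbytes; the quantization settings are illustrative defaults, not a tuned configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization: ~20 GB of weights, plus KV-cache and activations.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs if one is not enough
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```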
Similar Models
| Model | Parameters | Architecture | Quality | Cheapest API |
|---|---|---|---|---|
| VILA 1.5 40B | 40B | dense | 73 | from $1.00/M |
| Phi 3.5 MoE | 41.9B | MoE | 74 | — |
| Aya 23 35B | 35B | dense | 50 | from $1.50/M |
| Command R | 35B | dense | 68 | from $0.50/M |
| Command R (August 2024) | 35B | dense | 68 | from $0.60/M |
Frequently Asked Questions
How much VRAM does Falcon 40B need for inference?
Falcon 40B requires approximately 80.0 GB of VRAM at BF16 precision, 40.0 GB at FP8, or 20.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (122,880 bytes per token) and activations (~2.00 GB).
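The 122,880 bytes/token figure is consistent with the stated architecture (60 layers, hidden size 8,192, GQA) under standard KV-cache accounting, assuming 8 KV heads, a head dimension of 64 (8192/128 attention heads), and 2-byte FP16/BF16 cache entries; those per-head values are inferred, not quoted from the spec. A sketch:

```python
def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    # Keys and values are both cached, hence the leading factor of 2.
    return 2 * dtype_bytes * n_layers * n_kv_heads * head_dim

# Assumed Falcon 40B geometry: 60 layers, 8 KV heads (GQA), head_dim = 64
print(kv_cache_bytes_per_token(60, 8, 64))  # 122880, matching the figure above
```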
What is the best GPU for Falcon 40B?
The top recommended GPU for Falcon 40B is the H100 SXM using FP8 precision. It achieves approximately 665.6 tokens/sec at an estimated cost of $1794/month ($1.03/M tokens). Score: 100/100.
How much does Falcon 40B inference cost?
Falcon 40B API inference starts from $0.80/M input tokens and $0.80/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
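A quick way to compare the two options is the break-even volume at which a dedicated GPU beats the API. A back-of-envelope sketch using the figures above:

```python
API_PRICE_PER_M = 0.80  # tii, $/M tokens (input and output priced the same)
GPU_MONTHLY_USD = 1794  # H100 SXM, FP8

# Below this monthly token volume the API is cheaper; above it, self-hosting wins
# (ignoring ops overhead and assuming the GPU can actually serve the load).
breakeven_m_tokens = GPU_MONTHLY_USD / API_PRICE_PER_M
print(f"~{breakeven_m_tokens:,.0f}M tokens/month")  # ~2,242M (~2.2B) tokens/month
```

Note that at 665.6 tok/s a single H100 SXM sustains only about 1.7B tokens per 30-day month, so at these prices the API remains the cheaper option up to that card's full capacity.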