
L40

NVIDIA · Ada · 48 GB GDDR6 · 300W TDP

VRAM: 48 GB
BF16 TFLOPS: 362
Bandwidth: 864 GB/s
From: $0.75/hr


Spec Sheet

VRAM: 48 GB GDDR6
Memory Bandwidth: 864 GB/s
BF16 TFLOPS: 362
FP16 TFLOPS: 362
FP8 TFLOPS: 733
INT8 TOPS: 733
TDP: 300W
Interconnect: PCIe
Max per Node: 8
PCIe Gen: 4
CUDA Compute Capability: 8.9
Tensor Cores: Yes

Pricing by Provider

Provider   | On-Demand | Reserved | Spot     | Badge
tensordock | $0.99/hr  | -        | $0.75/hr | Cheapest
vast_ai    | $1.09/hr  | -        | $0.79/hr | -
coreweave  | $1.58/hr  | $1.14/hr | -        | -
runpod     | $1.59/hr  | -        | $1.19/hr | -

Compatible Models (267)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size | Full Fine-Tune | QLoRA
7B model   | 3 GPUs         | 1 GPU
13B model  | 6 GPUs         | 1 GPU
70B model  | 28 GPUs        | 1 GPU
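
These counts are consistent with a simple memory model: full AdamW fine-tuning in BF16 needs roughly 16 bytes per parameter (2-byte weights, 2-byte gradients, and ~12 bytes of FP32 optimizer state), while 4-bit QLoRA needs about 0.5 bytes per parameter for the frozen base weights plus small adapters. The sketch below adds a 20% headroom factor for activations and fragmentation; both constants are assumptions that happen to reproduce the table above, not a published formula.

```python
import math

L40_VRAM_GB = 48

def gpus_needed(params_billion: float,
                bytes_per_param: float,
                overhead: float = 1.2,
                vram_gb: float = L40_VRAM_GB) -> int:
    """Estimate GPUs needed to hold training state on 48 GB cards.

    bytes_per_param: ~16 for full AdamW fine-tuning in BF16,
                     ~0.5 for 4-bit QLoRA base weights.
    overhead: headroom for activations/fragmentation (assumed 20%).
    """
    total_gb = params_billion * bytes_per_param * overhead
    return max(1, math.ceil(total_gb / vram_gb))

for size_b in (7, 13, 70):
    print(f"{size_b}B: full fine-tune ≈ {gpus_needed(size_b, 16)} GPUs, "
          f"QLoRA ≈ {gpus_needed(size_b, 0.5)} GPU(s)")
# 7B: 3 vs 1, 13B: 6 vs 1, 70B: 28 vs 1 — matching the table
```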

Energy Efficiency

Estimated tokens/second per Watt for popular models

Model         | Efficiency | Precision
Mistral 7B    | 0.39 t/s/W | FP8
Qwen 2.5 7B   | 0.38 t/s/W | FP8
Llama 3.1 8B  | 0.36 t/s/W | FP8
Llama 3.1 70B | 0.04 t/s/W | FP8
Qwen 2.5 72B  | 0.04 t/s/W | FP8
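
These figures match a simple bandwidth-bound decode estimate: at FP8 each generated token streams the full weight set (~1 byte per parameter) through the 864 GB/s memory system, so tokens/s ≈ bandwidth / model size, and dividing by the 300W TDP gives t/s/W. A minimal sketch under those assumptions (real numbers also depend on utilization and per-architecture corrections):

```python
L40_BW_GBS = 864  # rated memory bandwidth
L40_TDP_W = 300   # board power, assumed fully drawn

def fp8_tokens_per_sec_per_watt(params_billion: float) -> float:
    """Single-stream, bandwidth-bound FP8 decode efficiency estimate.

    Assumes each decode step reads all weights once at 1 byte/param
    and the card runs at full TDP. Illustrative, not measured.
    """
    tokens_per_sec = L40_BW_GBS / params_billion  # GB/s over GB of weights
    return tokens_per_sec / L40_TDP_W

print(f"{fp8_tokens_per_sec_per_watt(8):.2f}")   # Llama 3.1 8B  -> 0.36
print(f"{fp8_tokens_per_sec_per_watt(70):.2f}")  # Llama 3.1 70B -> 0.04
```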

Similar GPUs

GPU          | VRAM  | BF16 TFLOPS | BW (GB/s) | From
L40S         | 48 GB | 362         | 864       | $0.85/hr
RTX 6000 Ada | 48 GB | 91.1        | 960       | $0.59/hr
L20          | 48 GB | 239         | 864       | $0.80/hr
L4           | 24 GB | 121         | 300       | $0.29/hr
RTX 4090     | 24 GB | 165         | 1008      | $0.39/hr

Methodology Note

Performance estimates for the L40 are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (48 GB GDDR6 available), KV-cache allocation, and activation memory. Throughput predictions use the L40's rated 864 GB/s memory bandwidth and 362 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Ada). See our full methodology.
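
In roofline terms, each decode step is bounded by whichever is slower: streaming the weights through memory, or running the ~2 FLOPs per parameter per sequence against the compute ceiling. A minimal sketch of that model (the 2-FLOPs/param rule and the neutral correction factor are assumptions standing in for InferenceBench's empirical constants):

```python
def roofline_tokens_per_sec(params_billion: float,
                            bytes_per_param: float,
                            batch: int,
                            bw_gbs: float = 864.0,   # L40 bandwidth ceiling
                            tflops: float = 733.0,   # L40 FP8 compute ceiling
                            correction: float = 1.0) -> float:
    """Roofline ceiling for decode throughput (tokens/s across the batch).

    Memory bound: weights are read once per step and shared by the batch.
    Compute bound: ~2 FLOPs per parameter per generated token, per sequence.
    `correction` stands in for the per-architecture empirical factor
    (assumed neutral here).
    """
    mem_s = params_billion * bytes_per_param / bw_gbs          # s per step
    comp_s = 2.0 * params_billion * batch / (tflops * 1000.0)  # s per step
    return correction * batch / max(mem_s, comp_s)

# Single-stream FP8 8B model: bandwidth-bound at ~108 tokens/s.
print(round(roofline_tokens_per_sec(8, bytes_per_param=1.0, batch=1)))
```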

Frequently Asked Questions

How many AI models can run on L40?

The L40 can run 267 AI models from our database within a single node. Compatible models span a wide range of parameter sizes, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x L40.
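
As a rough fit check, compare the quantized weight footprint against usable VRAM per GPU. The sketch below holds back 10% of VRAM for KV-cache and activations; that reserve is an assumed figure, since real requirements depend on context length and batch size.

```python
import math

L40_VRAM_GB = 48
MAX_GPUS_PER_NODE = 8

def gpus_required(params_billion: float, bytes_per_param: float,
                  reserve_frac: float = 0.10) -> int | None:
    """GPUs needed to hold model weights, or None if one node can't fit it.

    bytes_per_param: 2.0 (BF16), 1.0 (FP8), 0.5 (INT4).
    reserve_frac: VRAM held back for KV-cache/activations (assumed).
    """
    usable_gb = L40_VRAM_GB * (1 - reserve_frac)
    gpus = max(1, math.ceil(params_billion * bytes_per_param / usable_gb))
    return gpus if gpus <= MAX_GPUS_PER_NODE else None

print(gpus_required(8, 2.0))   # 8B in BF16 -> 1 GPU
print(gpus_required(70, 1.0))  # 70B in FP8 -> 2 GPUs
print(gpus_required(70, 0.5))  # 70B in INT4 -> 1 GPU
```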

What is the L40 inference throughput?

The L40 delivers 362 BF16 TFLOPS and 733 FP8 TFLOPS with 864 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does L40 cost per hour?

The L40 is available starting from $0.75/hour via tensordock. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
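
As a worked example, a full 30-day month (720 hours) at the tensordock spot rate comes to roughly 720 × $0.75 ≈ $540, versus about 720 × $0.99 ≈ $713 on-demand; spot capacity can be preempted, so the discount trades off against availability.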