
NVLM-D 72B

NVIDIA · dense · 72B parameters · 32,768 context

Quality Score: 79/100

Parameters: 72B

Context Window: 32K tokens

Architecture: Dense

Best GPU: H200 SXM

Intelligence Brief

NVLM-D 72B is a 72B-parameter dense model from NVIDIA, featuring Grouped Query Attention (GQA) with 80 layers and a hidden dimension of 8,192. With a 32,768-token context window, it supports tool use, vision, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores 82 on MMLU and 65 on HumanEval. For self-hosted inference, the H200 SXM delivers optimal throughput at an estimated $2553/month.

Architecture Details

Type: Dense
Total Parameters: 72B
Active Parameters: 72B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 152,064
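
A quick way to see how these numbers hang together: with grouped-query attention, the 64 query heads of size 128 tile the 8,192-wide hidden state, while only 8 KV heads are cached, which is what keeps the KV-cache small. A minimal check (pure arithmetic, no model weights involved):

```python
# Consistency check on the attention geometry listed above (GQA).
hidden, heads, kv_heads, head_dim = 8192, 64, 8, 128

assert heads * head_dim == hidden                      # 64 * 128 == 8192
print("query heads per KV head:", heads // kv_heads)   # 8
print("KV values per layer per token:", 2 * kv_heads * head_dim)  # 2,048
```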

Memory Requirements

BF16 Weights: 144.0 GB
FP8 Weights: 72.0 GB
INT4 Weights: 36.0 GB
KV-Cache per Token: 327,680 bytes
Activation Estimate: 2.50 GB
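
The weight and KV-cache figures follow directly from the architecture table. A sketch of the arithmetic, using decimal GB as above (real deployments add runtime overhead on top):

```python
# Approximate memory math for NVLM-D 72B, using the architecture values above.
PARAMS   = 72e9
LAYERS   = 80
KV_HEADS = 8
HEAD_DIM = 128

# Weight memory per precision (bytes per parameter: BF16=2, FP8=1, INT4=0.5).
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: {PARAMS * bytes_per_param / 1e9:.1f} GB")  # 144.0 / 72.0 / 36.0

# KV-cache per token at BF16: one K and one V vector per layer,
# each kv_heads * head_dim values at 2 bytes per value.
kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV-cache per token: {kv_per_token:,} bytes")           # 327,680
```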

GPU Compatibility Matrix

NVLM-D 72B is compatible with 37% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (full), FP8 (half), INT4 (quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB
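
The matrix boils down to a capacity check per (GPU, precision) pair. A simplified sketch using the weight sizes above; the 10% headroom factor is an assumption for illustration, not the exact scoring rule behind the fit ratings:

```python
# Simplified single-GPU fit check: weights must leave some headroom for
# KV-cache, activations, and framework overhead. The 0.9 factor is an
# assumed rule of thumb, not the actual rating formula used on this page.
WEIGHTS_GB = {"BF16": 144.0, "FP8": 72.0, "INT4": 36.0}
GPUS_GB = {"H200 SXM": 141, "A100 80GB SXM": 80, "L40S": 48}

for gpu, vram in GPUS_GB.items():
    fits = [p for p, w in WEIGHTS_GB.items() if w <= 0.9 * vram]
    print(f"{gpu}: {', '.join(fits) if fits else 'no single-GPU fit'}")
```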

GPU Recommendations

H200 SXM (optimal)

FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 557.7 tok/s
Latency (ITL): 1.8 ms
Est. TTFT: 0 ms
Cost/Month: $2553
Cost/M Tokens: $1.74

B200 SXM (optimal)

FP8 · 1 GPU · tensorrt-llm
Score: 98/100
Throughput: 560.0 tok/s
Latency (ITL): 1.8 ms
Est. TTFT: 0 ms
Cost/Month: $4261
Cost/M Tokens: $2.90

B100 SXM (optimal)

FP8 · 1 GPU · tensorrt-llm
Score: 98/100
Throughput: 560.0 tok/s
Latency (ITL): 1.8 ms
Est. TTFT: 0 ms
Cost/Month: $4271
Cost/M Tokens: $2.90
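
The cost-per-million-token figures above are just monthly GPU cost divided by monthly token output. A quick sanity check, assuming roughly 730 GPU-hours per month and fully saturated throughput (both assumptions; lower utilization raises the effective cost):

```python
# Cost per million output tokens = monthly GPU cost / millions of tokens per month.
HOURS_PER_MONTH = 730  # assumed; roughly 24 * 30.4

def cost_per_million_tokens(monthly_usd: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * 3600 * HOURS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_million_tokens(2553, 557.7), 2))  # ~1.74  (H200 SXM)
print(round(cost_per_million_tokens(4261, 560.0), 2))  # ~2.90  (B200 SXM)
```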

Deployment Options

API Deployment

No API pricing available.

Self-Hosted (single GPU)

H200 SXM · $2553/mo
Min VRAM: 72 GB

Scale (multi-GPU)

H100 SXM x2 · 560.0 tok/s
Tensor parallel (TP) · $3587/mo
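
For the multi-GPU option, both supported frameworks shard the model across GPUs with tensor parallelism. A minimal vLLM sketch of that setup; the model ID, FP8 quantization mode, and memory settings are assumptions to adapt, and you should confirm that your vLLM build supports this architecture before relying on it:

```python
# Hypothetical 2-GPU tensor-parallel deployment with vLLM.
# "nvidia/NVLM-D-72B" is an assumed Hugging Face repo id; verify model
# support and FP8 availability on your GPUs first.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/NVLM-D-72B",
    tensor_parallel_size=2,        # e.g. the 2x H100 SXM option above
    quantization="fp8",            # halves weight memory vs BF16
    max_model_len=32768,           # full 32K context window
    gpu_memory_utilization=0.90,
    trust_remote_code=True,
)

outputs = llm.generate(
    ["Summarize the benefits of tensor parallelism in one sentence."],
    SamplingParams(temperature=0.2, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```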

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

H200 SXM: 557.7 tok/s
B200 SXM: 560.0 tok/s
B100 SXM: 560.0 tok/s

VRAM Breakdown (H200 SXM, FP8)

Weights: 72.0 GB
KV-Cache: 2.7 GB
Activations: 20.0 GB
Overhead: 3.6 GB
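
These same components also bound how much KV-cache, and therefore how many concurrent context tokens, one H200 can hold once the weights are resident. A rough sketch using the estimates above; actual capacity depends on the serving runtime:

```python
# Leftover VRAM on one H200 SXM (141 GB) after FP8 weights, activations,
# and overhead, expressed as a KV-cache token budget (BF16 KV values).
VRAM_GB = 141.0
WEIGHTS_GB, ACTIVATIONS_GB, OVERHEAD_GB = 72.0, 20.0, 3.6
KV_BYTES_PER_TOKEN = 327_680

free_gb = VRAM_GB - WEIGHTS_GB - ACTIVATIONS_GB - OVERHEAD_GB
print(f"free for KV-cache: {free_gb:.1f} GB")                       # ~45.4 GB
print(f"token budget: {free_gb * 1e9 / KV_BYTES_PER_TOKEN:,.0f}")   # ~138,550 tokens
```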

Precision Impact

BF16: 144.0 GB weights/GPU
FP8: 72.0 GB weights/GPU · ~557.7 tok/s
INT4: 36.0 GB weights/GPU

Quality Benchmarks

Above Average: 86th percentile across all models
MMLU: 82.0 · Average (61st percentile)
HumanEval: 65.0 · Above Average (77th percentile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

Smaller context: Qwen 2.5 72B · 72.7B params · dense · Quality: 77 · from $0.90/M


Frequently Asked Questions

How much VRAM does NVLM-D 72B need for inference?

NVLM-D 72B requires approximately 144.0 GB of VRAM at BF16 precision, 72.0 GB at FP8, or 36.0 GB at INT4 quantization. Additional VRAM is needed for KV-cache (327,680 bytes per token) and activations (~2.50 GB).

What is the best GPU for NVLM-D 72B?

The top recommended GPU for NVLM-D 72B is the H200 SXM using FP8 precision. It achieves approximately 557.7 tokens/sec at an estimated cost of $2553/month ($1.74/M tokens). Score: 100/100.

How much does NVLM-D 72B inference cost?

NVLM-D 72B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.