
Nemotron-3 Super 120B

NVIDIA · dense · 120B parameters · 131,072-token context

Quality: 84.0
Parameters: 120B
Context Window: 128K tokens
Architecture: Dense
Best GPU: B200 SXM
Cheapest API: $2.40/M
Quality Score: 84/100

Intelligence Brief

Nemotron-3 Super 120B is a 120B-parameter dense model from NVIDIA, featuring Grouped Query Attention (GQA) with 80 layers and an 8,192 hidden dimension. With a 131,072-token context window, it supports tool use, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it achieves MMLU 85, HumanEval 70, and GSM8K 90. The most cost-effective API deployment is via nvidia at $2.40/M output tokens. For self-hosted inference, the B200 SXM delivers optimal throughput at $4261/month.

Architecture Details

Type: Dense
Total Parameters: 120B
Active Parameters: 120B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 128,256

Memory Requirements

BF16 Weights: 240.0 GB
FP8 Weights: 120.0 GB
INT4 Weights: 60.0 GB
KV-Cache per Token: 327,680 bytes
Activation Estimate: 3.50 GB
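
Both the weight footprints and the per-token KV-cache figure follow directly from the architecture table above; a minimal sketch of the arithmetic:

```python
# Weight memory at each precision: 120B parameters x bytes per parameter
params = 120e9
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 1e9:.1f} GB")
# BF16: 240.0 GB / FP8: 120.0 GB / INT4: 60.0 GB

# KV-cache per token at BF16: layers x KV heads x head dim
# x 2 tensors (K and V) x 2 bytes per value
layers, kv_heads, head_dim = 80, 8, 128
print(layers * kv_heads * head_dim * 2 * 2)  # 327680 bytes
```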

GPU Compatibility Matrix

Nemotron-3 Super 120B is compatible with 21% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94GB (per GPU pair) 188 GB · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
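
A rough way to read the matrix is to check whether a GPU's VRAM covers the model weights plus headroom for KV-cache and activations at a given precision. A minimal sketch; the 90% utilization cap and 10 GB headroom are assumptions for illustration, not the site's scoring formula:

```python
# Hypothetical fit check: does the model fit on one GPU at a given precision?
WEIGHT_GB = {"bf16": 240.0, "fp8": 120.0, "int4": 60.0}

def fits(gpu_vram_gb: float, precision: str, headroom_gb: float = 10.0) -> bool:
    """True if weights plus assumed KV-cache/activation headroom fit in 90% of VRAM."""
    return WEIGHT_GB[precision] + headroom_gb <= 0.9 * gpu_vram_gb

print(fits(192, "fp8"))   # True  -> B100 SXM / GB200 NVL72 can serve FP8
print(fits(141, "fp8"))   # False -> H200 SXM is too tight under these assumptions
print(fits(80, "int4"))   # True  -> A100 80GB can hold the INT4 weights
```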

GPU Recommendations

B200 SXM · optimal

FP8 · 1 GPU · tensorrt-llm · Score: 100/100

Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $4261
Cost/M Tokens: $5.79

Use this config →

B100 SXM · optimal

FP8 · 1 GPU · tensorrt-llm · Score: 100/100

Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $4271
Cost/M Tokens: $5.80

Use this config →

GB200 NVL72 (per GPU) · optimal

FP8 · 1 GPU · tensorrt-llm · Score: 100/100

Throughput: 280.0 tok/s
Latency (ITL): 3.6 ms
Est. TTFT: 1 ms
Cost/Month: $6169
Cost/M Tokens: $8.38

Use this config →
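
The $/M-token figures above are consistent with dividing the monthly cost by a full month of tokens at the quoted throughput, assuming 100% utilization and an average month of roughly 730.5 hours; a minimal sketch:

```python
# $/M tokens = monthly cost / millions of tokens generated in an average month
HOURS_PER_MONTH = 730.5  # average-month assumption; reproduces the figures above

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_m = tok_per_s * HOURS_PER_MONTH * 3600 / 1e6
    return monthly_usd / tokens_m

print(round(cost_per_m_tokens(4261, 280.0), 2))  # 5.79 (B200 SXM)
print(round(cost_per_m_tokens(6169, 280.0), 2))  # 8.38 (GB200 NVL72)
```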

Deployment Options

API

API Deployment: nvidia · $2.40/M output tokens
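
A minimal sketch of calling a hosted endpoint through an OpenAI-compatible client. The base_url points at NVIDIA's integrate.api.nvidia.com gateway, and the model id is a hypothetical placeholder; check the provider's catalog for the actual values:

```python
# Hypothetical API call via the OpenAI-compatible client; the model id is
# an assumption -- substitute the id published by your provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b",  # hypothetical id; check the catalog
    messages=[{"role": "user", "content": "Summarize GQA in one sentence."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```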

Self-Hosted

Single GPU: B200 SXM · $4261/mo · Min VRAM: 120 GB

Scale

Multi-GPU: H20 ×2 · 280.0 tok/s · TP (tensor parallel) · $1879/mo

API Pricing Comparison

Provider    Input $/M    Output $/M    Badges
nvidia      $0.80        $2.40         Cheapest

Cost Analysis

Provider               Input $/M    Output $/M    ~Monthly Cost
nvidia (Best Value)    $0.80        $2.40         $16

Cost per 1,000 Requests

Short (500 tok): $0.88 via nvidia
Medium (2K tok): $3.52 via nvidia
Long (8K tok): $11.20 via nvidia
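
These totals are consistent with applying the per-token prices to an assumed input/output split per request (for example, 500 input and 200 output tokens in the short case); a minimal sketch, with the splits reverse-engineered from the totals above rather than published:

```python
# Cost of 1,000 requests at nvidia's prices; the token splits are assumptions
# that reproduce the totals above, not published figures.
IN_PRICE, OUT_PRICE = 0.80, 2.40  # $ per million tokens

def cost_per_1k(in_tok: int, out_tok: int, requests: int = 1000) -> float:
    return requests * (in_tok * IN_PRICE + out_tok * OUT_PRICE) / 1e6

print(cost_per_1k(500, 200))    # 0.88  (short)
print(cost_per_1k(2000, 800))   # 3.52  (medium)
print(cost_per_1k(8000, 2000))  # 11.20 (long)
```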

Performance Estimates

Throughput by GPU

B200 SXM: 280.0 tok/s
B100 SXM: 280.0 tok/s
GB200 NVL72 (per GPU): 280.0 tok/s

VRAM Breakdown (B200 SXM, FP8)

Weights 120.0 GB · KV-Cache 2.7 GB · Activations 28.0 GB · Overhead 6.0 GB
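
The 2.7 GB KV-cache line is consistent with one full 8,192-token batch at the per-token figure from the memory table; a quick check:

```python
# 8,192 cached tokens x 327,680 bytes/token ~= 2.7 GB
print(round(8192 * 327680 / 1e9, 2))  # 2.68 GB, the 2.7 GB shown above
```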

Precision Impact

bf16: 240.0 GB weights/GPU
fp8: 120.0 GB weights/GPU · ~280.0 tok/s
int4: 60.0 GB weights/GPU

Quality Benchmarks

Top 10%: 93rd percentile across all models

MMLU: 85.0 · Average (74th pctile)
HumanEval: 70.0 · Above Average (84th pctile)
GSM8K: 90.0 · Average (68th pctile)

Capabilities

Features

Tool Use Vision Code Math Reasoning Multilingual Structured Output

Supported Frameworks

vllm · sglang · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

Nemotron 70B

70.6B params · dense

Quality: 83

from $0.88/M

More expensive · Larger model · Compare →

Nemotron 340B

340B params · dense

Quality: 85

from $4.20/M

More expensive · Larger model · Compare →

Frequently Asked Questions

How much VRAM does Nemotron-3 Super 120B need for inference?

Nemotron-3 Super 120B requires approximately 240.0 GB of VRAM at BF16 precision, 120.0 GB at FP8, or 60.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (327,680 bytes per token) and activations (~3.50 GB).

What is the best GPU for Nemotron-3 Super 120B?

The top recommended GPU for Nemotron-3 Super 120B is the B200 SXM using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $4261/month ($5.79/M tokens). Score: 100/100.

How much does Nemotron-3 Super 120B inference cost?

Nemotron-3 Super 120B API inference starts from $0.80/M input tokens and $2.40/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.