
Llama 3.1 Nemotron 51B

NVIDIA · dense · 51B parameters · 131,072-token context

Quality: 78.0

Architecture Details

Type: Dense
Total Parameters: 51B
Active Parameters: 51B
Layers: 64
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 128,256

Memory Requirements

BF16 Weights: 102.0 GB
FP8 Weights: 51.0 GB
INT4 Weights: 25.5 GB
KV-Cache per Token: 262,144 bytes
Activation Estimate: 2.00 GB
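
These figures follow directly from the architecture table above. A quick sanity check in Python (assuming 1 GB = 10^9 bytes and BF16 KV-cache entries):

    PARAMS = 51e9      # total parameters
    LAYERS = 64
    KV_HEADS = 8
    HEAD_DIM = 128

    def weight_gb(bytes_per_param):
        # Weight memory in GB (1 GB = 1e9 bytes) at a given precision.
        return PARAMS * bytes_per_param / 1e9

    print(weight_gb(2.0))   # BF16 -> 102.0 GB
    print(weight_gb(1.0))   # FP8  ->  51.0 GB
    print(weight_gb(0.5))   # INT4 ->  25.5 GB

    # KV cache per token: K and V, per layer, per KV head, per head
    # dimension, at 2 bytes per value in BF16.
    kv_bytes = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
    print(kv_bytes)         # 262144 bytes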

Fits on (single-node)

B200 SXM (BF16)
B100 SXM (BF16)
GB200 NVL72, per GPU (BF16)
GB300 NVL72, per GPU (BF16)
H200 SXM (BF16)
H100 SXM (FP8)
H100 PCIe (FP8)
H100 NVL (FP8)
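
A fit check like this presumably compares total memory demand against single-GPU memory. A hypothetical reconstruction (the page's exact rule is not shown; the GPU memory sizes and the 8,192-token example below are illustrative):

    KV_BYTES_PER_TOKEN = 262_144   # BF16 KV cache, from the table above
    ACTIVATION_GB = 2.0            # activation estimate, from the table above

    def fits(gpu_mem_gb, weights_gb, context_tokens, batch=1):
        # Weights + KV cache for the batch + activation headroom
        # must fit within a single GPU's memory.
        kv_gb = KV_BYTES_PER_TOKEN * context_tokens * batch / 1e9
        return weights_gb + kv_gb + ACTIVATION_GB <= gpu_mem_gb

    print(fits(80, 102.0, 8192))   # H100 SXM (80 GB), BF16 -> False
    print(fits(80, 51.0, 8192))    # H100 SXM (80 GB), FP8  -> True
    print(fits(192, 102.0, 8192))  # B200 SXM (192 GB), BF16 -> True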

GPU Recommendations

H100 SXM — optimal
Config: FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 549.5 tok/s
Cost/Month: $1794
Cost/M Tokens: $1.24
H100 PCIe — optimal
Config: FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 328.1 tok/s
Cost/Month: $1794
Cost/M Tokens: $2.08
H100 NVL — optimal
Config: FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 560.0 tok/s
Cost/Month: $2932
Cost/M Tokens: $1.99
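
The Cost/M Tokens column is consistent with monthly cost divided by tokens generated at the listed throughput over a 730-hour month (the 730-hour figure is an assumption, but it reproduces all three values):

    def cost_per_m_tokens(monthly_usd, tok_per_s, hours_per_month=730):
        # Assumes the GPU streams tokens continuously all month.
        tokens_per_month = tok_per_s * hours_per_month * 3600
        return monthly_usd / (tokens_per_month / 1e6)

    print(round(cost_per_m_tokens(1794, 549.5), 2))  # H100 SXM  -> 1.24
    print(round(cost_per_m_tokens(1794, 328.1), 2))  # H100 PCIe -> 2.08
    print(round(cost_per_m_tokens(2932, 560.0), 2))  # H100 NVL  -> 1.99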

API Pricing Comparison

Provider      Input $/M   Output $/M   Badges
nvidia-nim    $0.40       $0.40        Cheapest
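
NVIDIA NIM exposes an OpenAI-compatible API. A minimal sketch; the base URL and model id below are assumptions, so verify both against the provider's docs:

    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
        api_key="YOUR_NVIDIA_API_KEY",
    )

    resp = client.chat.completions.create(
        model="nvidia/llama-3.1-nemotron-51b-instruct",  # assumed model id
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=128,
    )
    print(resp.choices[0].message.content)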

Quality Benchmarks

MMLU: 78.0
HumanEval: 50.0
GSM8K: 86.0
MT-Bench: 82.0

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

tensorrt-llm · vllm · sglang

Supported Precisions

BF16 (default) · FP8 · INT4
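
For self-hosting, a minimal offline-inference sketch with vLLM (one of the listed frameworks), served in FP8 so the 51 GB of weights fit on a single 80 GB H100. The Hugging Face model id is an assumption; verify before use:

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="nvidia/Llama-3_1-Nemotron-51B-Instruct",  # assumed HF id
        quantization="fp8",       # one of the listed precisions
        max_model_len=131072,     # the model's full context length
        tensor_parallel_size=1,   # FP8 weights (51 GB) fit one 80 GB GPU
        trust_remote_code=True,   # may be required for this architecture
    )

    params = SamplingParams(max_tokens=128, temperature=0.7)
    out = llm.generate(["Explain grouped-query attention briefly."], params)
    print(out[0].outputs[0].text)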

Similar Models