
Llama 3.1 Nemotron 70B Reward

NVIDIA · dense · 70.6B parameters · 131,072-token context

Quality: 80.0

Architecture Details

Type: Dense
Total Parameters: 70.6B
Active Parameters: 70.6B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 128,256
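
As a sanity check, the 70.6B total is roughly what these dimensions imply for a Llama-style dense transformer. In the sketch below, the FFN intermediate size (28,672) and the untied embedding/output head are assumptions taken from the standard Llama 3.1 70B architecture, not values listed on this page.

```python
# Rough parameter count from the architecture details above (norms/biases omitted).
# Assumed, not listed on this page: FFN intermediate size 28,672 and an
# untied input embedding + output head, as in the standard Llama 3.1 70B.

hidden, layers, vocab = 8192, 80, 128_256
n_heads, n_kv_heads, head_dim = 64, 8, 128
ffn = 28_672  # assumed

attn = hidden * n_heads * head_dim            # Wq
attn += 2 * hidden * n_kv_heads * head_dim    # Wk, Wv (grouped-query attention)
attn += n_heads * head_dim * hidden           # Wo
mlp = 3 * hidden * ffn                        # gate, up, down projections
embeddings = 2 * vocab * hidden               # input embedding + output head

total = layers * (attn + mlp) + embeddings
print(f"~{total / 1e9:.1f}B parameters")      # ~70.6B
```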

Memory Requirements

BF16 Weights: 141.2 GB
FP8 Weights: 70.6 GB
INT4 Weights: 35.3 GB
KV-Cache per Token: 327,680 bytes
Activation Estimate: 2.50 GB
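
The weight and KV-cache figures follow directly from the parameter count and attention shape above. A minimal sketch, assuming decimal gigabytes (1 GB = 1e9 bytes) and a BF16 KV-cache:

```python
# Reproduce the memory figures above from the architecture details.

params = 70.6e9
layers, kv_heads, head_dim = 80, 8, 128

for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision} weights: {params * bytes_per_param / 1e9:.1f} GB")

# One K vector and one V vector per layer: kv_heads * head_dim values each,
# 2 bytes per value in BF16.
kv_cache_per_token = 2 * layers * kv_heads * head_dim * 2
print(f"KV-cache per token: {kv_cache_per_token} bytes")  # 327,680
```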

Fits on (single-node)

B200 SXM (BF16)
B100 SXM (BF16)
GB200 NVL72, per GPU (BF16)
GB300 NVL72, per GPU (BF16)
H200 SXM (FP8)
H100 SXM (INT4)
H100 PCIe (INT4)
H100 NVL (FP8)
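
This list reduces to a simple capacity check: weights at a given precision, plus activations and some KV-cache headroom, must fit in a single card's memory. A rough sketch, where the GPU capacities and the 15 GB KV budget are illustrative assumptions rather than figures from this page:

```python
# Single-GPU fit check: pick the highest precision whose weights, activations,
# and an assumed KV-cache budget fit in the card's memory.

GPU_MEMORY_GB = {"B200 SXM": 192, "H200 SXM": 141, "H100 SXM": 80}  # assumed capacities
WEIGHTS_GB = {"BF16": 141.2, "FP8": 70.6, "INT4": 35.3}             # from the table above
ACTIVATIONS_GB = 2.5
KV_BUDGET_GB = 15.0  # assumed headroom (~45k cached tokens at 327,680 B/token)

for gpu, capacity in GPU_MEMORY_GB.items():
    for precision, weights in WEIGHTS_GB.items():
        if weights + ACTIVATIONS_GB + KV_BUDGET_GB <= capacity:
            print(f"{gpu}: fits at {precision}")
            break  # report only the highest precision that fits
```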

GPU Recommendations

GPU        Rating    Config                       Score     Throughput    Cost/Month   Cost/M Tokens
H200 SXM   optimal   FP8 · 1 GPU · tensorrt-llm   100/100   560.0 tok/s   $2553        $1.73
H20        optimal   FP8 · 1 GPU · tensorrt-llm   100/100   474.0 tok/s   $940         $0.75
GH200      optimal   FP8 · 1 GPU · tensorrt-llm   100/100   474.0 tok/s   $2838        $2.28
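
Cost/M Tokens follows from Cost/Month and Throughput under full utilization. A minimal sketch, assuming a 730-hour billing month (this convention reproduces the table's figures):

```python
# Derive Cost/M Tokens from Cost/Month and Throughput, assuming the GPU runs
# at the quoted throughput around the clock for a 730-hour month.

SECONDS_PER_MONTH = 730 * 3600

def cost_per_million_tokens(cost_per_month: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * SECONDS_PER_MONTH
    return cost_per_month / (tokens_per_month / 1e6)

for gpu, monthly, tps in [("H200 SXM", 2553, 560.0), ("H20", 940, 474.0), ("GH200", 2838, 474.0)]:
    print(f"{gpu}: ${cost_per_million_tokens(monthly, tps):.2f}/M tokens")
# H200 SXM: $1.73/M tokens, H20: $0.75/M tokens, GH200: $2.28/M tokens
```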

API Pricing Comparison

Provider     Input $/M   Output $/M   Badges
nvidia-nim   $0.50       $0.50        Cheapest

Quality Benchmarks

MMLU: 80.0
HumanEval: 52.0
GSM8K: 88.0
MT-Bench: 83.0

Capabilities

Features

Tool Use, Vision, Code, Math, Reasoning, Multilingual, Structured Output

Supported Frameworks

tensorrt-llm, vllm, sglang

Supported Precisions

BF16 (default), FP8, INT4
