Llama 3.1 Nemotron 51B
NVIDIA · dense · 51B parameters · 131,072 context
Quality: 78.0
Architecture Details
| Attribute | Value |
|---|---|
| Type | Dense |
| Total parameters | 51B |
| Active parameters | 51B |
| Layers | 64 |
| Hidden dimension | 8,192 |
| Attention heads | 64 |
| KV heads | 8 |
| Head dimension | 128 |
| Vocab size | 128,256 |
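The attention geometry above is self-consistent; a quick sketch in plain Python (values copied from the table, treating the listed geometry as uniform across all layers) shows how the head counts relate to the hidden dimension and the grouped-query attention ratio:

```python
# Sanity-check the attention geometry from the table above.
# Assumption: the listed values are uniform across all 64 layers.
hidden_dim = 8192
attention_heads = 64
kv_heads = 8
head_dim = 128

# Query projection width matches the hidden dimension: 64 heads x 128 dims each.
assert attention_heads * head_dim == hidden_dim

# Grouped-query attention: each KV head is shared by 64 / 8 = 8 query heads,
# which is what keeps the KV cache small relative to the hidden size.
gqa_group_size = attention_heads // kv_heads
print(f"{gqa_group_size} query heads per KV head")
```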
Memory Requirements
| Precision | Weight Memory |
|---|---|
| BF16 | 102.0 GB |
| FP8 | 51.0 GB |
| INT4 | 25.5 GB |

KV-cache per token: 262,144 bytes
Activation estimate: 2.00 GB
Fits on (single-node)
B200 SXM (BF16) · B100 SXM (BF16) · GB200 NVL72 per GPU (BF16) · GB300 NVL72 per GPU (BF16) · H200 SXM (BF16) · H100 SXM (FP8) · H100 PCIe (FP8) · H100 NVL (FP8)
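The weight and KV-cache figures above follow directly from the parameter count and the attention geometry. A minimal sketch reproducing them, assuming decimal gigabytes (1 GB = 10^9 bytes) and a BF16 (2-byte-per-value) KV cache:

```python
# Reproduce the memory figures from the parameter count and attention geometry.
# Assumptions: 1 GB = 1e9 bytes; KV cache stored in BF16 (2 bytes per value).
params = 51e9
layers, kv_heads, head_dim = 64, 8, 128
max_context = 131_072

for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision} weights: {params * bytes_per_param / 1e9:.1f} GB")
# -> 102.0 GB, 51.0 GB, 25.5 GB

# KV cache per token: keys + values for every layer and KV head.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2
print(f"KV cache per token: {kv_bytes_per_token} bytes")   # 262,144

# One sequence at the full context window:
print(f"Full-context KV cache: {kv_bytes_per_token * max_context / 1e9:.1f} GB")  # ~34.4 GB
```

This is consistent with the single-node list above: the ~102 GB BF16 footprint needs large-memory parts (H200 and the Blackwell generation), while the H100 variants appear only with FP8 weights, whose 51 GB footprint leaves room for KV cache and activations.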
GPU Recommendations
| GPU | Rating | Configuration | Score | Throughput | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| H100 SXM | Optimal | FP8 · 1 GPU · tensorrt-llm | 100/100 | 549.5 tok/s | $1,794 | $1.24 |
| H100 PCIe | Optimal | FP8 · 1 GPU · tensorrt-llm | 100/100 | 328.1 tok/s | $1,794 | $2.08 |
| H100 NVL | Optimal | FP8 · 1 GPU · tensorrt-llm | 100/100 | 560.0 tok/s | $2,932 | $1.99 |
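The Cost/M Tokens column appears to be monthly GPU cost divided by tokens generated per month; a sketch that reproduces the table, assuming a ~730-hour month and a fully utilized GPU (both assumptions mine, not stated here):

```python
# Reproduce Cost/M Tokens from Cost/Month and Throughput.
# Assumptions: ~730 hours per month (8,760 h / 12) and 100% decode utilization.
HOURS_PER_MONTH = 730

def cost_per_million_tokens(monthly_cost: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * HOURS_PER_MONTH * 3600
    return monthly_cost / (tokens_per_month / 1e6)

for gpu, monthly, tps in [("H100 SXM", 1794, 549.5),
                          ("H100 PCIe", 1794, 328.1),
                          ("H100 NVL", 2932, 560.0)]:
    print(f"{gpu}: ${cost_per_million_tokens(monthly, tps):.2f}/M tokens")
# -> $1.24, $2.08, $1.99, matching the table
```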
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| nvidia-nim | $0.40 | $0.40 | Cheapest |
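At these rates the hosted endpoint undercuts even the best self-hosted per-token figure above ($0.40/M vs. roughly $1.24/M on a fully utilized H100 SXM), so a dedicated GPU mainly pays off for latency, data-residency, or guaranteed-capacity reasons. A rough monthly estimate for a hypothetical workload (volumes are illustrative, not from this page):

```python
# Rough monthly cost for a hypothetical workload at the listed API rates.
# Volumes are illustrative assumptions, not figures from this page.
input_millions, output_millions = 200, 50           # tokens per month, in millions

api_cost = input_millions * 0.40 + output_millions * 0.40
print(f"nvidia-nim API: ${api_cost:,.0f}/month")    # $100 for this workload

# For comparison, the cheapest dedicated configuration above is a flat cost
# regardless of utilization:
print("Dedicated H100 SXM: $1,794/month")
```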
Quality Benchmarks
| Benchmark | Score |
|---|---|
| MMLU | 78.0 |
| HumanEval | 50.0 |
| GSM8K | 86.0 |
| MT-Bench | 82.0 |
Capabilities
Features
✓ Tool Use · ✗ Vision · ✓ Code · ✓ Math · ✓ Reasoning · ✓ Multilingual · ✓ Structured Output
Supported Frameworks
tensorrt-llm · vllm · sglang
Supported Precisions
BF16 (default) · FP8 · INT4
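Any of the listed frameworks can serve the model. As a concrete example, here is a minimal sketch of offline inference with vLLM's Python API; the Hugging Face model ID and the FP8 setting are assumptions used for illustration, not details taken from this page, so check NVIDIA's model card for the exact identifier.

```python
# Minimal offline-inference sketch with vLLM (one of the frameworks listed above).
# The model ID below is an assumption; confirm it against NVIDIA's model card.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/Llama-3_1-Nemotron-51B-Instruct",  # assumed Hugging Face ID
    quantization="fp8",        # FP8 is listed as a supported precision
    max_model_len=8192,        # well below the 131,072-token maximum
    trust_remote_code=True,    # may be needed if the repo ships custom modeling code
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["Summarize grouped-query attention in two sentences."], params)
print(outputs[0].outputs[0].text)
```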
Similar Models
| Model | Parameters | Type | Quality | Price |
|---|---|---|---|---|
| Llama 3.1 70B | 70.6B | dense | 82 | from $0.79/M |
| HelpSteer2 Llama 3.1 70B | 70.6B | dense | 82 | from $0.50/M |
| Llama 3.1 Nemotron 70B Instruct | 70.6B | dense | 83 | from $0.88/M |
| Llama 3.1 Nemotron 70B Reward | 70.6B | dense | 80 | from $0.50/M |
| Llama 3.1 70B Turbo | 70.6B | dense | 50 | from $0.88/M |