Nemotron 340B
NVIDIA · dense · 340B parameters · 131,072 context
Parameters
340B
Context Window
128K tokens
Architecture
Dense
Best GPU
B200 NVL (pair)
Cheapest API
$4.20/M
Quality Score
85/100
Intelligence Brief
Nemotron 340B is a 340-billion-parameter dense model from NVIDIA, featuring Grouped-Query Attention (GQA) across 96 layers with a hidden dimension of 18,432. With a 131,072-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it achieves MMLU 82, HumanEval 57, and GSM8K 92. The most cost-effective API deployment is via nvidia at $4.20/M output tokens; for self-hosted inference, a B200 NVL (pair) delivers optimal throughput at roughly $19,929/month.
Architecture Details
Memory Requirements
BF16 Weights
680.0 GB
FP8 Weights
340.0 GB
INT4 Weights
170.0 GB
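The weight footprints above follow directly from parameter count times bytes per parameter; a minimal sketch (ignoring the small overhead real checkpoints add for embeddings and quantization scales):

```python
# Estimate raw weight memory for a dense model at several precisions.
# Figures match the table above: 340B parameters, decimal GB.
PARAMS = 340e9

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")
```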
Fits on (single GPU) — most practical options first
Fits on (multi-GPU with Tensor Parallelism)
Multi-GPU configurations use Tensor Parallelism (TP) to split model layers across GPUs. Requires NVLink or NVSwitch interconnect for optimal performance.
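Under tensor parallelism the weight shards divide roughly evenly across devices, so the per-GPU footprint is simply the total weight size divided by the TP degree; a minimal sketch:

```python
# Per-GPU weight footprint under tensor parallelism (TP): each GPU holds
# roughly total_weight_gb / tp_degree (plus KV-cache and activations).
def weights_per_gpu_gb(total_weight_gb: float, tp_degree: int) -> float:
    return total_weight_gb / tp_degree

# FP8 weights (340 GB) split across a B200 NVL pair (TP=2):
print(weights_per_gpu_gb(340.0, 2))  # → 170.0 GB per GPU
```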
GPU Compatibility Matrix
Nemotron 340B is compatible with 7% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 2 GPUs · tensorrt-llm | 88/100 | 280.0 tok/s | 3.6ms | 1ms | $19,929 | $27.08 |
| FP8 · 4 GPUs · tensorrt-llm | 83/100 | 280.0 tok/s | 3.6ms | 1ms | $17,044 | $23.16 |
| FP8 · 4 GPUs · tensorrt-llm | 80/100 | 280.0 tok/s | 3.6ms | 1ms | $10,211 | $13.88 |
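The $/M-token figures in these configurations can be reproduced from monthly cost and sustained throughput, assuming 100% utilization and a 730-hour month (365 × 24 / 12):

```python
# Derive $/M output tokens from a monthly GPU cost and sustained throughput.
# Assumes 100% utilization and a 730-hour month (365 * 24 / 12).
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 730 * 3600
    return monthly_usd / tokens_per_month * 1e6

# Top-ranked config: $19,929/month at 280.0 tok/s
print(f"${cost_per_m_tokens(19_929, 280.0):.2f}/M")  # → $27.08/M
```

Real deployments rarely sustain full utilization, so treat these as lower bounds on effective cost per token.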
Deployment Options
API Deployment
nvidia
$4.20/M
output tokens
Single GPU
Requires multi-GPU setup (340 GB VRAM needed)
Multi-GPU
B200 NVL (pair) x2
280.0 tok/s
Tensor Parallelism · $19,929/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| nvidia | $4.20 | $4.20 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| nvidia (Best Value) | $4.20 | $4.20 | $42 |
Cost per 1,000 Requests
Short (500 tok)
$2.94
via nvidia
Medium (2K tok)
$11.76
via nvidia
Long (8K tok)
$42.00
via nvidia
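At nvidia's flat $4.20/M rate, input and output prices are equal, so only total tokens per request matter. The per-request totals below (700, 2,800, and 10,000 tokens) are inferred to reproduce the listed costs, presumably prompt tokens on top of the named completion sizes:

```python
# Cost per 1,000 requests at a flat per-token rate (input price == output
# price, so only the combined token count per request matters).
PRICE_PER_M = 4.20  # $ per million tokens, input or output

def cost_per_1k_requests(total_tokens_per_request: int) -> float:
    return total_tokens_per_request * 1000 * PRICE_PER_M / 1e6

# Token totals inferred from the listed costs (assumption, not documented):
for label, toks in [("short", 700), ("medium", 2_800), ("long", 10_000)]:
    print(f"{label}: ${cost_per_1k_requests(toks):.2f}")
```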
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 NVL (pair), FP8)
Precision Impact
| Precision | Weights/GPU | Throughput |
|---|---|---|
| bf16 | 340.0 GB | — |
| fp8 | 170.0 GB | ~280.0 tok/s |
| int4 | 85.0 GB | — |
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Nemotron 340B
Similar Models
Nemotron Ultra 253B
253B params · dense
Quality: 86
from $6.00/M
Nemotron-3 Super 120B
120B params · dense
Quality: 84
from $2.40/M
Grok-2
314B params · moe
Quality: 78
from $10.00/M
Grok-3
314B params · dense
Quality: 91
from $15.00/M
Snowflake Arctic 128x3B
395B params · moe
Quality: 50
Frequently Asked Questions
How much VRAM does Nemotron 340B need for inference?
Nemotron 340B requires approximately 680.0 GB of VRAM at BF16 precision, 340.0 GB at FP8, or 170.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (2,359,296 bytes, about 2.25 MB, per token) and activations (~8.00 GB).
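A rough total-VRAM estimate combines the figures in this answer, assuming the KV-cache grows linearly with context length:

```python
# Rough serving-VRAM estimate: weights + KV-cache + activations.
# KV-cache bytes/token (2,359,296) and activation overhead (~8 GB)
# are the figures quoted in the answer above.
def total_vram_gb(weight_gb: float, context_tokens: int,
                  kv_bytes_per_token: int = 2_359_296,
                  activations_gb: float = 8.0) -> float:
    kv_gb = kv_bytes_per_token * context_tokens / 1e9
    return weight_gb + kv_gb + activations_gb

# FP8 weights (340 GB) with the full 131,072-token window:
print(round(total_vram_gb(340.0, 131_072), 1))  # → 657.2
```

At the full window the KV-cache alone is roughly 309 GB, larger than activations by two orders of magnitude, so long-context serving budgets are dominated by cache memory rather than weights alone.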
What is the best GPU for Nemotron 340B?
The top recommended GPU for Nemotron 340B is the B200 NVL (pair), running two GPUs at FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $19,929/month ($27.08/M tokens). Score: 88/100.
How much does Nemotron 340B inference cost?
Nemotron 340B API inference starts from $4.20/M input tokens and $4.20/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.