NV Embed v2
NVIDIA · dense · 7.85B parameters · 32,768-token context
Parameters
7.85B
Context Window
32K tokens
Architecture
Dense
Best GPU
A30
Cheapest API
$0.01/M
Intelligence Brief
NV Embed v2 is a 7.85B-parameter dense model from NVIDIA, featuring Grouped Query Attention (GQA) with 32 layers and a 4,096-dimension hidden size. With a 32,768-token context window, it supports multilingual inputs. The most cost-effective API deployment is via nvidia at $0.01/M output tokens. For self-hosted inference, the A30 delivers optimal throughput at $332/month.
Architecture Details
Memory Requirements
BF16 Weights
15.7 GB
FP8 Weights
7.8 GB
INT4 Weights
3.9 GB
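These weight footprints follow directly from the parameter count: bytes per parameter times 7.85B. A minimal sketch of that arithmetic (the ~0.80 GB activation overhead quoted in the FAQ below is excluded here):

```python
# Rough weight-memory estimate for NV Embed v2 (7.85B parameters).
# Bytes per parameter: BF16 = 2, FP8 = 1, INT4 = 0.5.
PARAMS = 7.85e9

for precision, bytes_per_param in {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}.items():
    gb = PARAMS * bytes_per_param / 1e9  # decimal GB, as used on this page
    print(f"{precision}: {gb:.2f} GB")   # the page rounds these to 15.7 / 7.8 / 3.9
```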
GPU Compatibility Matrix
NV Embed v2 is compatible with 95% of the GPU configurations evaluated here: 41 GPUs at three precision levels.
GPU Recommendations
| GPU | Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|---|
| A30 | BF16 · 1 GPU · vLLM | 100/100 | 320.9 tok/s | 3.1 ms | 1 ms | $332 | $0.39 |
| — | BF16 · 1 GPU · vLLM | 100/100 | 346.7 tok/s | 2.9 ms | 0 ms | $370 | $0.41 |
| — | BF16 · 1 GPU · vLLM | 100/100 | 321.9 tok/s | 3.1 ms | 1 ms | $180 | $0.21 |
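The recommendations above assume vLLM serving. A minimal offline-inference sketch is below; `nvidia/NV-Embed-v2` is the Hugging Face checkpoint ID, but embedding-task support for this architecture depends on your vLLM version, so treat the `task` flag and the `embed()` call as assumptions to verify:

```python
# Minimal vLLM embedding sketch (assumes a recent vLLM build with
# embedding-task support for NV-Embed-v2; verify against your version).
from vllm import LLM

llm = LLM(
    model="nvidia/NV-Embed-v2",   # Hugging Face model ID
    task="embed",                 # pooling/embedding mode, not generation
    trust_remote_code=True,       # NV-Embed ships custom modeling code
    dtype="bfloat16",             # matches the BF16 rows above
    # tensor_parallel_size=2,     # for the multi-GPU (TP) option below
)

outputs = llm.embed(["What is GQA?", "NV Embed v2 is an embedding model."])
for out in outputs:
    print(len(out.outputs.embedding))  # embedding dimensionality
```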
Deployment Options
API Deployment
nvidia
$0.01/M
output tokens
Single GPU
A30
$332/mo
Min VRAM: 8 GB
Multi-GPU
A4000 x2
241.0 tok/s
TP (tensor parallel) · $323/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| nvidia | $0.01 | $0.01 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| nvidia · Best Value | $0.01 | $0.01 | $0 |
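For reference, NVIDIA's hosted endpoints are OpenAI-compatible, so the API row above can be exercised with the standard client. The base URL and model slug in this sketch are assumptions; check NVIDIA's API catalog for the exact values:

```python
# Hedged sketch of calling NVIDIA's hosted embeddings API via the
# OpenAI-compatible client. Base URL and model slug are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key="YOUR_NVIDIA_API_KEY",
)

resp = client.embeddings.create(
    model="nvidia/nv-embed-v2",  # assumed slug for NV Embed v2
    input=["How much VRAM does this model need?"],
)
print(len(resp.data[0].embedding))
```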
Cost per 1,000 Requests
Short (500 tok)
$0.01
via nvidia
Medium (2K tok)
$0.03
via nvidia
Long (8K tok)
$0.12
via nvidia
Performance Estimates
[Charts: throughput by GPU; VRAM breakdown on A30 at BF16]
Precision Impact

| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 15.7 GB | ~320.9 tok/s |
| FP8 | 7.8 GB | — |
| INT4 | 3.9 GB | — |
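A rough way to use these numbers is to check whether weights plus activations plus a KV-cache budget fit in a given GPU's VRAM. The sketch below takes the 131,072 bytes/token KV-cache and ~0.80 GB activation figures from the FAQ; the 24 GB A30 capacity is an illustrative assumption:

```python
# Rough fit check: weights + activations + KV cache vs. GPU VRAM.
# KV-cache (131,072 B/token) and ~0.80 GB activations are from the FAQ below;
# the 24 GB A30 figure is an assumption used for illustration.
WEIGHTS_GB = {"bf16": 15.7, "fp8": 7.8, "int4": 3.9}
KV_BYTES_PER_TOKEN = 131_072
ACTIVATIONS_GB = 0.80

def max_kv_tokens(vram_gb: float, precision: str) -> int:
    """Tokens of KV cache that fit after weights and activations."""
    free_gb = vram_gb - WEIGHTS_GB[precision] - ACTIVATIONS_GB
    return max(0, int(free_gb * 1e9 / KV_BYTES_PER_TOKEN))

for prec in WEIGHTS_GB:
    print(prec, max_kv_tokens(24.0, prec))  # bf16 on a 24 GB A30 -> ~57k tokens
```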
Similar Models
| Model | Params · Arch | Quality | Pricing |
|---|---|---|---|
| InternLM 2.5 7B | 7.74B · dense | 50 | from $0.20/M |
| Aya 23 8B | 8B · dense | 50 | from $0.60/M |
| DeepSeek R1 Distill 8B | 8B · dense | 88 | from $0.20/M |
| Llama 3 8B | 8B · dense | 63 | from $0.20/M |
| Llama Guard 3 8B | 8B · dense | 50 | from $0.20/M |
Frequently Asked Questions
How much VRAM does NV Embed v2 need for inference?
NV Embed v2 requires approximately 15.7 GB of VRAM at BF16 precision, 7.8 GB at FP8, or 3.9 GB at INT4 quantization. Additional VRAM is needed for the KV cache (131,072 bytes, about 128 KB, per token) and activations (~0.80 GB).
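The 131,072 bytes/token figure is consistent with the architecture stated above if one assumes a Mistral-7B-style GQA layout, i.e. 8 KV heads with a head dimension of 128 (both assumptions, since only the layer count and hidden size are quoted on this page):

```python
# KV-cache bytes per token = 2 (K and V) * layers * kv_heads * head_dim * dtype bytes.
# Layers (32) are stated above; 8 KV heads and head_dim 128 are assumptions
# consistent with a Mistral-7B-style GQA layout.
layers, kv_heads, head_dim, bf16_bytes = 32, 8, 128, 2
print(2 * layers * kv_heads * head_dim * bf16_bytes)  # 131072
```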
What is the best GPU for NV Embed v2?
The top recommended GPU for NV Embed v2 is the A30 using BF16 precision. It achieves approximately 320.9 tokens/sec at an estimated cost of $332/month ($0.39/M tokens). Score: 100/100.
How much does NV Embed v2 inference cost?
NV Embed v2 API inference starts from $0.01/M input tokens and $0.01/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
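For the self-hosted side, the quoted $/M-token figures appear to assume full utilization. A sketch of that arithmetic is below; the ~730-hour month is an assumption that approximately reproduces the A30's quoted $0.39/M:

```python
# Self-hosted cost per million tokens at full utilization.
# The 730-hour month is an assumption that roughly reproduces
# the quoted $0.39/M for the A30 row above.
monthly_cost, tok_per_s = 332.0, 320.9
tokens_per_month = tok_per_s * 730 * 3600        # ~843M tokens
print(monthly_cost / (tokens_per_month / 1e6))   # ~$0.39 per million tokens
```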