NV EmbedQA E5 v5

NVIDIA · dense · 0.33B parameters · 512 context

Quality
50.0

Parameters

0.33B

Context Window

512 tokens

Architecture

Dense

Best GPU

B200 SXM

Cheapest API

$0.01/M

Intelligence Brief

NV EmbedQA E5 v5 is a 0.33B parameter dense model from NVIDIA, featuring Multi-Head Attention (MHA) with 24 layers and a 1,024-dimensional hidden state. With a 512-token context window, it supports multilingual input. The most cost-effective API deployment is via nvidia-nim at $0.01/M output tokens. For self-hosted inference, the B200 SXM delivers optimal throughput at an estimated $4261/month.

Architecture Details

Type: Dense
Total Parameters: 0.33B
Active Parameters: 0.33B
Layers: 24
Hidden Dimension: 1,024
Attention Heads: 16
KV Heads: 16
Head Dimension: 64
Vocab Size: 30,522
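
As a quick consistency check, the listed head dimension and parameter count can be reproduced from the other numbers in this table. The sketch below assumes a standard E5/BERT-style encoder with a 4x FFN expansion (intermediate size 4,096), which is not stated on this page.

```python
# Rough sanity check of the spec-sheet numbers above. The 4x FFN expansion
# (intermediate size 4096) is an assumption typical of E5/BERT-style encoders.
layers, hidden, heads, vocab, max_pos = 24, 1024, 16, 30522, 512
ffn = 4 * hidden                                  # assumed intermediate size

head_dim = hidden // heads
print(head_dim)                                   # 64, matches the table

embed_params = (vocab + max_pos) * hidden         # token + position embeddings
attn_params = 4 * hidden * hidden                 # Q, K, V, O projections
ffn_params = 2 * hidden * ffn                     # up + down projections
total = embed_params + layers * (attn_params + ffn_params)
print(f"{total / 1e9:.2f}B parameters")           # ~0.33B, matches the table
```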

Memory Requirements

BF16 Weights

0.7 GB

FP8 Weights

0.3 GB

INT4 Weights

0.2 GB

KV-Cache per Token: 12,288 bytes
Activation Estimate: 0.10 GB
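
The weight figures above follow from the parameter count times the bytes per parameter at each precision; the per-token KV-cache and activation figures are taken as given from this page. A minimal sketch of the arithmetic:

```python
# Minimal VRAM estimate from the figures on this page. The per-token KV-cache
# size (12,288 bytes) and the activation estimate (0.10 GB) are taken as given.
PARAMS = 0.33e9
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}
KV_BYTES_PER_TOKEN = 12_288
ACTIVATIONS_GB = 0.10

def vram_estimate_gb(precision: str, context_tokens: int = 512) -> float:
    weights = PARAMS * BYTES_PER_PARAM[precision] / 1e9
    kv_cache = context_tokens * KV_BYTES_PER_TOKEN / 1e9
    return weights + kv_cache + ACTIVATIONS_GB

for p in ("bf16", "fp8", "int4"):
    print(p, round(vram_estimate_gb(p), 2), "GB")
# bf16 ~0.77 GB, fp8 ~0.44 GB, int4 ~0.27 GB at the full 512-token context
```

At every precision the total stays well under 1 GB even at maximum context.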

GPU Compatibility Matrix

NV EmbedQA E5 v5 is compatible with 100% of GPU configurations across 41 GPUs at 3 precision levels.

Fit is rated at three precision levels: BF16 (full), FP8 (half), and INT4 (quarter).

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94GB (per GPU pair) 188 GB · H200 SXM 141 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
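
The underlying check behind a matrix like this can be approximated by comparing each card's VRAM against the model's memory needs. A rough sketch using the figures from the Memory Requirements section; the 10% runtime-overhead factor is an assumption, not a number from this page.

```python
# Rough fit check per GPU: the model fits when weights + KV-cache + activations
# (plus an assumed 10% runtime overhead) stay below the card's VRAM.
WEIGHTS_GB = {"bf16": 0.7, "fp8": 0.3, "int4": 0.2}
KV_GB = 512 * 12_288 / 1e9          # KV-cache at the full 512-token context
ACT_GB = 0.10
GPUS_GB = {"B200 NVL (pair)": 360, "H200 SXM": 141, "L40S": 48, "A16": 64}

def fits(gpu: str, precision: str) -> bool:
    required = (WEIGHTS_GB[precision] + KV_GB + ACT_GB) * 1.10  # assumed overhead
    return required <= GPUS_GB[gpu]

print(all(fits(g, "bf16") for g in GPUS_GB))  # True: a 0.33B model fits every listed GPU
```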

GPU Recommendations

B200 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

Score

83/100

Throughput

3.5K tok/s

Latency (ITL)

0.3ms

Est. TTFT

0ms

Cost/Month

$4261

Cost/M Tokens

$0.46

B100 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

Score

83/100

Throughput

3.5K tok/s

Latency (ITL)

0.3ms

Est. TTFT

0ms

Cost/Month

$4271

Cost/M Tokens

$0.46

GB200 NVL72 (per GPU) · optimal

FP8 · 1 GPU · tensorrt-llm

Score

83/100

Throughput

3.5K tok/s

Latency (ITL)

0.3ms

Est. TTFT

0ms

Cost/Month

$6169

Cost/M Tokens

$0.67

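The cost-per-million-token figures in these cards follow from the monthly GPU cost divided by monthly token throughput, assuming the card runs at the estimated rate around the clock. A minimal sketch:

```python
# Cost per million tokens from monthly GPU cost and sustained throughput.
# Assumes the card runs at the estimated rate 24/7, which is optimistic.
def cost_per_million_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * 3600 * 24 * 30
    return monthly_usd / (tokens_per_month / 1e6)

print(round(cost_per_million_tokens(4261, 3500), 2))  # ~0.47, close to the $0.46 shown
```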

Deployment Options

API

API Deployment

nvidia-nim

$0.01/M

output tokens

Self-Hosted

Single GPU

B200 SXM

$4261/mo

Min VRAM: <1 GB

Scale

Multi-GPU

B200 SXM

3.5K tok/s

Best available config
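
Both the hosted nvidia-nim endpoint and a self-hosted NIM container expose an OpenAI-compatible embeddings API. The sketch below is a minimal client call; the base URL, model id, environment variable name, and the input_type field are assumptions based on NVIDIA's published NIM conventions, so check the NIM documentation for exact values.

```python
# Minimal sketch of querying an OpenAI-compatible embeddings endpoint, either
# NVIDIA's hosted NIM API or a self-hosted NIM container on localhost:8000.
# Base URL, model id, and "input_type" are assumptions; verify against NIM docs.
import os
import requests

BASE_URL = "https://integrate.api.nvidia.com/v1"  # or "http://localhost:8000/v1" when self-hosting
resp = requests.post(
    f"{BASE_URL}/embeddings",
    headers={"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"},
    json={
        "model": "nvidia/nv-embedqa-e5-v5",
        "input": ["What is the maximum context length of this model?"],
        "input_type": "query",                    # "passage" when embedding documents
    },
    timeout=30,
)
resp.raise_for_status()
print(len(resp.json()["data"][0]["embedding"]))   # expected 1,024, per the hidden dimension above
```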

API Pricing Comparison

Provider | Input $/M | Output $/M | Badges
nvidia-nim | $0.01 | $0.01 | Cheapest

Cost Analysis

Provider | Input $/M | Output $/M | ~Monthly Cost
nvidia-nim (Best Value) | $0.01 | $0.01 | $0

Cost per 1,000 Requests

Short (500 tok)

$0.00

via nvidia-nim

Medium (2K tok)

$0.02

via nvidia-nim

Long (8K tok)

$0.06

via nvidia-nim

Performance Estimates

Throughput by GPU

B200 SXM
3.5K tok/s
B100 SXM
3.5K tok/s
GB200 NVL72 (per GPU)
3.5K tok/s

VRAM Breakdown (B200 SXM, FP8)

Weights: 0.3 GB · KV-Cache: 0.8 GB · Activations: 0.8 GB · Overhead: 0.0 GB

Precision Impact

bf16

0.7 GB

weights/GPU

fp8

0.3 GB

weights/GPU

~3.5K tok/s

int4

0.2 GB

weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

tensorrt-llm · vllm

Supported Precisions

BF16 (default) · FP8 · INT4


Frequently Asked Questions

How much VRAM does NV EmbedQA E5 v5 need for inference?

NV EmbedQA E5 v5 requires approximately 0.7 GB of VRAM at BF16 precision, 0.3 GB at FP8, or 0.2 GB at INT4 quantization. Additional VRAM is needed for KV-cache (12,288 bytes per token) and activations (~0.10 GB).
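
For a concrete total at the full 512-token context, the page's own figures sum as follows (FP8 weights, no runtime overhead included):

```python
# Total VRAM at FP8 with the full 512-token context, summing the page's figures.
weights_gb = 0.3
kv_gb = 512 * 12_288 / 1e9      # ~0.006 GB of KV-cache at maximum context
activations_gb = 0.10
print(round(weights_gb + kv_gb + activations_gb, 2))  # ~0.41 GB
```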

What is the best GPU for NV EmbedQA E5 v5?

The top recommended GPU for NV EmbedQA E5 v5 is the B200 SXM using FP8 precision. It achieves approximately 3.5K tokens/sec at an estimated cost of $4261/month ($0.46/M tokens). Score: 83/100.

How much does NV EmbedQA E5 v5 inference cost?

NV EmbedQA E5 v5 API inference starts from $0.01/M input tokens and $0.01/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
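
As a rough stand-in for that calculator, a break-even comparison falls out of the two figures above: the flat $0.01/M API price versus the $4261/month B200 SXM at 3.5K tok/s. A minimal sketch, assuming sustained utilization:

```python
# Rough API-vs-self-hosted comparison using the figures on this page.
# Assumes a flat $0.01/M API price and a $4261/month B200 SXM at 3.5K tok/s.
import math

API_PRICE_PER_M = 0.01          # $ per million tokens (input and output priced the same)
GPU_MONTHLY_USD = 4261          # B200 SXM estimate from this page
GPU_TOKENS_PER_SEC = 3500

def monthly_cost(tokens_per_month: float) -> dict:
    api = tokens_per_month / 1e6 * API_PRICE_PER_M
    gpu_capacity = GPU_TOKENS_PER_SEC * 3600 * 24 * 30
    self_hosted = math.ceil(tokens_per_month / gpu_capacity) * GPU_MONTHLY_USD
    return {"api_usd": round(api, 2), "self_hosted_usd": self_hosted}

print(monthly_cost(5e9))  # 5B tokens/month: API ~$50 vs one GPU at $4261
```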