Jamba Instruct
AI21 · moe · 52B parameters · 256,000 context
Parameters
52B
Context Window
256K tokens
Architecture
MoE
Best GPU
B200 SXM
Cheapest API
$0.70/M
Quality Score
66/100
Intelligence Brief
Jamba Instruct is a 52B-parameter Mixture-of-Experts model (16 experts, 2 active per token) from AI21, using Grouped Query Attention (GQA) across 32 layers with a 4,096 hidden dimension. With a 256,000-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 72, HumanEval 42, and GSM8K 68. The most cost-effective API deployment is via ai21 at $0.70/M output tokens; for self-hosted inference, the B200 SXM delivers the best throughput at an estimated $4261/month.
Architecture Details
Memory Requirements
BF16 Weights
104.0 GB
FP8 Weights
52.0 GB
INT4 Weights
26.0 GB
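The weight figures above follow directly from the parameter count and the bytes used per parameter at each precision. A minimal sketch (Python; the helper name is ours, and figures are decimal GB):

```python
# Estimate weight memory from parameter count and precision.
# 1 GB = 1e9 bytes, matching the figures listed above.

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate weight memory in GB for a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

params = 52e9  # Jamba Instruct: 52B parameters
for p in ("bf16", "fp8", "int4"):
    print(f"{p}: {weight_memory_gb(params, p):.1f} GB")
# bf16: 104.0 GB, fp8: 52.0 GB, int4: 26.0 GB
```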
GPU Compatibility Matrix
Jamba Instruct is compatible with 40% of the GPU configurations evaluated, spanning 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| BF16 · 1 GPU · tensorrt-llm (B200 SXM) | 100/100 | 560.0 tok/s | 1.8ms | 0ms | $4261 | $2.90 |
| BF16 · 1 GPU · tensorrt-llm | 100/100 | 560.0 tok/s | 1.8ms | 0ms | $4271 | $2.90 |
| BF16 · 1 GPU · tensorrt-llm | 100/100 | 560.0 tok/s | 1.8ms | 0ms | $6169 | $4.19 |
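The Cost/M Tokens column can be roughly reproduced from Cost/Month and Throughput if you assume the GPU is saturated around the clock. A back-of-the-envelope sketch (the 100% utilization assumption is ours; real utilization will be lower, so real $/M tokens will be higher):

```python
# Convert a monthly GPU cost and sustained throughput into $/M tokens,
# assuming 100% utilization over a 30-day month (an optimistic assumption).

def cost_per_million_tokens(monthly_cost_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * 3600 * 24 * 30
    return monthly_cost_usd / (tokens_per_month / 1e6)

print(cost_per_million_tokens(4261, 560.0))  # ~2.94, close to the $2.90 shown
print(cost_per_million_tokens(6169, 560.0))  # ~4.25, close to the $4.19 shown
```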
Deployment Options
API Deployment
ai21
$0.70/M
output tokens
Single GPU
B200 SXM
$4261/mo
Min VRAM: 52 GB
Multi-GPU
H20 x2
560.0 tok/s
Tensor parallel (TP) · $1879/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| ai21 | $0.50 | $0.70 | Cheapest |
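For API access, a minimal call sketch assuming AI21's Python SDK chat-completions interface and the `jamba-instruct` model ID; the exact import path and model ID can differ between SDK versions, so check AI21's current documentation:

```python
import os

from ai21 import AI21Client
from ai21.models.chat import ChatMessage

# Assumes AI21_API_KEY is set in the environment.
client = AI21Client(api_key=os.environ["AI21_API_KEY"])

response = client.chat.completions.create(
    model="jamba-instruct",
    messages=[ChatMessage(role="user", content="Summarize this contract clause: ...")],
    max_tokens=200,
)
print(response.choices[0].message.content)
```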
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| ai21 (Best Value) | $0.50 | $0.70 | $6 |
Cost per 1,000 Requests
Short (500 tok)
$0.39
via ai21
Medium (2K tok)
$1.56
via ai21
Long (8K tok)
$5.40
via ai21
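These per-request figures follow from the per-token prices; the exact input/output split behind them isn't stated on this page, so the split in the sketch below is an assumption for illustration:

```python
# Cost per 1,000 requests from ai21's per-token pricing.
# The input/output split is an assumed example, not the page's exact scenario.

INPUT_PRICE_PER_M = 0.50   # $/M input tokens (ai21)
OUTPUT_PRICE_PER_M = 0.70  # $/M output tokens (ai21)

def cost_per_1000_requests(input_tokens: int, output_tokens: int) -> float:
    per_request = (input_tokens * INPUT_PRICE_PER_M
                   + output_tokens * OUTPUT_PRICE_PER_M) / 1e6
    return per_request * 1000

# Example: 2,000 input tokens and 500 output tokens per request (assumed split).
print(f"${cost_per_1000_requests(2000, 500):.2f} per 1,000 requests")  # $1.35
```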
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, BF16)
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Jamba Instruct
Self-Hosted Infrastructure
Similar Models
Jamba 1.5 Mini
52B params · hybrid
Quality: 50
from $0.40/M
Llama 3.1 Nemotron 51B
51B params · dense
Quality: 78
from $0.40/M
Amazon Nova Pro
50B params · dense
Quality: 50
from $3.20/M
Gemini 2.0 Flash
50B params · moe
Quality: 80
from $0.40/M
Gemini 1.5 Flash
50B params · moe
Quality: 75
from $0.30/M
Frequently Asked Questions
How much VRAM does Jamba Instruct need for inference?
Jamba Instruct requires approximately 104.0 GB of VRAM at BF16 precision, 52.0 GB at FP8, or 26.0 GB at INT4 quantization. Additional VRAM is needed for KV-cache (65536 bytes per token) and activations (~2.00 GB).
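Putting those pieces together, a rough total-VRAM sketch using the per-token KV-cache figure quoted above (engine overhead, batch size, and KV-cache precision will shift real numbers):

```python
# Back-of-the-envelope VRAM estimate for Jamba Instruct:
# weights + KV cache (65,536 bytes/token, per this page) + ~2 GB activations.

KV_BYTES_PER_TOKEN = 65_536
ACTIVATION_GB = 2.0
WEIGHTS_GB = {"bf16": 104.0, "fp8": 52.0, "int4": 26.0}

def vram_estimate_gb(precision: str, context_tokens: int) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens / 1e9
    return WEIGHTS_GB[precision] + kv_gb + ACTIVATION_GB

# Example: FP8 weights with a 128K-token context held in the KV cache.
print(f"{vram_estimate_gb('fp8', 128_000):.1f} GB")  # ~62.4 GB
```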
What is the best GPU for Jamba Instruct?
The top recommended GPU for Jamba Instruct is the B200 SXM using BF16 precision. It achieves approximately 560.0 tokens/sec at an estimated cost of $4261/month ($2.90/M tokens). Score: 100/100.
How much does Jamba Instruct inference cost?
Jamba Instruct API inference starts from $0.50/M input tokens and $0.70/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
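As a rough rule of thumb for the API vs. self-hosted decision, here is our own break-even sketch using the B200 SXM estimate from this page; it ignores input-token costs, ops overhead, and under-utilization, all of which favor the API at lower volumes:

```python
# Monthly output volume where self-hosting the B200 SXM (~$4261/month)
# matches ai21's $0.70/M output-token API price.

API_PRICE_PER_M = 0.70        # $/M output tokens (ai21)
SELF_HOSTED_MONTHLY = 4261.0  # $/month, B200 SXM estimate from this page

break_even_m_tokens = SELF_HOSTED_MONTHLY / API_PRICE_PER_M
print(f"~{break_even_m_tokens:,.0f}M output tokens/month")  # ~6,087M (~6.1B)
```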