Jamba Instruct
AI21 · MoE · 52B parameters · 256,000-token context
Quality: 66.0
Architecture Details
Type: MoE
Total Parameters: 52B
Active Parameters: 12B
Layers: 32
Hidden Dimension: 4,096
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
Vocab Size: 65,536
Total Experts: 16
Active Experts: 2
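
The active-parameter count follows from the expert layout: if the experts are equal-sized and everything outside the expert FFNs is shared across tokens, the two published totals pin down both sizes. A minimal sketch of that arithmetic (the shared/expert split below is inferred from the published 52B/12B figures, not stated by AI21):

```python
# Hedged sketch: assumes equal-sized experts and that all
# non-expert parameters are active for every token.
TOTAL_PARAMS = 52e9      # published total
ACTIVE_PARAMS = 12e9     # published active per token
TOTAL_EXPERTS = 16
ACTIVE_EXPERTS = 2

# total  = shared + TOTAL_EXPERTS  * expert
# active = shared + ACTIVE_EXPERTS * expert
expert = (TOTAL_PARAMS - ACTIVE_PARAMS) / (TOTAL_EXPERTS - ACTIVE_EXPERTS)
shared = TOTAL_PARAMS - TOTAL_EXPERTS * expert

print(f"per-expert params: {expert / 1e9:.2f}B")  # ~2.86B
print(f"shared params:     {shared / 1e9:.2f}B")  # ~6.29B
print(f"active per token:  {(shared + ACTIVE_EXPERTS * expert) / 1e9:.1f}B")  # 12.0B
```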
Memory Requirements
BF16 Weights: 104.0 GB
FP8 Weights: 52.0 GB
INT4 Weights: 26.0 GB
KV-Cache per Token: 65,536 bytes
Activation Estimate: 2.00 GB
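
The weight figures follow directly from the parameter count times bytes per weight. The KV-cache figure matches 2 · layers · KV heads · head dim elements at one byte each; whether that reflects an FP8 cache or a different effective layer count is not stated, so the one-byte assumption in this sketch is mine:

```python
# Hedged sketch of the memory arithmetic behind the figures above.
PARAMS = 52e9
GB = 1e9  # the page uses decimal GB: 52e9 weights * 2 B = 104.0 GB

for name, bytes_per_weight in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name} weights: {PARAMS * bytes_per_weight / GB:.1f} GB")

# KV cache per token: 2 (K and V) * layers * KV heads * head dim elements.
# 65,536 bytes matches 1 byte/element (my assumption, e.g. an FP8 cache;
# a BF16 cache over all 32 layers would be 131,072 bytes/token).
LAYERS, KV_HEADS, HEAD_DIM = 32, 8, 128
kv_bytes = 2 * LAYERS * KV_HEADS * HEAD_DIM * 1
print(f"KV cache per token: {kv_bytes:,} bytes")  # 65,536
```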
Fits on (single-node)
B200 SXM (BF16) · B100 SXM (BF16) · GB200 NVL72 per GPU (BF16) · GB300 NVL72 per GPU (BF16) · H200 SXM (BF16) · H100 SXM (FP8) · H100 PCIe (FP8) · H100 NVL (FP8)
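
The fit list is a plain capacity check: weights at the given precision, plus KV cache for the target context, plus the activation estimate must fit in one GPU's memory. A sketch under that assumption, using approximate publicly quoted per-GPU capacities (the capacities, the example context length, and the zero-overhead assumption are mine, not the page's):

```python
# Hedged single-node fit check. Capacities are approximate public
# figures and may differ from what this page assumes.
GPU_MEM_GB = {
    "H100 SXM": 80, "H100 PCIe": 80, "H100 NVL": 94,
    "H200 SXM": 141, "B200 SXM": 192,
}
WEIGHTS_GB = {"BF16": 104.0, "FP8": 52.0, "INT4": 26.0}
ACTIVATIONS_GB = 2.0
KV_BYTES_PER_TOKEN = 65_536

def fits(gpu: str, precision: str, kv_tokens: int = 16_384) -> bool:
    """True if weights + KV cache + activations fit on a single GPU."""
    need_gb = (WEIGHTS_GB[precision]
               + KV_BYTES_PER_TOKEN * kv_tokens / 1e9
               + ACTIVATIONS_GB)
    return need_gb <= GPU_MEM_GB[gpu]

print(fits("H200 SXM", "BF16"))  # True  (~107 GB needed vs 141 GB)
print(fits("H100 SXM", "BF16"))  # False (~107 GB vs 80 GB)
print(fits("H100 SXM", "FP8"))   # True  (~55 GB vs 80 GB)
```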
GPU Recommendations
B200 SXM (optimal)
BF16 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 560.0 tok/s
Cost/Month: $4,261
Cost/M Tokens: $2.90
B100 SXM (optimal)
BF16 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 560.0 tok/s
Cost/Month: $4,271
Cost/M Tokens: $2.90
GB200 NVL72 per GPU (optimal)
BF16 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 560.0 tok/s
Cost/Month: $6,169
Cost/M Tokens: $4.19
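
Cost per million tokens is monthly instance cost divided by monthly token output. The figures above reproduce if a month is taken as 730 hours (365/12 days) at sustained full throughput; both of those are my assumptions:

```python
# Hedged sketch: $/M tokens from sustained throughput and monthly cost.
# Assumes a 730-hour month and 100% utilization at the quoted rate.
HOURS_PER_MONTH = 730  # 365 days / 12 months

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * HOURS_PER_MONTH * 3600
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_m_tokens(4261, 560):.2f}")  # ~$2.90 (B200 SXM)
print(f"${cost_per_m_tokens(6169, 560):.2f}")  # ~$4.19 (GB200 NVL72)
```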
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| AI21 | $0.50 | $0.70 | Cheapest |
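
To compare API pricing against the self-hosted $/M figures above, blend the input and output rates at your workload's token mix. A small sketch (the 3:1 input:output ratio is an arbitrary example, not a published workload):

```python
# Hedged sketch: blended $/M tokens for a given input:output mix.
def blended_price(in_per_m: float, out_per_m: float,
                  in_frac: float = 0.75) -> float:
    """Weighted $/M tokens; in_frac is the input share of total tokens."""
    return in_per_m * in_frac + out_per_m * (1 - in_frac)

# AI21 at $0.50 in / $0.70 out, assuming a 3:1 input:output mix:
print(f"${blended_price(0.50, 0.70):.3f} per M tokens")  # $0.550
```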
Quality Benchmarks
MMLU: 72.0
HumanEval: 42.0
GSM8K: 68.0
MT-Bench: 75.0
Capabilities
Features
✓ Tool Use · ✗ Vision · ✓ Code · ✓ Math · ✗ Reasoning · ✓ Multilingual · ✓ Structured Output
Supported Frameworks
Supported Precisions
BF16 (default)