Command R+
Cohere · dense · 104B parameters · 131,072-token context
Parameters
104B
Context Window
128K tokens
Architecture
Dense
Best GPU
B200 SXM
Cheapest API
$2.00/M
Quality Score
68/100
Intelligence Brief
Command R+ is a 104B-parameter dense model from Cohere, featuring Multi-Head Attention (MHA) with 64 layers and a hidden dimension of 12,288. Its 131,072-token context window supports tools, structured output, code, and multilingual tasks. On standardized benchmarks it scores MMLU 80, HumanEval 50, and GSM8K 88. The most cost-effective API deployment is via together at $2.00/M output tokens; for self-hosted inference, a single B200 SXM delivers optimal throughput at $4261/month.
Architecture Details
Multi-Head Attention (MHA) · 64 layers · 12,288 hidden dimension
Memory Requirements
| Precision | Weights |
|---|---|
| BF16 | 208.0 GB |
| FP8 | 104.0 GB |
| INT4 | 52.0 GB |
GPU Compatibility Matrix
Command R+ is compatible with 21% of the evaluated GPU configurations (41 GPUs × 3 precision levels).
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| B200 SXM · FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $4261 | $5.79 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $4271 | $5.80 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 280.0 tok/s | 3.6 ms | 1 ms | $6169 | $8.38 |
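The Cost/M Tokens column appears to be monthly cost divided by tokens generated at the quoted throughput. A sketch that reproduces the top card's $5.79, assuming a 730-hour month at full utilization (both are assumptions, not stated on this page):

```python
# Back out $/M output tokens from monthly GPU cost and sustained throughput.
HOURS_PER_MONTH = 730  # assumed billing convention, not stated on this page

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * HOURS_PER_MONTH * 3600
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_m_tokens(4261, 280.0):.2f}")  # $5.79, matching the B200 card

# Sanity check: single-stream throughput ~ 1 / inter-token latency
print(f"{1 / 0.0036:.0f} tok/s")  # ~278, consistent with the quoted 280.0
```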
Deployment Options
API Deployment
together
$2.00/M
output tokens
Single GPU
B200 SXM
$4261/mo
Min VRAM: 104 GB
Multi-GPU
H100 SXM x2
280.0 tok/s
Tensor parallel · $3587/mo
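A rough sizing sketch for the single- vs. multi-GPU split above: count how many GPUs are needed to hold the 104 GB of FP8 weights. The 20% headroom for KV-cache and activations is an illustrative assumption; the VRAM sizes are the GPUs' published capacities:

```python
# Minimum GPU count to fit the 104 GB of FP8 weights, leaving headroom
# for KV-cache and activations (the 20% figure is an assumption).
import math

WEIGHTS_GB = 104.0  # FP8 weights, from the table above

def min_gpus(vram_gb: float, headroom: float = 0.20) -> int:
    return math.ceil(WEIGHTS_GB / (vram_gb * (1 - headroom)))

print(min_gpus(192))  # B200 SXM, 192 GB -> 1 GPU
print(min_gpus(80))   # H100 SXM, 80 GB -> 2 GPUs, matching the card above
```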
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $2.00 | $2.00 | Cheapest |
| cohere | $2.50 | $10.00 | |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $2.00 | $2.00 | $20 |
| cohere | $2.50 | $10.00 | $63 |
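The ~Monthly Cost column is consistent with a blended workload of about 5M input and 5M output tokens per month; that volume is an assumption chosen to reproduce the table, not a figure from the providers:

```python
# Monthly API cost under an assumed workload of 5M input + 5M output
# tokens per month (volume chosen to reproduce the table above).

def monthly_cost(in_per_m: float, out_per_m: float,
                 in_mtok: float = 5.0, out_mtok: float = 5.0) -> float:
    return in_per_m * in_mtok + out_per_m * out_mtok

print(f"${monthly_cost(2.00, 2.00):.2f}")   # together: $20.00
print(f"${monthly_cost(2.50, 10.00):.2f}")  # cohere: $62.50, shown as ~$63
```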
Cost per 1,000 Requests
| Request size | Cost | Provider |
|---|---|---|
| Short (500 tok) | $1.40 | together |
| Medium (2K tok) | $5.60 | together |
| Long (8K tok) | $20.00 | together |
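Working backwards from together's flat $2.00/M rate, these figures imply roughly 700, 2,800, and 10,000 total tokens per request, so the size labels appear to count output tokens with input billed on top. A quick check:

```python
# Tokens per request implied by the per-1,000-request costs, at together's
# flat $2.00/M rate (input and output are priced the same there).
PRICE_PER_TOKEN = 2.00 / 1e6  # $/token via together

for label, cost_per_1k in [("short", 1.40), ("medium", 5.60), ("long", 20.00)]:
    tokens = cost_per_1k / 1000 / PRICE_PER_TOKEN
    print(f"{label}: ~{tokens:.0f} tokens/request (input + output)")
# short: ~700, medium: ~2800, long: ~10000
```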
Performance Estimates
[Chart: Throughput by GPU]
[Chart: VRAM breakdown on B200 SXM at FP8]
Precision Impact
| Precision | Weights/GPU | Throughput |
|---|---|---|
| BF16 | 208.0 GB | — |
| FP8 | 104.0 GB | ~280.0 tok/s |
| INT4 | 52.0 GB | — |
Quality Benchmarks
| Benchmark | Score |
|---|---|
| MMLU | 80 |
| HumanEval | 50 |
| GSM8K | 88 |
Capabilities
Features: tools, structured output, code, multilingual
Supported Frameworks
Supported Precisions: BF16, FP8, INT4
Similar Models
| Model | Parameters | Architecture | Quality | Cheapest API |
|---|---|---|---|---|
| Command R | 35B | dense | 68 | from $0.50/M |
| Command R (August 2024) | 35B | dense | 68 | from $0.60/M |
| Yi-Large | 102.6B | moe | 74 | from $3.00/M |
| Inflection 3 | 100B | dense | 74 | from $15.00/M |
| YaLM 100B | 100B | dense | 50 | — |
Frequently Asked Questions
How much VRAM does Command R+ need for inference?
Command R+ requires approximately 208.0 GB of VRAM at BF16 precision, 104.0 GB at FP8, or 52.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (3,145,728 bytes, i.e. ~3 MB, per token) and activations (~3.00 GB).
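The per-token figure matches the architecture numbers in the brief (MHA, 64 layers, hidden dimension 12,288) with 2-byte cache entries. A sketch, including a rough total at the full context window:

```python
# KV-cache bytes per token from the architecture figures in the brief:
# 2 tensors (K and V) x 64 layers x 12,288 hidden dim x 2 bytes (FP16/BF16
# cache entries -- an assumption consistent with the quoted figure).
LAYERS, HIDDEN, CACHE_BYTES = 64, 12288, 2

kv_per_token = 2 * LAYERS * HIDDEN * CACHE_BYTES
print(kv_per_token)  # 3145728 bytes, i.e. ~3 MB/token

# Rough total at FP8 weights if the full 131,072-token context is cached:
kv_gb = kv_per_token * 131072 / 1e9
print(f"{104.0 + kv_gb + 3.0:.0f} GB")  # ~519 GB; real deployments use
# shorter contexts or a quantized KV-cache to fit on fewer GPUs
```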
What is the best GPU for Command R+?
The top recommended GPU for Command R+ is the B200 SXM using FP8 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $4261/month ($5.79/M tokens). Score: 100/100.
How much does Command R+ inference cost?
Command R+ API inference starts from $2.00/M input tokens and $2.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.