
Command A

Cohere · dense · 111B parameters · 256K context

Quality
81.0

Parameters

111B

Context Window

256K tokens

Architecture

Dense

Best GPU

H20

Cheapest API

$10.00/M

Quality Score

81/100

Intelligence Brief

Command A is a 111B-parameter dense model from Cohere, featuring grouped-query attention (GQA) with 72 layers and a hidden dimension of 10,240. With a 256,000-token context window, it supports tool use, structured output, code, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 83, HumanEval 55, and GSM8K 88. The most cost-effective API deployment is via cohere at $10.00/M output tokens. For self-hosted inference, the H20 delivers optimal throughput at $3758/month.

Architecture Details

Type: Dense
Total Parameters: 111B
Active Parameters: 111B
Layers: 72
Hidden Dimension: 10,240
Attention Heads: 80
KV Heads: 10
Head Dimension: 128
Vocab Size: 256,000

Memory Requirements

BF16 Weights

222.0 GB

FP8 Weights

111.0 GB

INT4 Weights

55.5 GB

KV-Cache per Token: 184,320 bytes
Activation Estimate: 3.50 GB
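
As a rough cross-check, weight memory is just parameter count times bytes per parameter, and per-token KV-cache scales with layers × KV heads × head dimension. The sketch below reproduces the table's figures; note that matching the 184,320-byte KV number requires assuming a 1-byte-per-element (e.g. FP8) cache, since the page does not state the KV precision.

```python
# Minimal VRAM estimator for Command A, using the spec-table figures above.
PARAMS = 111e9   # total parameters (dense, so all are active)
LAYERS = 72
KV_HEADS = 10
HEAD_DIM = 128

def weight_gb(bytes_per_param: float) -> float:
    """Weight memory in GB at a given precision."""
    return PARAMS * bytes_per_param / 1e9

def kv_bytes_per_token(bytes_per_elem: int) -> int:
    """One K and one V vector per layer: 2 * layers * kv_heads * head_dim."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem

print(weight_gb(2.0))          # BF16 -> 222.0 GB
print(weight_gb(1.0))          # FP8  -> 111.0 GB
print(weight_gb(0.5))          # INT4 ->  55.5 GB
print(kv_bytes_per_token(1))   # assumed FP8 KV-cache -> 184320 bytes/token
```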

GPU Compatibility Matrix

Command A is compatible with 21% of GPU configurations across 41 GPUs at three precision levels; a simple fit heuristic is sketched after the matrix below.

Precisions shown: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL (per GPU pair, 94 GB each) 188 GB · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
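
The per-cell fit ratings were rendered as colors on the original page and do not survive in text. One plausible way to reproduce such a rating, sketched below, is to compare required weight memory at each precision against a GPU's VRAM; the thresholds here are illustrative assumptions, not the page's published scoring rules.

```python
# Hypothetical fit classifier (thresholds are illustrative assumptions).
WEIGHT_GB = {"BF16": 222.0, "FP8": 111.0, "INT4": 55.5}

def fit_label(gpu_vram_gb: float, precision: str) -> str:
    """Classify how comfortably the weights fit in a single GPU's VRAM."""
    ratio = WEIGHT_GB[precision] / gpu_vram_gb
    if ratio > 1.0:
        return "No fit"        # weights alone exceed VRAM
    if ratio > 0.95:
        return "Very tight"
    if ratio > 0.85:
        return "Tight"
    if ratio > 0.70:
        return "Moderate"
    if ratio > 0.50:
        return "Good"
    return "Excellent"

print(fit_label(96, "INT4"))   # H20 at INT4: 55.5/96 ~ 0.58 -> "Good"
print(fit_label(96, "FP8"))    # H20 at FP8: 111/96 > 1     -> "No fit"
```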

GPU Recommendations

H20 · optimal

BF16 · 4 GPUs · tensorrt-llm

90/100

score

Throughput

280.0 tok/s

Latency (ITL)

3.6ms

Est. TTFT

1ms

Cost/Month

$3758

Cost/M Tokens

$5.11

B200 NVL (pair) · optimal

BF16 · 1 GPU · tensorrt-llm

88/100

score

Throughput

280.0 tok/s

Latency (ITL)

3.6ms

Est. TTFT

1ms

Cost/Month

$9965

Cost/M Tokens

$13.54

B200 SXM · optimal

BF16 · 2 GPUs · tensorrt-llm

83/100

score

Throughput

280.0 tok/s

Latency (ITL)

3.6ms

Est. TTFT

1ms

Cost/Month

$8522

Cost/M Tokens

$11.58

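The $/M-token figures above follow directly from monthly cost and sustained throughput. A minimal reproduction, assuming an average month of 365/12 ≈ 30.4 days and 100% utilization (the page does not state its exact assumptions):

```python
# Reproduce cost per million tokens from monthly cost and throughput.
# Assumes continuous, fully utilized serving over a 30.4-day month.
SECONDS_PER_MONTH = 365 / 12 * 24 * 3600   # ~2.63M seconds

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / tokens_per_month * 1e6

print(round(cost_per_m_tokens(3758, 280.0), 2))   # H20 x4          -> 5.11
print(round(cost_per_m_tokens(9965, 280.0), 2))   # B200 NVL (pair) -> 13.54
print(round(cost_per_m_tokens(8522, 280.0), 2))   # B200 SXM x2     -> 11.58
```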

Deployment Options

API

API Deployment

cohere

$10.00/M

output tokens

Self-Hosted

Single GPU

B200 NVL (pair)

$9965/mo

Min VRAM: 111 GB

Scale

Multi-GPU

H20 x4

280.0 tok/s

4-way TP · $3758/mo
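
For a multi-GPU setup like the H20 x4 row, the model is sharded with tensor parallelism across the four devices. Below is a minimal sketch using vLLM's Python API; the page's recommended framework is tensorrt-llm, and vLLM is shown here only because its API is compact. The Hugging Face model ID is an assumption to verify before use.

```python
# Sketch: serve Command A across 4 GPUs with tensor parallelism via vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="CohereLabs/c4ai-command-a-03-2025",  # assumed HF model ID
    tensor_parallel_size=4,                     # shard across 4x H20
    dtype="bfloat16",                           # matches the BF16 config
)

out = llm.generate(
    ["Summarize grouped-query attention in one sentence."],
    SamplingParams(max_tokens=128),
)
print(out[0].outputs[0].text)
```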

API Pricing Comparison

Provider | Input $/M | Output $/M | Badges
cohere | $2.50 | $10.00 | Cheapest

Cost Analysis

Provider | Input $/M | Output $/M | ~Monthly Cost
cohere (Best Value) | $2.50 | $10.00 | $63

Cost per 1,000 Requests

Short (500 tok)

$3.25

via cohere

Medium (2K tok)

$13.00

via cohere

Long (8K tok)

$40.00

via cohere
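
The page does not state the input/output split behind these bucket totals. The splits below are inferred to reproduce the listed figures exactly at cohere's rates; treat them as assumptions, not published parameters.

```python
# Cost per 1,000 requests at cohere's listed rates.
# The input/output splits are inferred from the page's totals.
INPUT_RATE = 2.50 / 1e6    # $ per input token
OUTPUT_RATE = 10.00 / 1e6  # $ per output token

def cost_per_1k_requests(in_tok: int, out_tok: int) -> float:
    per_request = in_tok * INPUT_RATE + out_tok * OUTPUT_RATE
    return 1000 * per_request

print(cost_per_1k_requests(500, 200))    # short  -> $3.25
print(cost_per_1k_requests(2000, 800))   # medium -> $13.00
print(cost_per_1k_requests(8000, 2000))  # long   -> $40.00
```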

Performance Estimates

Throughput by GPU

H20
280.0 tok/s
B200 NVL (pair)
280.0 tok/s
B200 SXM
280.0 tok/s

VRAM Breakdown (per GPU, H20 x4, BF16)

Weights: 55.5 GB
KV-Cache: 6.0 GB
Activations: 28.0 GB
Overhead: 2.8 GB
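
As a sanity check, these per-GPU components should sum to less than the H20's 96 GB:

```python
# Per-GPU VRAM budget check for the H20 x4 BF16 configuration.
weights, kv_cache, activations, overhead = 55.5, 6.0, 28.0, 2.8
total = weights + kv_cache + activations + overhead
print(total)          # 92.3 GB per GPU
print(total <= 96.0)  # True: fits within one H20's 96 GB
```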

Quality Benchmarks

Overall: Above Average (89th percentile across all models)

MMLU: 83.0 · Average (68th percentile)
HumanEval: 55.0 · Average (57th percentile)
GSM8K: 88.0 · Average (62nd percentile)
MT-Bench: 84.0 · Bottom 25% (0th percentile)

Capabilities

Features

Tool Use · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

tensorrt-llm (used in all recommended configurations above)
Supported Precisions

BF16 (default)

Similar Models

Llama 4 Scout

109B params · moe

Quality: 73

from $0.30/M

Larger context · Lower quality · Cheaper

Command R+

104B params · dense

Quality: 68

from $2.00/M

Lower quality · Cheaper

Yi-Large

102.6B params · moe

Quality: 74

from $3.00/M

Smaller context · Lower quality · Cheaper

Inflection 3

100B params · dense

Quality: 74

from $15.00/M

Smaller context · Lower quality

Frequently Asked Questions

How much VRAM does Command A need for inference?

Command A requires approximately 222.0 GB of VRAM at BF16 precision, 111.0 GB at FP8, or 55.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (184,320 bytes per token) and activations (~3.50 GB).

What is the best GPU for Command A?

The top recommended GPU for Command A is the H20 (x4) using BF16 precision. It achieves approximately 280.0 tokens/sec at an estimated cost of $3758/month ($5.11/M tokens). Score: 90/100.

How much does Command A inference cost?

Command A API inference starts from $2.50/M input tokens and $10.00/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.