Mistral Nemo 12B
Mistral AI · dense · 12B parameters · 131,072-token context
Parameters
12B
Context Window
128K tokens
Architecture
Dense
Best GPU
A100 40GB SXM
Cheapest API
$0.13/M
Quality Score
62/100
Intelligence Brief
Mistral Nemo 12B is a 12B-parameter dense model from Mistral AI, featuring Grouped Query Attention (GQA) with 40 layers and a hidden dimension of 5,120. With a 131,072-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 68, HumanEval 38, and GSM8K 65. The most cost-effective API deployment is via deepinfra at $0.13/M output tokens; for self-hosted inference, an A100 40GB SXM delivers optimal throughput at roughly $807/month.
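For capacity-planning scripts, the hyperparameters quoted in the brief can be captured in a small config object. This is a minimal sketch: the layer count, hidden size, and context length come from this page, while the 32-query/8-KV attention-head split and 128 head dimension are assumptions based on commonly published Mistral Nemo specs, not values stated here.

```python
# Hedged sketch of Mistral Nemo 12B's key hyperparameters.
# num_layers, hidden_size, and max_position_embeddings come from the brief above;
# the attention-head split and head_dim are assumptions from commonly published specs.
MISTRAL_NEMO_12B = {
    "num_layers": 40,
    "hidden_size": 5120,
    "num_attention_heads": 32,      # assumed
    "num_key_value_heads": 8,       # assumed (GQA)
    "head_dim": 128,                # assumed
    "max_position_embeddings": 131_072,
}
```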
Architecture Details
Memory Requirements
BF16 Weights
24.0 GB
FP8 Weights
12.0 GB
INT4 Weights
6.0 GB
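The weight footprints above follow directly from parameter count times bytes per parameter. A quick sketch of the arithmetic, assuming a flat 12B parameters and ignoring per-tensor quantization overhead such as scales:

```python
# Approximate weight memory = parameter count x bytes per parameter.
# Assumes exactly 12B parameters; ignores quantization scale/zero-point overhead.
PARAMS = 12e9
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")  # bf16: 24.0 GB, fp8: 12.0 GB, int4: 6.0 GB
```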
GPU Compatibility Matrix
Mistral Nemo 12B is compatible with 82% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| A100 40GB SXM · BF16 · 1 GPU · vllm | 95/100 | 349.9 tok/s | 2.9 ms | 0 ms | $807 | $0.88 |
| BF16 · 1 GPU · vllm | 95/100 | 172.8 tok/s | 5.8 ms | 1 ms | $465 | $1.02 |
| BF16 · 1 GPU · vllm | 95/100 | 156.6 tok/s | 6.4 ms | 1 ms | $399 | $0.97 |
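All of the recommended configurations run vLLM with BF16 weights on a single GPU. Below is a minimal offline-inference sketch; the Hugging Face model ID and the reduced max_model_len are assumptions (the full 131,072-token window needs more KV-cache headroom than a 40 GB card leaves at BF16), so adjust both to your setup.

```python
# Minimal vLLM sketch for single-GPU BF16 serving.
# Assumptions: the HF model ID below, and a trimmed context length to leave
# KV-cache headroom on a 40 GB card. Raise tensor_parallel_size to 2 for the
# dual RTX 3090 setup listed under Deployment Options.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Nemo-Instruct-2407",  # assumed model ID
    dtype="bfloat16",
    max_model_len=32_768,          # assumption: reduced from 131,072 to fit 40 GB
    gpu_memory_utilization=0.90,
    tensor_parallel_size=1,
)

outputs = llm.generate(
    ["Summarize the trade-offs between BF16 and INT4 inference."],
    SamplingParams(max_tokens=256, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```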
Deployment Options
API Deployment
deepinfra
$0.13/M
output tokens
Single GPU
A100 40GB SXM
$807/mo
Min VRAM: 12 GB
Multi-GPU
RTX 3090 x2
343.7 tok/s
TP · $361/mo
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| deepinfra | $0.13 | $0.13 | Cheapest |
| together | $0.18 | $0.18 | |
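deepinfra exposes an OpenAI-compatible endpoint, so the cheapest API route can be exercised with the standard openai client. A hedged sketch follows; the base URL and model identifier are assumptions and should be checked against the provider's documentation.

```python
# Hedged sketch: calling Mistral Nemo 12B through deepinfra's OpenAI-compatible API.
# The base URL and model name are assumptions; verify against the provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed endpoint
    api_key="YOUR_DEEPINFRA_API_KEY",
)

resp = client.chat.completions.create(
    model="mistralai/Mistral-Nemo-Instruct-2407",  # assumed model ID
    messages=[{"role": "user", "content": "Give three uses for a 128K context window."}],
    max_tokens=300,
)
print(resp.choices[0].message.content)
```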
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| deepinfra (Best Value) | $0.13 | $0.13 | $1 |
| together | $0.18 | $0.18 | $2 |
Cost per 1,000 Requests
Short (500 tok)
$0.09
via deepinfra
Medium (2K tok)
$0.36
via deepinfra
Long (8K tok)
$1.30
via deepinfra
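The per-1,000-request figures above combine input and output tokens at deepinfra's flat $0.13/M rate. A small sketch of the arithmetic is below; the output lengths match the tiers above, but the paired input-token counts are illustrative assumptions, so treat the printed values as estimates.

```python
# Cost per 1,000 requests at a flat $0.13/M token rate (deepinfra).
# Output lengths come from the tiers above; the paired input lengths are
# assumptions chosen for illustration, so the results are estimates.
PRICE_PER_TOKEN = 0.13 / 1_000_000  # $ per token, input and output priced equally

TIERS = {               # (assumed input tokens, output tokens) per request
    "short":  (200, 500),
    "medium": (750, 2_000),
    "long":   (2_000, 8_000),
}

for name, (inp, out) in TIERS.items():
    cost = (inp + out) * PRICE_PER_TOKEN * 1_000  # 1,000 requests
    print(f"{name}: ${cost:.2f} per 1,000 requests")
# approximately: short $0.09, medium $0.36, long $1.30
```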
Performance Estimates
Throughput by GPU
VRAM Breakdown (A100 40GB SXM, BF16)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 24.0 GB | ~349.9 tok/s |
| FP8 | 12.0 GB | n/a |
| INT4 | 6.0 GB | n/a |
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Mistral Nemo 12B
Similar Models
Amazon Nova Lite
12B params · dense
Quality: 50
from $0.24/M
Gemma 3 12B
12B params · dense
Quality: 71
from $0.10/M
Pixtral 12B
12B params · dense
Quality: 50
from $0.15/M
FLUX.1 Dev
12B params · dense
Quality: 50
from $25.00/M
FLUX.2
12B params · dense
Quality: 50
Frequently Asked Questions
How much VRAM does Mistral Nemo 12B need for inference?
Mistral Nemo 12B requires approximately 24.0 GB of VRAM at BF16 precision, 12.0 GB at FP8, or 6.0 GB at INT4 quantization. Additional VRAM is needed for the KV cache (163,840 bytes per token) and activations (~1.50 GB).
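The 163,840 bytes-per-token figure is consistent with a standard GQA KV-cache estimate. A sketch of the arithmetic follows; the 40-layer count and BF16 precision come from this page, while the 8 KV heads and 128 head dimension are assumptions based on commonly published Mistral Nemo specs.

```python
# Per-token KV-cache size = 2 (K and V) x layers x kv_heads x head_dim x bytes/elem.
# 40 layers and BF16 (2 bytes) come from this page; 8 KV heads and head_dim 128
# are assumptions based on commonly published Mistral Nemo specs.
layers, kv_heads, head_dim, bytes_per_elem = 40, 8, 128, 2
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(kv_bytes_per_token)  # 163840
```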
What is the best GPU for Mistral Nemo 12B?
The top recommended GPU for Mistral Nemo 12B is the A100 40GB SXM using BF16 precision. It achieves approximately 349.9 tokens/sec at an estimated cost of $807/month ($0.88/M tokens). Score: 95/100.
How much does Mistral Nemo 12B inference cost?
Mistral Nemo 12B API inference starts from $0.13/M input tokens and $0.13/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.