StableLM 2 12B
Stability AI · dense · 12.1B parameters · 4,096-token context
Parameters
12.1B
Context Window
4K tokens
Architecture
Dense
Best GPU
A100 40GB SXM
Cheapest API
$0.25/M
Intelligence Brief
StableLM 2 12B is a 12.1B-parameter dense model from Stability AI, featuring Grouped Query Attention (GQA) with 40 layers and 5,120 hidden dimensions. With a 4,096-token context window, it supports code and multilingual tasks. The most cost-effective API deployment is via stabilityai at $0.25/M output tokens. For self-hosted inference, the A100 40GB SXM delivers optimal throughput at $807/month.
Architecture Details
Grouped Query Attention (GQA) · 40 layers · 5,120 hidden dimensions
Memory Requirements
| Precision | Weights |
|---|---|
| BF16 | 24.2 GB |
| FP8 | 12.1 GB |
| INT4 | 6.0 GB |
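These figures follow directly from parameter count times bytes per parameter; a minimal sketch of that arithmetic in Python:

```python
# Weight memory by precision for StableLM 2 12B (12.1B parameters).
PARAMS = 12.1e9

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, bpp in BYTES_PER_PARAM.items():
    print(f"{precision}: {PARAMS * bpp / 1e9:.1f} GB")
# BF16: 24.2 GB, FP8: 12.1 GB, INT4: ~6.0 GB
```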
GPU Compatibility Matrix
StableLM 2 12B is compatible with 82% of the 123 evaluated GPU configurations (41 GPUs at 3 precision levels).
GPU Recommendations
| # | GPU | Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|---|---|
| 1 | A100 40GB SXM | BF16 · 1 GPU · vLLM | 95/100 | 312.3 tok/s | 3.2 ms | 1 ms | $807 | $0.98 |
| 2 | n/a | BF16 · 1 GPU · vLLM | 95/100 | 154.2 tok/s | 6.5 ms | 1 ms | $465 | $1.15 |
| 3 | n/a | BF16 · 1 GPU · vLLM | 95/100 | 139.8 tok/s | 7.2 ms | 1 ms | $399 | $1.09 |
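The per-token costs and ITL figures above can be reproduced from throughput and monthly rental alone. The sketch below assumes ~730 GPU-hours per month (24/7 operation) at full utilization, which is the assumption that matches the table; lower real-world utilization raises $/M tokens proportionally:

```python
# Derive $/M tokens and inter-token latency from throughput and monthly cost.
HOURS_PER_MONTH = 730  # assumed: 24/7 operation, full utilization

def cost_per_million(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * HOURS_PER_MONTH * 3600
    return monthly_usd / (tokens_per_month / 1e6)

def itl_ms(tok_per_s: float) -> float:
    # ITL is simply the inverse of decode throughput
    return 1000.0 / tok_per_s

for usd, tps in [(807, 312.3), (465, 154.2), (399, 139.8)]:
    print(f"${cost_per_million(usd, tps):.2f}/M tokens, ITL {itl_ms(tps):.1f} ms")
# $0.98/M, 3.2 ms · $1.15/M, 6.5 ms · $1.09/M, 7.2 ms
```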
Deployment Options
API Deployment
stabilityai · $0.25/M output tokens
Single GPU
A100 40GB SXM · $807/mo · Min VRAM: 12 GB
Multi-GPU
RTX 3090 x2 · 307.0 tok/s · TP · $361/mo
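For the multi-GPU option, a minimal vLLM tensor-parallelism sketch; the Hugging Face model id `stabilityai/stablelm-2-12b` and the BF16 dtype are assumptions here, so adjust both to your setup:

```python
# Serve StableLM 2 12B across two GPUs with vLLM tensor parallelism (TP).
from vllm import LLM, SamplingParams

llm = LLM(
    model="stabilityai/stablelm-2-12b",  # assumed HF model id
    tensor_parallel_size=2,              # e.g. RTX 3090 x2 as in the option above
    dtype="bfloat16",
)
out = llm.generate(
    ["Explain grouped query attention in one sentence."],
    SamplingParams(max_tokens=64),
)
print(out[0].outputs[0].text)
```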
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| stabilityai | $0.25 | $0.25 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| stabilityai (Best Value) | $0.25 | $0.25 | $3 |
Cost per 1,000 Requests
| Request Size | Cost per 1,000 Requests | Provider |
|---|---|---|
| Short (500 tok) | $0.17 | stabilityai |
| Medium (2K tok) | $0.70 | stabilityai |
| Long (8K tok) | $2.50 | stabilityai |
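These figures are straightforward token arithmetic at $0.25/M in both directions; the input/output split behind the exact numbers above isn't stated, so the split in this sketch is an assumption:

```python
# Per-request API cost arithmetic, stabilityai pricing.
IN_PRICE = OUT_PRICE = 0.25  # $/M tokens

def cost_per_1k_requests(input_toks: int, output_toks: int) -> float:
    per_request = (input_toks * IN_PRICE + output_toks * OUT_PRICE) / 1e6
    return per_request * 1_000

# Assumed 800-in / 2,000-out split for a "medium" request:
print(f"${cost_per_1k_requests(800, 2000):.2f}")  # $0.70
```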
Performance Estimates
Throughput by GPU
VRAM Breakdown (A100 40GB SXM, BF16)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 24.2 GB | ~312.3 tok/s |
| FP8 | 12.1 GB | n/a |
| INT4 | 6.0 GB | n/a |
Capabilities
Features
Code · Multilingual
Supported Frameworks
vLLM
Supported Precisions
BF16 · FP8 · INT4
Where to Deploy StableLM 2 12B
Similar Models
Amazon Nova Lite
12B params · dense
Quality: 50
from $0.24/M
Gemma 3 12B
12B params · dense
Quality: 71
from $0.10/M
Mistral Nemo 12B
12B params · dense
Quality: 62
from $0.13/M
Pixtral 12B
12B params · dense
Quality: 50
from $0.15/M
FLUX.1 Dev
12B params · dense
Quality: 50
from $25.00/M
Frequently Asked Questions
How much VRAM does StableLM 2 12B need for inference?
StableLM 2 12B requires approximately 24.2 GB of VRAM at BF16 precision, 12.1 GB at FP8, or 6.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (204,800 bytes per token) and activations (~1.00 GB).
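A minimal sketch of that estimate using only the figures from this answer; the KV-byte decomposition in the comment is one layout consistent with 40 layers at BF16, not a confirmed spec:

```python
# Total VRAM estimate: weights + KV-cache + activations.
WEIGHTS_GB = {"BF16": 24.2, "FP8": 12.1, "INT4": 6.0}
KV_BYTES_PER_TOKEN = 204_800  # per the FAQ; e.g. 2 (K+V) * 40 layers * 1,280 * 2 bytes
ACTIVATIONS_GB = 1.0

def vram_gb(precision: str, context_tokens: int) -> float:
    kv_gb = context_tokens * KV_BYTES_PER_TOKEN / 1e9
    return WEIGHTS_GB[precision] + kv_gb + ACTIVATIONS_GB

print(f"{vram_gb('BF16', 4096):.1f} GB")  # ~26.0 GB at the full 4,096-token context
```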
What is the best GPU for StableLM 2 12B?
The top recommended GPU for StableLM 2 12B is the A100 40GB SXM using BF16 precision. It achieves approximately 312.3 tokens/sec at an estimated cost of $807/month ($0.98/M tokens). Score: 95/100.
How much does StableLM 2 12B inference cost?
StableLM 2 12B API inference starts from $0.25/M input tokens and $0.25/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
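As a rough first cut at that comparison, a minimal break-even sketch pitting the API price against the flat monthly cost of the top self-hosted option; it ignores input-token pricing differences, utilization gaps, and ops overhead:

```python
# Break-even volume: stabilityai API vs. a self-hosted A100 40GB SXM.
API_PRICE = 0.25     # $/M tokens (stabilityai, input and output)
GPU_MONTHLY = 807.0  # $/month, from the GPU recommendations above

breakeven_m = GPU_MONTHLY / API_PRICE  # million tokens per month
print(f"Self-hosting breaks even above ~{breakeven_m:,.0f}M tokens/month")
# ~3,228M tokens/month (~3.2B tokens)
```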