StarCoder2 15B
BigCode · dense · 15.5B parameters · 16,384-token context
Parameters: 15.5B
Context Window: 16K tokens
Architecture: Dense
Best GPU: H100 SXM
Cheapest API: $0.30/M
Quality Score: 42/100
Intelligence Brief
StarCoder2 15B is a 15.5B-parameter dense model from BigCode, featuring Grouped Query Attention (GQA) with 40 layers and a 6,144-dimensional hidden state. With a 16,384-token context window, it is built for code generation. On standardized benchmarks it scores MMLU 45, HumanEval 46, and GSM8K 32. The most cost-effective API deployment is via huggingface at $0.30/M output tokens; for self-hosted inference, the H100 SXM delivers optimal throughput at $1794/month.
Architecture Details
Memory Requirements
BF16 Weights: 31.0 GB
FP8 Weights: 15.5 GB
INT4 Weights: 7.8 GB
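These figures follow directly from parameter count × bytes per parameter; a quick sanity check in Python:

```python
# Back-of-the-envelope weight memory: parameters x bytes per parameter.
# The 15.5B count comes from the spec above; GB here means 10^9 bytes,
# matching the table's figures.
PARAMS = 15.5e9

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    print(f"{precision}: {PARAMS * nbytes / 1e9:.1f} GB")
# bf16: 31.0 GB, fp8: 15.5 GB, int4: 7.8 GB
```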
GPU Compatibility Matrix
StarCoder2 15B is compatible with 82% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | 1.0ms | 0ms | $1794 | $0.65 |
| FP8 · 1 GPU · tensorrt-llm | 95/100 | 1.1K tok/s | 1.0ms | 0ms | $1794 | $0.65 |
| BF16 · 1 GPU · vllm | 95/100 | 133.8 tok/s | 7.5ms | 1ms | $465 | $1.32 |
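For the BF16 · vLLM option, a minimal offline-inference sketch is shown below. The model ID is the public Hugging Face Hub checkpoint for this model; the flag values are illustrative defaults, not settings taken from the recommendation engine above.

```python
# Minimal vLLM offline-inference sketch for StarCoder2 15B (BF16, single GPU).
# Assumes vLLM is installed and the GPU can hold the 31 GB of BF16 weights
# plus KV cache; for two smaller GPUs (e.g. the RTX 3090 x2 option below),
# set tensor_parallel_size=2 instead.
from vllm import LLM, SamplingParams

llm = LLM(
    model="bigcode/starcoder2-15b",  # HF Hub ID for this model
    dtype="bfloat16",
    max_model_len=16384,             # matches the 16K context window
    # tensor_parallel_size=2,        # uncomment for multi-GPU tensor parallelism
)

params = SamplingParams(max_tokens=128, temperature=0.2)
outputs = llm.generate(["def quicksort(arr):"], params)
print(outputs[0].outputs[0].text)
```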
Deployment Options
- API Deployment: huggingface at $0.30/M output tokens
- Single GPU: H100 SXM, $1794/mo (min VRAM: 16 GB)
- Multi-GPU: RTX 3090 x2 (tensor parallel), 272.1 tok/s, $361/mo
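A minimal sketch of calling the huggingface API route above via the huggingface_hub client; the token is a placeholder and the sampling settings are illustrative.

```python
# Minimal sketch: querying StarCoder2 15B through the Hugging Face
# Inference API (the $0.30/M provider listed above). Requires an HF
# token with inference access; "HF_TOKEN" here is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="bigcode/starcoder2-15b", token="HF_TOKEN")
completion = client.text_generation(
    "def fibonacci(n):",
    max_new_tokens=64,
    temperature=0.2,
)
print(completion)
```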
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| huggingface | $0.30 | $0.30 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| huggingface (Best Value) | $0.30 | $0.30 | $3 |
Cost per 1,000 Requests
| Request size | Cost per 1,000 requests | Provider |
|---|---|---|
| Short (500 tok) | $0.21 | huggingface |
| Medium (2K tok) | $0.84 | huggingface |
| Long (8K tok) | $3.00 | huggingface |
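These figures reduce to simple per-token arithmetic. In the sketch below, the prompt-token counts are not stated on this page; they are inferred as the values that reproduce the table at the flat $0.30/M rate.

```python
# Cost per 1,000 requests at huggingface's flat $0.30/M rate for both
# input and output tokens. The prompt sizes below are inferred, not
# stated on the page: they are what makes the arithmetic reproduce the
# table ($0.21, $0.84, $3.00).
PRICE_PER_TOKEN = 0.30 / 1e6  # $/token, same rate for input and output

def cost_per_1k_requests(prompt_tokens: int, output_tokens: int) -> float:
    return round(1000 * (prompt_tokens + output_tokens) * PRICE_PER_TOKEN, 2)

print(cost_per_1k_requests(200, 500))    # 0.21  (Short)
print(cost_per_1k_requests(800, 2000))   # 0.84  (Medium)
print(cost_per_1k_requests(2000, 8000))  # 3.0   (Long)
```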
Performance Estimates
[Chart: Throughput by GPU]
[Chart: VRAM Breakdown (H100 SXM, FP8)]
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 31.0 GB | — |
| FP8 | 15.5 GB | ~1.1K tok/s |
| INT4 | 7.8 GB | — |
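To get close to the INT4 footprint above on commodity hardware, one option is 4-bit loading via bitsandbytes. A minimal sketch, assuming transformers and bitsandbytes are installed and a CUDA GPU is available:

```python
# Loading StarCoder2 15B with 4-bit weights via bitsandbytes, so the
# weights fit in roughly the 7.8 GB shown above (KV cache and activation
# overhead comes on top).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantized matmuls run in BF16
)

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-15b")
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-15b",
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("def hello():", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```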
Quality Benchmarks
| Benchmark | Score |
|---|---|
| MMLU | 45 |
| HumanEval | 46 |
| GSM8K | 32 |
Supported Frameworks
TensorRT-LLM, vLLM
Supported Precisions
BF16, FP8, INT4
Similar Models
| Model | Parameters | Architecture | Quality | Cheapest API |
|---|---|---|---|---|
| StarCoder2 7B | 6.73B | dense | 35 | from $0.15/M |
| OctoCoder 15B | 15.5B | dense | 50 | — |
| DeepSeek V2 Lite | 15.7B | MoE | 50 | — |
| Nemotron 15B | 15B | dense | 72 | from $0.30/M |
| CodeGen2 16B | 16B | dense | 50 | — |
Frequently Asked Questions
How much VRAM does StarCoder2 15B need for inference?
StarCoder2 15B requires approximately 31.0 GB of VRAM at BF16 precision, 15.5 GB at FP8, or 7.8 GB at INT4 quantization. Additional VRAM is needed for the KV cache (81,920 bytes per token at FP16) and activations (~1.50 GB).
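That per-token figure is consistent with the GQA geometry in the brief. The sketch below reproduces it; the 4 KV heads and 128-dim head size are assumptions about this checkpoint (only the 40 layers are stated above), with the cache held in FP16/BF16.

```python
# KV-cache footprint per token: 2 (K and V) x layers x kv_heads x head_dim
# x bytes per element. Layers (40) come from the brief; 4 KV heads and
# head_dim=128 are assumed GQA settings, and the cache is assumed stored
# in FP16/BF16 (2 bytes per element).
LAYERS, KV_HEADS, HEAD_DIM, DTYPE_BYTES = 40, 4, 128, 2

kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * DTYPE_BYTES
print(kv_bytes_per_token)                  # 81920
print(kv_bytes_per_token * 16384 / 2**30)  # 1.25 GiB for a full 16K context
```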
What is the best GPU for StarCoder2 15B?
The top recommended GPU for StarCoder2 15B is the H100 SXM using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $1794/month ($0.65/M tokens). Score: 100/100.
How much does StarCoder2 15B inference cost?
StarCoder2 15B API inference starts from $0.30/M input tokens and $0.30/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.