
StarCoder2 15B

BigCode · dense · 15.5B parameters · 16,384 context

Quality Score: 42/100
Parameters: 15.5B
Context Window: 16K tokens
Architecture: Dense
Best GPU: H100 SXM
Cheapest API: $0.30/M

Intelligence Brief

StarCoder2 15B is a 15.5B-parameter dense model from BigCode, using Grouped Query Attention (GQA) across 40 layers with a hidden dimension of 6,144. It offers a 16,384-token context window and is focused on code. On standardized benchmarks it scores MMLU 45, HumanEval 46, and GSM8K 32. The most cost-effective API deployment is via huggingface at $0.30/M output tokens; for self-hosted inference, the H100 SXM delivers optimal throughput at $1794/month.

Architecture Details

Type: Dense
Total Parameters: 15.5B
Active Parameters: 15.5B
Layers: 40
Hidden Dimension: 6,144
Attention Heads: 48
KV Heads: 4
Head Dimension: 128
Vocab Size: 49,152

Memory Requirements

BF16 Weights: 31.0 GB
FP8 Weights: 15.5 GB
INT4 Weights: 7.8 GB
KV-Cache per Token: 81,920 bytes
Activation Estimate: 1.50 GB
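
These figures follow directly from the architecture table. A minimal sketch that reproduces them; the factor of 2 in the cache term covers the separate K and V tensors, and the cache is assumed to be stored in BF16:

```python
# Reproduce StarCoder2 15B's memory figures from its architecture specs.
PARAMS = 15.5e9
LAYERS, KV_HEADS, HEAD_DIM = 40, 4, 128

# Weight memory: parameter count x bytes per parameter.
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: {PARAMS * bytes_per_param / 1e9:.1f} GB")

# KV-cache per token: K and V tensors (hence the 2x), one per layer,
# each kv_heads x head_dim, at 2 bytes per element for BF16.
kv_bytes = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
print(f"KV-cache per token: {kv_bytes} bytes")  # -> 81920
```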

GPU Compatibility Matrix

StarCoder2 15B is compatible with 82% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs):
B200 NVL (pair): 360GB
B300: 288GB
B100 SXM: 192GB
GB200 NVL72 (per GPU): 192GB

Hopper (7 GPUs):
H100 NVL 94GB (per GPU pair): 188GB
H200 SXM: 141GB
H20: 96GB
GH200: 96GB

Ada Lovelace (11 GPUs):
L40S: 48GB
L40: 48GB
RTX 6000 Ada: 48GB
L20: 48GB

Ampere (16 GPUs):
A100 80GB SXM: 80GB
A100 80GB PCIe: 80GB
A16: 64GB
RTX A6000: 48GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
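
The matrix reduces to a capacity check: weights at the chosen precision, plus KV-cache for the target context, plus activations, against the card's VRAM. A rough sketch; the fit thresholds and the choice to price the cache at BF16 regardless of weight precision are illustrative assumptions, not the site's exact scoring:

```python
# Rough GPU fit check: weights + full-context KV-cache + activations
# must fit in VRAM. Thresholds are illustrative, not the site's scoring.
def fits(vram_gb: float, weights_gb: float, context: int = 16_384,
         kv_bytes_per_token: int = 81_920, activations_gb: float = 1.5) -> str:
    needed = weights_gb + context * kv_bytes_per_token / 1e9 + activations_gb
    if needed > vram_gb:
        return "No fit"
    headroom = vram_gb / needed
    return "Tight" if headroom < 1.2 else "Good" if headroom < 2 else "Excellent"

print(fits(48, 31.0))   # RTX A6000 at BF16 -> Good
print(fits(80, 15.5))   # A100 80GB at FP8  -> Excellent
```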

GPU Recommendations

H100 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 100/100
Throughput: 1.1K tok/s
Latency (ITL): 1.0ms
Est. TTFT: 0ms
Cost/Month: $1794
Cost/M Tokens: $0.65
H100 PCIe (optimal)
FP8 · 1 GPU · tensorrt-llm
Score: 95/100
Throughput: 1.1K tok/s
Latency (ITL): 1.0ms
Est. TTFT: 0ms
Cost/Month: $1794
Cost/M Tokens: $0.65
RTX A6000 (optimal)
BF16 · 1 GPU · vllm
Score: 95/100
Throughput: 133.8 tok/s
Latency (ITL): 7.5ms
Est. TTFT: 1ms
Cost/Month: $465
Cost/M Tokens: $1.32
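
The Cost/M Tokens column is just monthly cost divided by monthly token output. A quick back-of-envelope check, assuming full utilization of a 30-day month; the page's published figures land within a few percent, so its exact utilization assumption evidently differs slightly:

```python
# Back out $/M tokens from monthly GPU cost and sustained throughput.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    m_tokens = tok_per_s * SECONDS_PER_MONTH / 1e6  # millions of tokens
    return monthly_usd / m_tokens

print(f"${cost_per_m_tokens(1794, 1100):.2f}/M")   # H100 SXM  -> $0.63 (page: $0.65)
print(f"${cost_per_m_tokens(465, 133.8):.2f}/M")   # RTX A6000 -> $1.34 (page: $1.32)
```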

Deployment Options

API
API Deployment: huggingface · $0.30/M output tokens

Self-Hosted
Single GPU: H100 SXM · $1794/mo · Min VRAM: 16 GB

Scale
Multi-GPU: RTX 3090 x2 · 272.1 tok/s · TP (tensor parallel) · $361/mo
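
For the multi-GPU row, tensor parallelism splits every layer across both cards, halving per-GPU weight memory. A minimal vLLM sketch, assuming the public bigcode/starcoder2-15b checkpoint; on two 24 GB RTX 3090s, BF16 weights land at roughly 15.5 GB per GPU:

```python
# Minimal vLLM tensor-parallel serving sketch for StarCoder2 15B.
# Assumes the public bigcode/starcoder2-15b checkpoint; BF16 weights
# split to ~15.5 GB per GPU across two RTX 3090s, leaving headroom
# for KV-cache and activations.
from vllm import LLM, SamplingParams

llm = LLM(
    model="bigcode/starcoder2-15b",
    tensor_parallel_size=2,   # split layers across both GPUs (TP)
    max_model_len=16_384,     # the model's full context window
)

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["def quicksort(arr):"], params)
print(outputs[0].outputs[0].text)
```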

API Pricing Comparison

Provider      Input $/M   Output $/M   Badges
huggingface   $0.30       $0.30        Cheapest
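
Hitting the cheapest API is a few lines with the huggingface_hub client. A minimal sketch, assuming an HF token is configured in the environment and the model is served under its public id:

```python
# Minimal Hugging Face Inference API call for StarCoder2 15B.
# Assumes HF_TOKEN is set in the environment.
from huggingface_hub import InferenceClient

client = InferenceClient(model="bigcode/starcoder2-15b")

# Base code models expect raw code prompts, not chat messages.
completion = client.text_generation("def fibonacci(n):", max_new_tokens=64)
print(completion)
```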

Cost Analysis

Provider                   Input $/M   Output $/M   ~Monthly Cost
huggingface (Best Value)   $0.30       $0.30        $3

Cost per 1,000 Requests

Short (500 tok): $0.21 via huggingface
Medium (2K tok): $0.84 via huggingface
Long (8K tok): $3.00 via huggingface
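
These figures are consistent with each request carrying an input prompt on top of the generated tokens. The prompt sizes below are inferred, not published; they are simply one set that reproduces the table exactly at the $0.30/M rate:

```python
# Per-1,000-request cost: (input + output tokens) x per-token rates.
IN_RATE = OUT_RATE = 0.30 / 1e6   # huggingface, $ per token

def cost_per_1000(input_tok: int, output_tok: int) -> float:
    return 1000 * (input_tok * IN_RATE + output_tok * OUT_RATE)

# Inferred prompt sizes that reproduce the table above.
print(f"${cost_per_1000(200, 500):.2f}")     # Short  -> $0.21
print(f"${cost_per_1000(800, 2000):.2f}")    # Medium -> $0.84
print(f"${cost_per_1000(2000, 8000):.2f}")   # Long   -> $3.00
```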

Performance Estimates

Throughput by GPU

H100 SXM: 1.1K tok/s
H100 PCIe: 1.1K tok/s
RTX A6000: 133.8 tok/s

VRAM Breakdown (H100 SXM, FP8)

Weights: 15.5 GB · KV-Cache: 0.7 GB · Activations: 12.0 GB · Overhead: 0.8 GB
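
The breakdown sums to about 29 GB of the card's 80 GB. The 0.7 GB KV-cache line matches a full 16,384-token context with the cache also stored at FP8, i.e. half the BF16 per-token figure; a quick check:

```python
# Sanity-check the H100 SXM (FP8) VRAM breakdown.
weights = 15.5                        # FP8 weights, GB
kv = 16_384 * (81_920 / 2) / 1e9      # full context, FP8 cache -> ~0.67 GB
activations, overhead = 12.0, 0.8
print(f"total: {weights + kv + activations + overhead:.1f} GB of 80 GB")
```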

Precision Impact

BF16: 31.0 GB weights/GPU
FP8: 15.5 GB weights/GPU · ~1.1K tok/s
INT4: 7.8 GB weights/GPU

Quality Benchmarks

Overall: Bottom 25% (4th percentile across all models)
MMLU: 45.0 · Bottom 25% (6th pctile)
HumanEval: 46.0 · Below Average (40th pctile)
GSM8K: 32.0 · Bottom 25% (5th pctile)
MT-Bench: 58.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

StarCoder2 7B
6.73B params · dense · Quality: 35 · from $0.15/M
Lower quality · Cheaper · Smaller model

Nemotron 15B
15B params · dense · Quality: 72 · from $0.30/M
Smaller context · Higher quality

Frequently Asked Questions

How much VRAM does StarCoder2 15B need for inference?

StarCoder2 15B requires approximately 31.0 GB of VRAM at BF16 precision, 15.5 GB at FP8, or 7.8 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (81,920 bytes per token at BF16) and activations (~1.50 GB).

What is the best GPU for StarCoder2 15B?

The top recommended GPU for StarCoder2 15B is the H100 SXM using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $1794/month ($0.65/M tokens). Score: 100/100.

How much does StarCoder2 15B inference cost?

StarCoder2 15B API inference starts from $0.30/M input tokens and $0.30/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.