Gemma 2 2B
Google · dense · 2.6B parameters · 8,192-token context
Parameters: 2.6B
Context Window: 8K tokens
Architecture: Dense
Best GPU: RTX 3080
Quality Score: 44/100
Intelligence Brief
Gemma 2 2B is a 2.6B-parameter dense model from Google, featuring Grouped Query Attention (GQA) with 26 layers and a hidden dimension of 2,304. With an 8,192-token context window, it supports structured output, code, math, and multilingual tasks. On standardized benchmarks it scores 52.2 on MMLU, 25 on HumanEval, and 48 on GSM8K. For self-hosted inference, the RTX 3080 delivers the best throughput per dollar at roughly $133/month.
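For a quick smoke test before committing to hardware, the model can be served with vLLM (the framework used in the recommendations below). A minimal sketch, assuming vLLM is installed and you have access to the gated google/gemma-2-2b-it checkpoint on Hugging Face:

```python
# Minimal offline-inference sketch with vLLM.
# Assumption: the instruction-tuned checkpoint "google/gemma-2-2b-it"
# has been accepted/downloaded from the Hugging Face Hub.
from vllm import LLM, SamplingParams

llm = LLM(model="google/gemma-2-2b-it", dtype="bfloat16")  # BF16 weights, ~5.2 GB
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain grouped query attention in one paragraph."], params)
print(outputs[0].outputs[0].text)
```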
Architecture Details
Memory Requirements
BF16 Weights: 5.2 GB
FP8 Weights: 2.6 GB
INT4 Weights: 1.3 GB
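These figures are simply the parameter count multiplied by bytes per parameter; a quick check:

```python
# Weight-memory estimate: parameters x bytes-per-parameter.
PARAMS = 2.6e9  # 2.6B parameters, per the spec table above

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")  # bf16: 5.2, fp8: 2.6, int4: 1.3
```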
GPU Compatibility Matrix
Gemma 2 2B is compatible with 100% of tested configurations: all 41 GPUs at all 3 precision levels.
GPU Recommendations
RTX 3080 · BF16 · 1 GPU · vllm (Score: 100/100)
Throughput: 710.3 tok/s
Latency (ITL): 1.4 ms
Est. TTFT: 0 ms
Cost/Month: $133
Cost/M Tokens: $0.07
BF16 · 1 GPU · vllm (Score: 100/100)
Throughput: 254.2 tok/s
Latency (ITL): 3.9 ms
Est. TTFT: 1 ms
Cost/Month: $209
Cost/M Tokens: $0.31
BF16 · 1 GPU · vllm (Score: 100/100)
Throughput: 418.7 tok/s
Latency (ITL): 2.4 ms
Est. TTFT: 0 ms
Cost/Month: $85
Cost/M Tokens: $0.08
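The latency and cost figures in these cards follow from throughput and monthly price alone. A sanity-check sketch, assuming continuous utilization and a 730-hour month:

```python
# Reproduce the ITL and Cost/M Tokens figures from the cards above.
SECONDS_PER_MONTH = 730 * 3600  # assumption: average 730-hour month

configs = [
    {"tok_s": 710.3, "usd_month": 133},  # RTX 3080 card
    {"tok_s": 254.2, "usd_month": 209},
    {"tok_s": 418.7, "usd_month": 85},
]

for c in configs:
    itl_ms = 1000 / c["tok_s"]                         # inter-token latency
    tokens_per_month = c["tok_s"] * SECONDS_PER_MONTH  # sustained output
    usd_per_m = c["usd_month"] / (tokens_per_month / 1e6)
    print(f'{c["tok_s"]} tok/s -> ITL {itl_ms:.1f} ms, ${usd_per_m:.2f}/M tokens')
```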
Deployment Options
API Deployment: No API pricing available
Single GPU: RTX 3080 · $133/mo · Min VRAM: 3 GB
Multi-GPU: RTX 3080 · 710.3 tok/s · Best available config
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Throughput by GPU
VRAM Breakdown (RTX 3080, BF16)
Precision Impact
bf16: 5.2 GB weights/GPU · ~710.3 tok/s
fp8: 2.6 GB weights/GPU
int4: 1.3 GB weights/GPU
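Precision determines how much of a card's VRAM is left over for KV cache and activations. A rough fit check, assuming the 10 GB RTX 3080 variant and the per-token KV figure from the FAQ below:

```python
# Rough VRAM-fit check: weights + KV cache + activations vs. card capacity.
GPU_VRAM_GB = 10.0            # assumption: RTX 3080 10 GB variant
KV_BYTES_PER_TOKEN = 106_496  # per-token KV cache, per the FAQ below
ACTIVATIONS_GB = 0.30         # activation estimate, per the FAQ below
WEIGHTS_GB = {"bf16": 5.2, "fp8": 2.6, "int4": 1.3}

for precision, w in WEIGHTS_GB.items():
    free_gb = GPU_VRAM_GB - w - ACTIVATIONS_GB
    # KV cache kept at the BF16 figure regardless of weight precision.
    max_kv_tokens = int(free_gb * 1e9 / KV_BYTES_PER_TOKEN)
    print(f"{precision}: ~{max_kv_tokens:,} KV-cache tokens fit")
```

Even at BF16 this leaves room for roughly 42K KV-cache tokens, comfortably more than five concurrent full-context (8,192-token) sequences.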
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy Gemma 2 2B
Self-Hosted Infrastructure
Similar Models
Gemma 1.1 2B · 2.5B params · dense · Quality: 50
RecurrentGemma 2B · 2.7B params · dense · Quality: 50
Phi 2 · 2.7B params · dense · Quality: 50
PaLI-Gemma 3B · 2.9B params · dense · Quality: 50
SeamlessM4T v2 Large · 2.3B params · dense · Quality: 50
Frequently Asked Questions
How much VRAM does Gemma 2 2B need for inference?
Gemma 2 2B requires approximately 5.2 GB of VRAM at BF16 precision, 2.6 GB at FP8, or 1.3 GB at INT4 quantization. Additional VRAM is needed for the KV cache (106,496 bytes per token, roughly 104 KB) and activations (~0.30 GB).
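That per-token figure can be reconstructed from the GQA geometry. A sketch, assuming 4 KV heads and a head dimension of 256 (not stated on this page, but consistent with the published Gemma 2 2B config):

```python
# Per-token KV-cache bytes for a GQA model at BF16:
# 2 tensors (K and V) x layers x kv_heads x head_dim x bytes/element.
LAYERS = 26      # from the architecture details above
KV_HEADS = 4     # assumption: published Gemma 2 2B config
HEAD_DIM = 256   # assumption: published Gemma 2 2B config
BYTES_BF16 = 2

kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_BF16
print(kv_bytes_per_token)  # 106496, matching the figure above
```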
What is the best GPU for Gemma 2 2B?
The top recommended GPU for Gemma 2 2B is the RTX 3080 at BF16 precision: approximately 710.3 tokens/sec at an estimated $133/month ($0.07/M tokens), with a recommendation score of 100/100.
How much does Gemma 2 2B inference cost?
Gemma 2 2B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.