Gemma 2 27B
Google · dense · 27B parameters · 8,192 context
Parameters
27B
Context Window
8K tokens
Architecture
Dense
Best GPU
H20
Cheapest API
$0.27/M
Quality Score
65/100
Intelligence Brief
Gemma 2 27B is a 27B-parameter dense model from Google, featuring Grouped Query Attention (GQA) with 46 layers and a 4,608 hidden dimension. With an 8,192-token context window, it supports structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 75.2, HumanEval 45, and GSM8K 80. The most cost-effective API deployment is via deepinfra at $0.27/M output tokens. For self-hosted inference, the H20 delivers the best throughput at an estimated $940/month.
Architecture Details
Memory Requirements
BF16 Weights
54.0 GB
FP8 Weights
27.0 GB
INT4 Weights
13.5 GB
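The weight footprints above follow directly from parameter count × bytes per parameter. A minimal sketch, assuming exactly 27B parameters and ignoring any per-layer overhead:

```python
def weight_memory_gb(params_b: float, bits_per_param: float) -> float:
    """Approximate weight memory: parameter count x bytes per parameter, in GB."""
    bytes_total = params_b * 1e9 * (bits_per_param / 8)
    return bytes_total / 1e9  # decimal GB, matching the figures above

for name, bits in [("BF16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_memory_gb(27, bits):.1f} GB")
# BF16: 54.0 GB
# FP8: 27.0 GB
# INT4: 13.5 GB
```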
GPU Compatibility Matrix
Gemma 2 27B is compatible with 62% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations

| Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 1.1K tok/s | 1.0ms | 0ms | $940 | $0.34 |
| FP8 · 1 GPU · tensorrt-llm | 95/100 | 1.1K tok/s | 1.0ms | 0ms | $2,553 | $0.93 |
| FP8 · 1 GPU · tensorrt-llm | 95/100 | 1.0K tok/s | 1.0ms | 0ms | $1,794 | $0.66 |
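The cost-per-token column can be cross-checked from throughput and monthly cost. A rough sketch, assuming round-the-clock utilization (the listed $0.34 figure is slightly higher than this estimate, presumably due to rounding or a utilization assumption):

```python
def cost_per_million_tokens(monthly_cost_usd: float, tok_per_s: float,
                            utilization: float = 1.0) -> float:
    """$ per 1M generated tokens from a fixed-cost GPU running continuously."""
    tokens_per_month = tok_per_s * utilization * 3600 * 24 * 30
    return monthly_cost_usd / (tokens_per_month / 1e6)

# H20 at $940/month sustaining ~1.1K tok/s:
print(round(cost_per_million_tokens(940, 1100), 2))  # 0.33
```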
Deployment Options
API Deployment
deepinfra
$0.27/M
output tokens
Single GPU
H20
$940/mo
Min VRAM: 27 GB
Multi-GPU
A100 40GB SXM x2
306.1 tok/s
Tensor parallel (TP) · $1,613/mo
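For a two-GPU tensor-parallel deployment like the A100 40GB pair above, a serving stack such as vLLM splits each weight matrix across the GPUs. A hypothetical launch command (the model ID and flags are illustrative of vLLM's syntax; check your serving framework's docs for the exact options):

```shell
# Serve Gemma 2 27B across 2 GPUs with tensor parallelism (vLLM syntax).
# --tensor-parallel-size splits each layer's weights across the two A100s.
vllm serve google/gemma-2-27b-it \
    --tensor-parallel-size 2 \
    --max-model-len 8192
```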
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| deepinfra | $0.27 | $0.27 | Cheapest |
| together | $0.30 | $0.30 | |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| deepinfra (Best Value) | $0.27 | $0.27 | $3 |
| together | $0.30 | $0.30 | $3 |
Cost per 1,000 Requests
Short (500 tok)
$0.19
via deepinfra
Medium (2K tok)
$0.76
via deepinfra
Long (8K tok)
$2.70
via deepinfra
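These per-request figures follow from the flat $0.27/M rate applied to total (input + output) tokens. A sketch where the assumed prompt sizes are mine, not stated on this page (the long tier, for instance, is consistent with a ~2K-token prompt plus 8K-token output):

```python
def cost_per_1k_requests(input_tok: int, output_tok: int,
                         in_price: float = 0.27, out_price: float = 0.27) -> float:
    """Cost in USD for 1,000 requests at given per-million-token prices."""
    per_request = (input_tok * in_price + output_tok * out_price) / 1e6
    return 1000 * per_request

# Assumed 2K-token prompt + 8K-token completion reproduces the long tier:
print(round(cost_per_1k_requests(2000, 8000), 2))  # 2.7
```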
Performance Estimates
[Charts: Throughput by GPU · VRAM Breakdown (H20, FP8)]
Precision Impact

| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 54.0 GB | — |
| FP8 | 27.0 GB | ~1.1K tok/s |
| INT4 | 13.5 GB | — |
Similar Models
Gemma 2 9B
9.2B params · dense
Quality: 68
from $0.10/M
Gemma 3 27B
27B params · dense
Quality: 69
from $0.20/M
InternVL2 26B
26B params · dense
Quality: 50
Mistral Small 24B
24B params · dense
Quality: 68
from $0.30/M
Mistral Small 3.1 24B
24B params · dense
Quality: 50
from $0.30/M
Frequently Asked Questions
How much VRAM does Gemma 2 27B need for inference?
Gemma 2 27B requires approximately 54.0 GB of VRAM at BF16 precision, 27.0 GB at FP8, or 13.5 GB at INT4 quantization. Additional VRAM is needed for the KV cache (376,832 bytes per token) and activations (~1.50 GB).
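The per-token KV-cache figure is consistent with the architecture above under standard GQA accounting: 2 (K and V) × layers × KV heads × head dim × bytes per value. The KV-head count and head dimension below are assumptions (not stated on this page) chosen to reproduce the quoted number:

```python
def kv_cache_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                             bytes_per_value: int = 2) -> int:
    """Bytes of KV cache per token: a K and a V vector for every layer."""
    return 2 * layers * kv_heads * head_dim * bytes_per_value

# 46 layers (from the brief); 16 KV heads x 128 head dim are assumed values.
print(kv_cache_bytes_per_token(46, 16, 128))  # 376832
```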
What is the best GPU for Gemma 2 27B?
The top recommended GPU for Gemma 2 27B is the H20 at FP8 precision: approximately 1.1K tokens/sec at an estimated $940/month ($0.34/M tokens), with a score of 100/100.
How much does Gemma 2 27B inference cost?
Gemma 2 27B API inference starts from $0.27/M input tokens and $0.27/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
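A quick break-even check between the API rate and the self-hosted H20 follows from dividing monthly GPU cost by the API price; below that monthly volume the API is cheaper. This is a rough sketch that ignores input-token costs and assumes the GPU can absorb the full load:

```python
def breakeven_tokens_per_month(gpu_monthly_usd: float, api_price_per_m: float) -> float:
    """Monthly output-token volume (in millions) where self-hosting matches the API cost."""
    return gpu_monthly_usd / api_price_per_m

# H20 at $940/mo vs deepinfra at $0.27/M output tokens:
print(round(breakeven_tokens_per_month(940, 0.27)))  # 3481 (million tokens/month)
```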