Code Llama 34B
Meta · dense · 34B parameters · 100K token context
Parameters
34B
Context Window
100K tokens
Architecture
Dense
Best GPU
H20
Cheapest API
$0.78/M
Quality Score
55/100
Intelligence Brief
Code Llama 34B is a 34B-parameter dense model from Meta, featuring Grouped Query Attention (GQA) with 48 layers and a hidden dimension of 8,192. With a 100,000-token context window, it supports code and math workloads. On standardized benchmarks it achieves MMLU 56, HumanEval 48.8, and GSM8K 45. The most cost-effective API deployment is via together at $0.78/M output tokens; for self-hosted inference, an H20 delivers optimal throughput at $940/month.
Architecture Details
Memory Requirements
| Precision | Weights |
|---|---|
| BF16 | 68.0 GB |
| FP8 | 34.0 GB |
| INT4 | 17.0 GB |
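These footprints follow directly from the parameter count: roughly 2 bytes per parameter at BF16, 1 at FP8, and 0.5 at INT4. A minimal sketch of the arithmetic (the decimal-GB convention is an assumption; KV-cache and activations come on top, see the FAQ):

```python
PARAMS = 34e9  # Code Llama 34B parameter count

# Approximate storage per parameter at each precision
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {weights_gb:.1f} GB")
# bf16: 68.0 GB, fp8: 34.0 GB, int4: 17.0 GB
```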
GPU Compatibility Matrix
Code Llama 34B is compatible with 57% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| # | GPU | Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|---|---|
| 1 | H20 | FP8 · 1 GPU · tensorrt-llm | 100/100 | 984.2 tok/s | 1.0 ms | 0 ms | $940 | $0.36 |
| 2 |  | FP8 · 1 GPU · tensorrt-llm | 98/100 | 1.1K tok/s | 1.0 ms | 0 ms | $4,261 | $1.54 |
| 3 |  | FP8 · 1 GPU · tensorrt-llm | 95/100 | 1.1K tok/s | 1.0 ms | 0 ms | $2,553 | $0.93 |
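The Cost/M Tokens column is derivable from monthly GPU cost and sustained throughput. A minimal sketch, assuming a 730-hour billing month and 100% utilization (both assumptions are ours; real utilization will be lower, raising the effective per-token cost):

```python
def cost_per_million_tokens(monthly_cost_usd: float, tok_per_s: float) -> float:
    """USD per million generated tokens at full, sustained utilization."""
    seconds_per_month = 730 * 3600  # 730-hour cloud billing month (assumed)
    tokens_per_month = tok_per_s * seconds_per_month
    return monthly_cost_usd / (tokens_per_month / 1e6)

# H20 at FP8: $940/month at 984.2 tok/s
print(f"${cost_per_million_tokens(940, 984.2):.2f}/M")  # ~$0.36/M
```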
Deployment Options
API Deployment
together · $0.78/M output tokens
Single GPU
H20 · $940/mo · min VRAM: 34 GB
Multi-GPU
RTX A6000 ×2 · 107.6 tok/s · tensor parallel (TP) · $930/mo
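The two-GPU option works because tensor parallelism shards the weight matrices across devices, so per-GPU weight memory roughly divides by the TP degree. A rough feasibility check, assuming an even split and an assumed ~4 GB per-GPU budget for KV-cache and activations:

```python
def fits_tp(weights_gb: float, tp: int, vram_gb: float, overhead_gb: float = 4.0) -> bool:
    """True if sharded weights plus an assumed per-GPU overhead fit in VRAM."""
    return weights_gb / tp + overhead_gb <= vram_gb

# BF16 weights (68 GB) across 2x RTX A6000 (48 GB each): 34 + 4 = 38 GB per GPU
print(fits_tp(68.0, tp=2, vram_gb=48.0))  # True
```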
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $0.78 | $0.78 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $0.78 | $0.78 | $8 |
Cost per 1,000 Requests
| Request size | Cost per 1K requests | Provider |
|---|---|---|
| Short (500 tok) | $0.55 | together |
| Medium (2K tok) | $2.18 | together |
| Long (8K tok) | $7.80 | together |
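These figures follow from the per-token prices above. A minimal sketch of the arithmetic; the prompt size in the example is our assumption, since the table states only output lengths:

```python
def cost_per_1k_requests(input_tok: int, output_tok: int,
                         in_price: float = 0.78, out_price: float = 0.78) -> float:
    """Estimated USD cost of 1,000 API requests; prices are $/M tokens."""
    per_request = (input_tok * in_price + output_tok * out_price) / 1e6
    return per_request * 1_000

# Example: 2K output tokens plus an assumed ~800-token prompt per request
print(f"${cost_per_1k_requests(800, 2000):.2f}")  # ~$2.18, the "Medium" row
```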
Performance Estimates
[Charts: Throughput by GPU · VRAM Breakdown (H20, FP8)]
Precision Impact
| Precision | Weights/GPU | Est. Throughput (H20) |
|---|---|---|
| BF16 | 68.0 GB |  |
| FP8 | 34.0 GB | ~984.2 tok/s |
| INT4 | 17.0 GB |  |
Quality Benchmarks
| Benchmark | Score |
|---|---|
| MMLU | 56 |
| HumanEval | 48.8 |
| GSM8K | 45 |
Capabilities
Code, math
Features
Grouped Query Attention (GQA)
Supported Frameworks
tensorrt-llm
Supported Precisions
BF16, FP8, INT4
Similar Models
| Model | Params · Arch | Quality | Cheapest API |
|---|---|---|---|
| Code Llama 13B | 13B · dense | 44 | from $0.22/M |
| Code Llama 70B | 70B · dense | 60 | from $0.90/M |
| Yi 1.5 34B | 34.4B · dense | 72 | from $0.80/M |
| Aya 23 35B | 35B · dense | 50 | from $1.50/M |
| Command R | 35B · dense | 68 | from $0.50/M |
Frequently Asked Questions
How much VRAM does Code Llama 34B need for inference?
Code Llama 34B requires approximately 68.0 GB of VRAM at BF16 precision, 34.0 GB at FP8, or 17.0 GB with INT4 quantization. Additional VRAM is needed for the KV-cache (196,608 bytes ≈ 192 KB per token) and activations (~2.0 GB).
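The per-token KV-cache figure is consistent with the architecture in the brief (48 layers, 8,192 hidden dim). A minimal sketch, assuming 8 KV heads (GQA), a 128-dim head, and 2-byte BF16 cache entries (standard for this architecture, but assumptions here):

```python
LAYERS, KV_HEADS, HEAD_DIM, BYTES = 48, 8, 128, 2  # BF16 K/V entries (assumed config)

# One K and one V vector per layer, per token
kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES
print(kv_per_token)                           # 196608 bytes (~192 KB)

# A full 100K-token context adds ~19.7 GB on top of the weights
print(f"{kv_per_token * 100_000 / 1e9:.2f} GB")
```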
What is the best GPU for Code Llama 34B?
The top recommended GPU for Code Llama 34B is the H20 using FP8 precision. It achieves approximately 984.2 tokens/sec at an estimated cost of $940/month ($0.36/M tokens). Score: 100/100.
How much does Code Llama 34B inference cost?
Code Llama 34B API inference starts from $0.78/M input tokens and $0.78/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
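A quick way to frame the API-vs-self-hosted decision is the break-even volume: the monthly token count at which a dedicated GPU becomes cheaper than API calls. A minimal sketch using the figures above (it ignores input-token pricing and operational overhead):

```python
monthly_gpu_cost = 940.0   # H20 at FP8, from the GPU recommendations
api_price_per_m = 0.78     # together, $/M output tokens

breakeven = monthly_gpu_cost / api_price_per_m
print(f"~{breakeven:.0f}M tokens/month")  # ~1205M; above this, self-hosting wins
```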