Code Llama 70B
Meta · dense · 70B parameters · 16,384-token context
Parameters
70B
Context Window
16K tokens
Architecture
Dense
Best GPU
H200 SXM
Cheapest API
$0.90/M
Quality Score
60/100
Intelligence Brief
Code Llama 70B is a 70B-parameter dense model from Meta, featuring Grouped Query Attention (GQA) with 80 layers and an 8,192 hidden dimension. Its 16,384-token context window supports code and math workloads. On standardized benchmarks it scores MMLU 62, HumanEval 53, and GSM8K 55. The most cost-effective API deployment is via together at $0.90/M output tokens; for self-hosted inference, the H200 SXM delivers the best throughput at $2553/month.
Architecture Details
Attention: Grouped Query Attention (GQA)
Layers: 80
Hidden Dimension: 8,192
Memory Requirements
BF16 Weights
140.0 GB
FP8 Weights
70.0 GB
INT4 Weights
35.0 GB
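The weight figures above follow directly from bytes per parameter times parameter count; a minimal sketch (using decimal GB, 1 GB = 10^9 bytes):

```python
# Weight-memory math behind the table above: bytes per parameter
# times parameter count. Figures use decimal GB (1 GB = 1e9 bytes).
PARAMS = 70e9  # Code Llama 70B

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")  # bf16: 140.0, fp8: 70.0, int4: 35.0
```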
GPU Compatibility Matrix
Code Llama 70B runs on 38% of the evaluated GPU configurations, spanning 41 GPUs at three precision levels (BF16, FP8, INT4).
GPU Recommendations

| Configuration | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| H200 SXM · FP8 · 1 GPU · tensorrt-llm | 100/100 | 560.0 tok/s | 1.8 ms | 0 ms | $2553 | $1.73 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 478.0 tok/s | 2.1 ms | 0 ms | $940 | $0.75 |
| FP8 · 1 GPU · tensorrt-llm | 100/100 | 478.0 tok/s | 2.1 ms | 0 ms | $2838 | $2.26 |
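The Cost/M Tokens column is consistent with dividing the monthly GPU bill by the tokens generated at the listed throughput, assuming full utilization; a sketch of that arithmetic (the 30.42-day average month is an assumption, not a stated methodology):

```python
# Reproducing the Cost/M Tokens column: monthly GPU cost divided by
# tokens generated at the listed throughput, assuming full utilization.
SECONDS_PER_MONTH = 86_400 * 30.42  # assumed average month

def cost_per_million(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_million(2553, 560.0):.2f}/M")  # ~$1.73 (H200 SXM row)
print(f"${cost_per_million(940, 478.0):.2f}/M")   # ~$0.75
print(f"${cost_per_million(2838, 478.0):.2f}/M")  # ~$2.26

# Inter-token latency is effectively the inverse of per-stream throughput:
print(f"{1000 / 560.0:.1f} ms ITL")  # ~1.8 ms
```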
Deployment Options
API Deployment: together · $0.90/M output tokens
Single GPU: H200 SXM · $2553/mo · min VRAM 70 GB
Multi-GPU: H100 SXM ×2 (tensor parallel) · 560.0 tok/s · $3587/mo
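For the API route, together exposes an OpenAI-compatible endpoint, so a standard client works; a minimal sketch, where the model identifier shown is an assumption to verify against the provider's catalog:

```python
# Minimal sketch of calling Code Llama 70B through together's
# OpenAI-compatible endpoint. The model id below is an assumption;
# check the provider's model catalog for the exact identifier.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # placeholder
)

resp = client.chat.completions.create(
    model="codellama/CodeLlama-70b-Instruct-hf",  # assumed id
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```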
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $0.90 | $0.90 | Cheapest |
| fireworks | $0.90 | $0.90 | |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost (assumes ~10M tokens/mo) |
|---|---|---|---|
| together (Best Value) | $0.90 | $0.90 | $9 |
| fireworks | $0.90 | $0.90 | $9 |
Cost per 1,000 Requests

| Request Size | Cost | Provider |
|---|---|---|
| Short (500 tok) | $0.63 | together |
| Medium (2K tok) | $2.52 | together |
| Long (8K tok) | $9.00 | together |
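These figures are consistent with pricing both input and output tokens at $0.90/M; the input/output splits in the sketch below (200/500, 800/2K, 2K/8K) reproduce the table exactly but are inferred, not stated:

```python
# General formula behind the per-1,000-requests figures above.
PRICE_IN = PRICE_OUT = 0.90  # $/M tokens via together

def cost_per_1k_requests(input_tok: int, output_tok: int) -> float:
    return 1_000 * (input_tok * PRICE_IN + output_tok * PRICE_OUT) / 1e6

# Assumed splits (input + output) that match the table:
print(f"${cost_per_1k_requests(200, 500):.2f}")    # $0.63, short
print(f"${cost_per_1k_requests(800, 2_000):.2f}")  # $2.52, medium
print(f"${cost_per_1k_requests(2_000, 8_000):.2f}")  # $9.00, long
```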
Performance Estimates
Charts: Throughput by GPU · VRAM Breakdown (H200 SXM, FP8)
Precision Impact

| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| BF16 | 140.0 GB | n/a |
| FP8 | 70.0 GB | ~560.0 tok/s |
| INT4 | 35.0 GB | n/a |
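One practical use of this table is picking the highest-fidelity precision whose weights fit a given card; a sketch, where the 20 GB headroom reserved for KV-cache and activations is an assumption:

```python
# Sketch: pick the highest-fidelity precision whose weights fit a GPU's
# VRAM, leaving headroom for KV-cache and activations (assumed 20 GB).
WEIGHTS_GB = {"bf16": 140.0, "fp8": 70.0, "int4": 35.0}

def best_precision(vram_gb: float, headroom_gb: float = 20.0) -> str | None:
    for precision in ("bf16", "fp8", "int4"):  # highest fidelity first
        if WEIGHTS_GB[precision] + headroom_gb <= vram_gb:
            return precision
    return None  # model does not fit on a single GPU at any precision

print(best_precision(141))  # H200 SXM (141 GB) -> 'fp8'
print(best_precision(80))   # 80 GB class GPU  -> 'int4'
```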
Quality Benchmarks

| Benchmark | Score |
|---|---|
| MMLU | 62 |
| HumanEval | 53 |
| GSM8K | 55 |

Capabilities
Code, math
Supported Frameworks
TensorRT-LLM
Supported Precisions
BF16, FP8, INT4
Similar Models
| Model | Parameters | Architecture | Quality | Cheapest API |
|---|---|---|---|---|
| Code Llama 34B | 34B | dense | 55 | from $0.78/M |
| Llama 2 70B | 70B | dense | 62 | from $0.90/M |
| WizardMath 70B | 70B | dense | 50 | n/a |
| Claude Sonnet 4 | undisclosed | proprietary | 86 | from $15.00/M |
| o1-mini | undisclosed | proprietary | 83 | from $12.00/M |
Frequently Asked Questions
How much VRAM does Code Llama 70B need for inference?
Code Llama 70B requires approximately 140.0 GB of VRAM at BF16 precision, 70.0 GB at FP8, or 35.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (327,680 bytes per token, ~320 KB) and activations (~2.50 GB).
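The per-token figure matches the standard KV-cache formula at FP16; the 8 KV heads and 128 head dimension below are assumptions consistent with the stated GQA architecture (80 layers, 8,192 hidden dim):

```python
# KV-cache math behind the FAQ's 327,680 bytes/token figure (FP16 cache).
LAYERS, KV_HEADS, HEAD_DIM, DTYPE_BYTES = 80, 8, 128, 2  # assumed GQA config

per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * DTYPE_BYTES  # K and V
print(per_token)  # 327680 bytes (~320 KB)

# Full 16,384-token context for a single sequence:
print(f"{per_token * 16_384 / 1e9:.1f} GB")  # ~5.4 GB
```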
What is the best GPU for Code Llama 70B?
The top recommended GPU for Code Llama 70B is the H200 SXM using FP8 precision. It achieves approximately 560.0 tokens/sec at an estimated cost of $2553/month ($1.73/M tokens). Score: 100/100.
How much does Code Llama 70B inference cost?
Code Llama 70B API inference starts from $0.90/M input tokens and $0.90/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
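As a rough rule of thumb, self-hosting pays off once monthly volume exceeds the GPU bill divided by the API price; a sketch using this page's figures, with utilization assumptions making it approximate:

```python
# Rough API vs. self-hosted break-even using the figures on this page.
API_PRICE = 0.90      # $/M tokens (together / fireworks)
GPU_MONTHLY = 2553.0  # H200 SXM, FP8

breakeven_m_tokens = GPU_MONTHLY / API_PRICE
print(f"{breakeven_m_tokens:,.0f}M tokens/month")  # ~2,837M (~2.8B)

# At the listed 560 tok/s a single stream yields ~1,472M tokens/month,
# below break-even, so at these prices the API stays cheaper per token
# unless batching raises effective throughput past that rate.
```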