Code Llama 34B

Meta · dense · 34B parameters · 100,000-token context

Quality
55.0

Parameters

34B

Context Window

100K tokens

Architecture

Dense

Best GPU

H20

Cheapest API

$0.78/M

Quality Score

55/100

Intelligence Brief

Code Llama 34B is a 34B-parameter dense model from Meta, featuring Grouped Query Attention (GQA) with 48 layers and a hidden dimension of 8,192. With a 100,000-token context window, it targets code and math workloads. On standardized benchmarks it achieves MMLU 56.0, HumanEval 48.8, and GSM8K 45.0. The most cost-effective API deployment is via together at $0.78/M output tokens; for self-hosted inference, the H20 delivers optimal throughput at $940/month.

Architecture Details

Type: Dense
Total Parameters: 34B
Active Parameters: 34B
Layers: 48
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 32,016

Memory Requirements

BF16 Weights

68.0 GB

FP8 Weights

34.0 GB

INT4 Weights

17.0 GB

KV-Cache per Token: 196,608 bytes (192 KB)
Activation Estimate: 2.00 GB
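
These figures follow directly from the architecture table: weight memory scales with bytes per parameter, and the per-token KV-cache is 2 (K and V) × layers × KV heads × head dimension × bytes per value. A minimal sketch in Python, using decimal GB to match the table above:

```python
# VRAM estimator derived from the architecture table above.
# Bytes per parameter: BF16 = 2.0, FP8 = 1.0, INT4 = 0.5.

PARAMS = 34e9     # total parameters
LAYERS = 48
KV_HEADS = 8      # GQA: 8 KV heads shared across 64 query heads
HEAD_DIM = 128

def weight_memory_gb(bytes_per_param: float) -> float:
    """Weights only; KV-cache, activations, and overhead come on top."""
    return PARAMS * bytes_per_param / 1e9

def kv_cache_bytes_per_token(bytes_per_value: float = 2.0) -> int:
    """One K and one V vector per layer, KV_HEADS * HEAD_DIM values each."""
    return int(2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value)

for name, bpp in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{name}: {weight_memory_gb(bpp):.1f} GB")   # 68.0 / 34.0 / 17.0
print(kv_cache_bytes_per_token())                      # 196608
```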

GPU Compatibility Matrix

Code Llama 34B is compatible with 57% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL 94GB (per GPU pair) 188 GB · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
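
Underneath, the matrix is a capacity check: weights at the chosen precision plus KV-cache, activations, and runtime overhead must fit in a GPU's VRAM. A sketch using the FP8 component sizes from the VRAM breakdown further down this page; the headroom thresholds here are illustrative assumptions, not the site's exact rating rules:

```python
# Hedged sketch of a GPU fit check. Component sizes default to the
# FP8 figures from the "VRAM Breakdown (H20, FP8)" section below;
# the 15% headroom cutoff is an assumption for illustration.

def fit_rating(vram_gb: float, weights_gb: float, kv_gb: float = 1.6,
               activations_gb: float = 16.0, overhead_gb: float = 1.7) -> str:
    needed = weights_gb + kv_gb + activations_gb + overhead_gb
    if needed > vram_gb:
        return "No fit"
    headroom = (vram_gb - needed) / vram_gb
    return "Tight" if headroom < 0.15 else "Good"

print(fit_rating(96.0, 34.0))   # H20 at FP8 -> "Good" (~53 GB of 96 GB)
print(fit_rating(48.0, 34.0))   # 48 GB cards at FP8 -> "No fit"
```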

GPU Recommendations

H20 (optimal)
FP8 · 1 GPU · tensorrt-llm · score 100/100
Throughput: 984.2 tok/s
Latency (ITL): 1.0 ms
Est. TTFT: 0 ms
Cost/Month: $940
Cost/M Tokens: $0.36

B200 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm · score 98/100
Throughput: 1.1K tok/s
Latency (ITL): 1.0 ms
Est. TTFT: 0 ms
Cost/Month: $4,261
Cost/M Tokens: $1.54

H200 SXM (optimal)
FP8 · 1 GPU · tensorrt-llm · score 95/100
Throughput: 1.1K tok/s
Latency (ITL): 1.0 ms
Est. TTFT: 0 ms
Cost/Month: $2,553
Cost/M Tokens: $0.93

Deployment Options

API

API Deployment: together · $0.78/M output tokens
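
together exposes an OpenAI-compatible endpoint, so the standard openai client works against it. A hedged example; the model slug below is an assumption, so check the provider's current catalog:

```python
# Calling Code Llama 34B through together's OpenAI-compatible API.
# The model id is an assumed slug; verify it against the provider catalog.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # placeholder
)

resp = client.chat.completions.create(
    model="codellama/CodeLlama-34b-Instruct-hf",  # assumed model id
    messages=[{"role": "user", "content": "Write binary search in Python."}],
    max_tokens=512,
    temperature=0.2,
)
print(resp.choices[0].message.content)
```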

Self-Hosted

Single GPU: H20 · $940/mo · Min VRAM: 34 GB

Scale

Multi-GPU: RTX A6000 ×2 · 107.6 tok/s · tensor parallel (TP) · $930/mo
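
For the multi-GPU option, tensor parallelism splits each layer's weight matrices across GPUs. A minimal sketch with vLLM, one of the supported frameworks listed below; the Hugging Face model id and the context length are assumptions to adjust for your checkpoint:

```python
# Sketch: two-GPU tensor-parallel inference with vLLM.
# Assumed model id; pick a context length that keeps the KV-cache
# within the remaining VRAM on 2 x 48 GB cards.

from vllm import LLM, SamplingParams

llm = LLM(
    model="codellama/CodeLlama-34b-hf",  # assumed Hugging Face id
    tensor_parallel_size=2,              # split weights across 2 GPUs
    max_model_len=16384,
)

outputs = llm.generate(
    ["def quicksort(arr):"],
    SamplingParams(max_tokens=256, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```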

API Pricing Comparison

| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| together | $0.78 | $0.78 | Cheapest |

Cost Analysis

| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| together (Best Value) | $0.78 | $0.78 | $8 |

Cost per 1,000 Requests

Short (500 tok)

$0.55

via together

Medium (2K tok)

$2.18

via together

Long (8K tok)

$7.80

via together
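
These tiers follow from flat per-token pricing: cost = (input tokens + output tokens) × price per token × number of requests. The input-token mix behind each tier isn't stated on this page; as a sketch, the long tier is consistent with an assumed 2K input tokens per request:

```python
# Per-1,000-request cost under flat $0.78/M pricing (input = output rate).
# The 2,000-input-token assumption below is illustrative, not from the page.

PRICE_PER_M = 0.78  # USD per million tokens, together

def cost_per_1k_requests(input_tok: int, output_tok: int) -> float:
    return 1_000 * (input_tok + output_tok) * PRICE_PER_M / 1e6

print(cost_per_1k_requests(2_000, 8_000))  # 7.80 -> matches the long tier
```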

Performance Estimates

Throughput by GPU

H20
984.2 tok/s
B200 SXM
1.1K tok/s
H200 SXM
1.1K tok/s

VRAM Breakdown (H20, FP8)

Weights: 34.0 GB · KV-Cache: 1.6 GB · Activations: 16.0 GB · Overhead: 1.7 GB

Precision Impact

bf16: 68.0 GB weights/GPU
fp8: 34.0 GB weights/GPU · ~984.2 tok/s
int4: 17.0 GB weights/GPU

Quality Benchmarks

Average: 60th percentile across all models
MMLU: 56.0 · Bottom 25% (16th pctile)
HumanEval: 48.8 · Below Average (46th pctile)
GSM8K: 45.0 · Bottom 25% (14th pctile)
MT-Bench: 68.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm · ollama

Supported Precisions

BF16 (default) · FP8 · INT4

Similar Models

Code Llama 13B

13B params · dense

Quality: 44

from $0.22/M

Smaller context · Lower quality · Cheaper · Smaller model

Code Llama 70B

70B params · dense

Quality: 60

from $0.90/M

Smaller context · Larger model

Yi 1.5 34B

34.4B params · dense

Quality: 72

from $0.80/M

Larger context · Higher quality

Aya 23 35B

35B params · dense

Quality: 50

from $1.50/M

More expensive

Command R

35B params · dense

Quality: 68

from $0.50/M

Higher quality · Cheaper

Frequently Asked Questions

How much VRAM does Code Llama 34B need for inference?

Code Llama 34B requires approximately 68.0 GB of VRAM at BF16 precision, 34.0 GB at FP8, or 17.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (196,608 bytes per token) and activations (~2.00 GB).

What is the best GPU for Code Llama 34B?

The top recommended GPU for Code Llama 34B is the H20 using FP8 precision. It achieves approximately 984.2 tokens/sec at an estimated cost of $940/month ($0.36/M tokens). Score: 100/100.

How much does Code Llama 34B inference cost?

Code Llama 34B API inference starts from $0.78/M input tokens and $0.78/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.