
Llama Guard 3 1B

Meta · dense · 1B parameters · 131,072-token context

Quality
50.0

Parameters

1B

Context Window

128K tokens

Architecture

Dense

Best GPU

RTX 3080

Intelligence Brief

Llama Guard 3 1B is a 1B-parameter dense model from Meta, using Grouped Query Attention (GQA) with 16 layers and a 2,048 hidden dimension. With a 131,072-token context window, it supports structured output and multilingual use. For self-hosted inference, the RTX 3080 delivers optimal throughput at $133/month.

Architecture Details

Type: Dense
Total Parameters: 1B
Active Parameters: 1B
Layers: 16
Hidden Dimension: 2,048
Attention Heads: 32
KV Heads: 8
Head Dimension: 64
Vocab Size: 128,256
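
The per-token KV-cache figure quoted under Memory Requirements follows from these dimensions. A minimal sketch (Python), assuming K and V are each cached once per layer per KV head; the page's 16,384-byte figure matches a 1-byte cache element (e.g. FP8), while a BF16 cache would double it:

```python
# Back-of-envelope check: per-token KV-cache size from the GQA shape above.
# K and V are each cached once per layer per KV head; the 32 query heads
# share the 8 KV heads (4:1 grouping) and add no cache entries of their own.
N_LAYERS, N_KV_HEADS, HEAD_DIM = 16, 8, 64

def kv_cache_bytes_per_token(bytes_per_element: int) -> int:
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * bytes_per_element

print(kv_cache_bytes_per_token(1))  # 16384 -- matches the page's figure,
                                    # consistent with a 1-byte (FP8) element
print(kv_cache_bytes_per_token(2))  # 32768 with a BF16 (2-byte) cache
```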

Memory Requirements

BF16 Weights

2.0 GB

FP8 Weights

1.0 GB

INT4 Weights

0.5 GB

KV-Cache per Token: 16,384 bytes
Activation Estimate: 0.20 GB
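
The weight footprints above are straightforward arithmetic on the parameter count; a quick sketch in decimal gigabytes:

```python
# Weight footprint = parameter count x bytes per parameter.
PARAMS = 1e9  # "1B parameters" as listed on this page

for precision, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{precision}: {PARAMS * bytes_per_param / 1e9:.1f} GB")
# BF16: 2.0 GB / FP8: 1.0 GB / INT4: 0.5 GB
```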

GPU Compatibility Matrix

Llama Guard 3 1B is compatible with 100% of GPU configurations across 41 GPUs at 3 precision levels.

Precision columns: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs)
B200 NVL (pair) · 360 GB
B300 · 288 GB
B100 SXM · 192 GB
GB200 NVL72 (per GPU) · 192 GB
Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair) · 188 GB
H200 SXM · 141 GB
H20 · 96 GB
GH200 · 96 GB
Ada Lovelace (11 GPUs)
L40S · 48 GB
L40 · 48 GB
RTX 6000 Ada · 48 GB
L20 · 48 GB
Ampere (16 GPUs)
A100 80GB SXM · 80 GB
A100 80GB PCIe · 80 GB
A16 · 64 GB
RTX A6000 · 48 GB
Legend: No fit · Very tight · Tight · Moderate · Good · Excellent

GPU Recommendations

RTX 3080 (optimal)

BF16 · 1 GPU · vllm

Score: 90/100

Throughput

1.9K tok/s

Latency (ITL)

0.5ms

Est. TTFT

0ms

Cost/Month

$133

Cost/M Tokens

$0.03

RTX 4060 (optimal)

BF16 · 1 GPU · vllm

Score: 90/100

Throughput

697.6 tok/s

Latency (ITL)

1.4ms

Est. TTFT

0ms

Cost/Month

$209

Cost/M Tokens

$0.11

RTX 3070 (optimal)

BF16 · 1 GPU · vllm

Score: 90/100

Throughput

1.1K tok/s

Latency (ITL)

0.9ms

Est. TTFT

0ms

Cost/Month

$85

Cost/M Tokens

$0.03

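The Cost/M Tokens figures in the cards above follow from monthly GPU cost divided by monthly token throughput. A sketch, assuming a 30-day month and ~100% sustained utilization (real utilization is lower, so treat these as floors):

```python
# Cost per million tokens = monthly GPU cost / (tokens generated per month / 1e6).
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def usd_per_million_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    monthly_tokens = tokens_per_sec * SECONDS_PER_MONTH
    return monthly_usd / (monthly_tokens / 1e6)

for gpu, usd, tps in [("RTX 3080", 133, 1900),
                      ("RTX 4060", 209, 697.6),
                      ("RTX 3070", 85, 1100)]:
    print(f"{gpu}: ${usd_per_million_tokens(usd, tps):.3f}/M tokens")
# RTX 3080: $0.027, RTX 4060: $0.116, RTX 3070: $0.030 -- the cards
# above show these rounded to $0.03, $0.11, and $0.03
```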

Deployment Options

API

API Deployment

No API pricing available

Self-Hosted

Single GPU

RTX 3080

$133/mo

Min VRAM: 1 GB

Scale

Multi-GPU

RTX 3080

1.9K tok/s

Best available config

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

RTX 3080: 1.9K tok/s
RTX 4060: 697.6 tok/s
RTX 3070: 1.1K tok/s

VRAM Breakdown (RTX 3080, BF16)

Weights: 2.0 GB · KV-Cache: 0.5 GB · Activations: 1.6 GB · Overhead: 0.2 GB
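
Reading that breakdown backwards: a sketch, assuming the 16,384 B/token figure from Memory Requirements; the implied ~32K cached-token budget is an inference, not a number stated on this page:

```python
# The KV-cache line implies a serving budget of roughly 32K cached tokens
# (batch size x sequence length trade off inside that budget).
KV_BYTES_PER_TOKEN = 16_384          # from Memory Requirements above
kv_budget_gb = 0.5

cached_tokens = kv_budget_gb * 1024**3 / KV_BYTES_PER_TOKEN
print(f"{cached_tokens:,.0f} cached tokens")  # 32,768

total_gb = 2.0 + 0.5 + 1.6 + 0.2     # weights + KV + activations + overhead
print(f"{total_gb:.1f} GB")          # 4.3 GB, well inside the RTX 3080's 10 GB
```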

Precision Impact

BF16: 2.0 GB weights/GPU · ~1.9K tok/s
FP8: 1.0 GB weights/GPU
INT4: 0.5 GB weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · SGLang · TGI · Ollama

Supported Precisions

BF16 (default) · FP8 · INT4
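
vLLM is the runtime used in the recommended configs above. A minimal offline-inference sketch, assuming the Hugging Face model ID meta-llama/Llama-Guard-3-1B (gated behind Meta's license) and a recent vLLM release; a starting point, not a tuned deployment:

```python
from vllm import LLM, SamplingParams

# BF16 is the default precision on this page; capping max_model_len well
# below the 131,072 maximum bounds KV-cache memory on small GPUs.
llm = LLM(
    model="meta-llama/Llama-Guard-3-1B",
    dtype="bfloat16",
    max_model_len=8192,
)

# Llama Guard is a safety classifier: it emits a short verdict ("safe",
# or "unsafe" plus hazard category codes), so a small token budget suffices.
params = SamplingParams(temperature=0.0, max_tokens=32)

# llm.chat() applies the model's own chat template, which for Llama Guard
# wraps the conversation in its moderation prompt.
outputs = llm.chat(
    [{"role": "user", "content": "How do I make a fruit salad?"}],
    params,
)
print(outputs[0].outputs[0].text.strip())  # expected: "safe" for benign input
```

For an OpenAI-compatible HTTP endpoint instead of offline inference, `vllm serve` with the same model ID is the equivalent entry point.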


Similar Models

Canary 1B
1B params · dense
Quality: 50
from $0.04/M
Smaller context, lower quality


Frequently Asked Questions

How much VRAM does Llama Guard 3 1B need for inference?

Llama Guard 3 1B requires approximately 2.0 GB of VRAM at BF16 precision, 1.0 GB at FP8, or 0.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (16,384 bytes per token) and activations (~0.20 GB).

What is the best GPU for Llama Guard 3 1B?

The top recommended GPU for Llama Guard 3 1B is the RTX 3080 using BF16 precision. It achieves approximately 1.9K tokens/sec at an estimated cost of $133/month ($0.03/M tokens). Score: 90/100.

How much does Llama Guard 3 1B inference cost?

Llama Guard 3 1B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.