
TinyLlama 1.1B Chat

TinyLlama · dense · 1.1B parameters · 2,048-token context

Quality: 50.0
Parameters: 1.1B
Context Window: 2K tokens
Architecture: Dense
Best GPU: RTX 3080

Intelligence Brief

TinyLlama 1.1B Chat is a 1.1B-parameter dense model from TinyLlama, featuring Grouped Query Attention (GQA) with 22 layers and a hidden dimension of 2,048. With a 2,048-token context window, it supports general text generation. For self-hosted inference, the RTX 3080 delivers optimal throughput at an estimated $133/month.

Architecture Details

Type: Dense
Total Parameters: 1.1B
Active Parameters: 1.1B
Layers: 22
Hidden Dimension: 2,048
Attention Heads: 32
KV Heads: 4
Head Dimension: 64
Vocab Size: 32,000
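
These figures can be cross-checked against the model's published configuration. A minimal sketch using the Hugging Face transformers library (the Hub ID TinyLlama/TinyLlama-1.1B-Chat-v1.0 is assumed to be the checkpoint this page describes):

    from transformers import AutoConfig

    # Assumed Hub ID for the chat checkpoint described on this page.
    cfg = AutoConfig.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    print(cfg.num_hidden_layers)     # 22 layers
    print(cfg.hidden_size)           # 2048 hidden dimension
    print(cfg.num_attention_heads)   # 32 attention heads
    print(cfg.num_key_value_heads)   # 4 KV heads (GQA)
    print(cfg.hidden_size // cfg.num_attention_heads)  # 64 head dimension
    print(cfg.vocab_size)            # 32000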

Memory Requirements

BF16 Weights: 2.2 GB
FP8 Weights: 1.1 GB
INT4 Weights: 0.6 GB
KV-Cache per Token: 11,264 bytes
Activation Estimate: 0.15 GB
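
These figures follow directly from the architecture table. A minimal sketch of the arithmetic; note that the per-token KV figure matches the standard 2 × layers × kv_heads × head_dim formula at 1 byte per element (an 8-bit KV cache), and would double to 22,528 bytes at BF16:

    # Weight memory: parameter count times bytes per parameter.
    params = 1.1e9
    for precision, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
        print(precision, round(params * bytes_per_param / 1e9, 1), "GB")  # 2.2 / 1.1 / 0.6

    # KV-cache per token: keys and values for every layer and KV head.
    layers, kv_heads, head_dim = 22, 4, 64
    kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 1  # 1 byte/element
    print(kv_bytes_per_token)  # 11264, matching the table above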

GPU Compatibility Matrix

TinyLlama 1.1B Chat is compatible with 100% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360GB · B300 288GB · B100 SXM 192GB · GB200 NVL72 (per GPU) 192GB
Hopper (7 GPUs): H100 NVL 94GB (per GPU pair) 188GB · H200 SXM 141GB · H20 96GB · GH200 96GB
Ada Lovelace (11 GPUs): L40S 48GB · L40 48GB · RTX 6000 Ada 48GB · L20 48GB
Ampere (16 GPUs): A100 80GB SXM 80GB · A100 80GB PCIe 80GB · A16 64GB · RTX A6000 48GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
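
The fit tiers in the legend can be reproduced with a simple headroom check. A sketch under assumed thresholds (the site's exact tiering rubric isn't published here, so the cutoffs below are illustrative):

    def fit_tier(vram_gb, weights_gb, kv_gb=0.4, act_gb=0.15, overhead_gb=0.2):
        """Rate how comfortably the inference footprint fits in GPU VRAM."""
        needed = weights_gb + kv_gb + act_gb + overhead_gb
        headroom = vram_gb / needed
        if headroom < 1.0: return "No fit"
        if headroom < 1.2: return "Very tight"
        if headroom < 1.5: return "Tight"
        if headroom < 2.0: return "Moderate"
        if headroom < 4.0: return "Good"
        return "Excellent"

    print(fit_tier(10, 2.2))  # RTX 3080 (10 GB) at BF16 -> "Good"
    print(fit_tier(48, 0.6))  # L40-class (48 GB) at INT4 -> "Excellent"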

GPU Recommendations

RTX 3080 (optimal)

BF16 · 1 GPU · vllm
Score: 90/100
Throughput: 1.8K tok/s
Latency (ITL): 0.6ms
Est. TTFT: 0ms
Cost/Month: $133
Cost/M Tokens: $0.03
Use this config →

RTX 4060 (optimal)

BF16 · 1 GPU · vllm
Score: 90/100
Throughput: 634.2 tok/s
Latency (ITL): 1.6ms
Est. TTFT: 0ms
Cost/Month: $209
Cost/M Tokens: $0.13
Use this config →

RTX 3070 (optimal)

BF16 · 1 GPU · vllm
Score: 90/100
Throughput: 1.0K tok/s
Latency (ITL): 1.0ms
Est. TTFT: 0ms
Cost/Month: $85
Cost/M Tokens: $0.03
Use this config →
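
The cost-per-million-tokens figures follow from the monthly GPU cost and sustained throughput. A minimal sketch of the arithmetic, assuming a fully utilized GPU running around the clock for a 30-day month:

    def cost_per_m_tokens(monthly_usd, tok_per_s, days=30):
        """Dollars per million tokens at full, continuous utilization."""
        tokens_per_month = tok_per_s * 86_400 * days
        return monthly_usd / (tokens_per_month / 1e6)

    print(round(cost_per_m_tokens(133, 1800), 2))   # 0.03  (RTX 3080)
    print(round(cost_per_m_tokens(209, 634.2), 2))  # 0.13  (RTX 4060)
    print(round(cost_per_m_tokens(85, 1000), 2))    # 0.03  (RTX 3070)

Real workloads rarely sustain full utilization, so treat these as lower bounds on cost per token.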

Deployment Options

API

API Deployment: No API pricing available

Self-Hosted

Single GPU: RTX 3080 · $133/mo · Min VRAM: 1 GB

Scale

Multi-GPU: RTX 3080 · 1.8K tok/s · Best available config

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

RTX 3080: 1.8K tok/s
RTX 4060: 634.2 tok/s
RTX 3070: 1.0K tok/s

VRAM Breakdown (RTX 3080, BF16)

Weights: 2.2 GB · KV-Cache: 0.4 GB · Activations: 1.2 GB · Overhead: 0.2 GB
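
Summing the components gives the total footprint this breakdown implies; the 1.2 GB activation figure here presumably reflects a batched serving workload rather than the 0.15 GB single-stream estimate above. A quick check, assuming the 10 GB RTX 3080:

    parts = {"weights": 2.2, "kv_cache": 0.4, "activations": 1.2, "overhead": 0.2}
    total_gb = sum(parts.values())
    print(total_gb)                # 4.0 GB total
    print(f"{total_gb / 10:.0%}")  # 40% of a 10 GB RTX 3080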

Precision Impact

bf16: 2.2 GB weights/GPU · ~1.8K tok/s
fp8: 1.1 GB weights/GPU
int4: 0.6 GB weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · ollama · llama-cpp
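
Any of these frameworks can serve the model. A minimal sketch using vLLM's offline API, assuming TinyLlama/TinyLlama-1.1B-Chat-v1.0 as the Hub checkpoint; max_model_len matches the 2,048-token context window:

    from vllm import LLM, SamplingParams

    # Assumed Hub ID for the chat checkpoint described on this page.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", max_model_len=2048)
    sampling = SamplingParams(temperature=0.7, max_tokens=256)

    outputs = llm.generate(
        ["Explain grouped query attention in one sentence."], sampling
    )
    print(outputs[0].outputs[0].text)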

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models


Canary 1B

1B params · dense

Quality: 50

from $0.04/M

Larger context · Compare →

Frequently Asked Questions

How much VRAM does TinyLlama 1.1B Chat need for inference?

TinyLlama 1.1B Chat requires approximately 2.2 GB of VRAM at BF16 precision, 1.1 GB at FP8, or 0.6 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (11,264 bytes per token) and activations (~0.15 GB).

What is the best GPU for TinyLlama 1.1B Chat?

The top recommended GPU for TinyLlama 1.1B Chat is the RTX 3080 using BF16 precision. It achieves approximately 1.8K tokens/sec at an estimated cost of $133/month ($0.03/M tokens). Score: 90/100.

How much does TinyLlama 1.1B Chat inference cost?

TinyLlama 1.1B Chat inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.