TinyLlama 1.1B Chat
TinyLlama · dense · 1.1B parameters · 2,048 context
Quality: 50.0
Architecture Details
Type: Dense
Total Parameters: 1.1B
Active Parameters: 1.1B
Layers: 22
Hidden Dimension: 2,048
Attention Heads: 32
KV Heads: 4
Head Dimension: 64
Vocab Size: 32,000
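The 1.1B total can be reproduced from these dimensions. Below is a minimal sketch assuming a standard Llama-style decoder block (GQA attention plus a gated SwiGLU MLP) with untied input/output embeddings; the FFN intermediate size of 5,632 comes from TinyLlama's published config and is not listed in the table above.

```python
# Reproduce the ~1.1B parameter count from the architecture table.
# Assumptions: Llama-style block, untied embeddings, and
# intermediate_size=5632 from TinyLlama's published config
# (not listed above). Small RMSNorm weights are ignored.
layers, hidden, vocab = 22, 2048, 32_000
heads, kv_heads, head_dim = 32, 4, 64
intermediate = 5_632  # assumed from TinyLlama's config

attn = (hidden * heads * head_dim            # q_proj
        + 2 * hidden * kv_heads * head_dim   # k_proj + v_proj (GQA)
        + heads * head_dim * hidden)         # o_proj
mlp = 3 * hidden * intermediate              # gate/up/down projections
embeddings = 2 * vocab * hidden              # untied input + output embeddings

total = layers * (attn + mlp) + embeddings
print(f"{total / 1e9:.2f}B parameters")      # ~1.10B
```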
Memory Requirements
BF16 Weights: 2.2 GB
FP8 Weights: 1.1 GB
INT4 Weights: 0.6 GB
KV-Cache per Token: 11,264 bytes
Activation Estimate: 0.15 GB
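These figures follow directly from the architecture table: weight memory is the parameter count times bytes per parameter, and the listed KV-cache size equals the element count 2 × 22 layers × 4 KV heads × 64 head dim = 11,264, i.e. one byte per element (an FP8/INT8 KV cache); at BF16 it doubles to 22,528 bytes. A minimal sketch of the arithmetic:

```python
# Back-of-envelope memory math from the numbers above.
params = 1.1e9
layers, kv_heads, head_dim = 22, 4, 64

# Weights: parameter count times bytes per parameter.
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name} weights: {params * bytes_per_param / 1e9:.1f} GB")

# KV cache per token: K and V, per layer, per KV head, per head dim.
kv_elements = 2 * layers * kv_heads * head_dim   # 11,264 elements
print(f"KV cache/token: {kv_elements} bytes at 1 B/elem (FP8), "
      f"{kv_elements * 2} bytes at BF16")
```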
Fits on (single-node)
B200 SXM (BF16) · B100 SXM (BF16) · GB200 NVL72 per GPU (BF16) · GB300 NVL72 per GPU (BF16) · H200 SXM (BF16) · H100 SXM (BF16) · H100 PCIe (BF16) · H100 NVL (BF16)
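The fit check reduces to comparing weights plus full-context KV cache plus activations against a card's usable VRAM. A rough sketch below; the 80 GB (H100 SXM) and 141 GB (H200) figures are published specs, while the 10% headroom reserved for framework overhead is an assumption.

```python
# Rough single-GPU fit check: weights + full-context KV cache + activations.
def fits(vram_gb, weights_gb, kv_bytes_per_token, context, act_gb,
         headroom=0.9):  # assumed: reserve 10% for framework overhead
    need_gb = weights_gb + kv_bytes_per_token * context / 1e9 + act_gb
    return need_gb <= vram_gb * headroom

# TinyLlama at BF16: 2.2 GB weights, BF16 KV cache (22,528 B/token),
# full 2,048-token context, 0.15 GB activations.
for gpu, vram in [("H100 SXM", 80), ("H200 SXM", 141)]:
    print(gpu, fits(vram, 2.2, 22_528, 2_048, 0.15))  # True for both
```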
GPU Recommendations
GPU        Rating    Config                 Score    Throughput     Cost/Month   Cost/M Tokens
RTX 3080   optimal   BF16 · 1 GPU · vllm    90/100   1.8K tok/s     $133         $0.03
RTX 4060   optimal   BF16 · 1 GPU · vllm    90/100   634.2 tok/s    $209         $0.13
RTX 3070   optimal   BF16 · 1 GPU · vllm    90/100   1.0K tok/s     $85          $0.03
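The Cost/M Tokens column is consistent with dividing the monthly cost by tokens generated per month at the listed throughput, assuming the GPU runs at full utilization around the clock:

```python
# Cost per million tokens = monthly cost / (tokens per month / 1e6),
# assuming continuous generation at the listed throughput.
SECONDS_PER_MONTH = 30 * 24 * 3600

def cost_per_mtok(monthly_usd, tok_per_s):
    mtok_per_month = tok_per_s * SECONDS_PER_MONTH / 1e6
    return monthly_usd / mtok_per_month

print(f"{cost_per_mtok(133, 1800):.2f}")   # ~0.03 (RTX 3080)
print(f"{cost_per_mtok(209, 634.2):.2f}")  # ~0.13 (RTX 4060)
print(f"{cost_per_mtok(85, 1000):.2f}")    # ~0.03 (RTX 3070)
```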
API Pricing Comparison
No API pricing data available for this model.
Capabilities
Features
✗ Tool Use · ✗ Vision · ✗ Code · ✗ Math · ✗ Reasoning · ✗ Multilingual · ✗ Structured Output
Supported Frameworks
vllm · sglang · tgi · ollama · llama-cpp
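As a concrete starting point with the first framework in the list, here is a minimal vLLM offline-inference sketch. The Hugging Face model ID TinyLlama/TinyLlama-1.1B-Chat-v1.0 is assumed to be the chat checkpoint this page describes, and the sampling settings are arbitrary; for proper chat formatting you would apply the model's chat template rather than sending raw text.

```python
# Minimal offline inference with vLLM; one GPU is plenty for a 1.1B model.
from vllm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", dtype="bfloat16")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain grouped-query attention in one sentence."],
                       params)
print(outputs[0].outputs[0].text)
```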
Supported Precisions
BF16 (default) · FP8 · INT4