SmolLM 135M
Hugging Face · dense · 0.135B parameters · 2,048-token context
Parameters
0.135B
Context Window
2K tokens
Architecture
Dense
Best GPU
B200 SXM
Intelligence Brief
SmolLM 135M is a 0.135B-parameter dense model from Hugging Face, featuring Grouped Query Attention (GQA) with 30 layers and a hidden size of 576. With a 2,048-token context window, it supports general text generation. For self-hosted inference, the B200 SXM delivers optimal throughput at an estimated $4261/month.
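If you want to verify these architecture figures locally, the sketch below reads them from the model config with Hugging Face transformers. The repo id "HuggingFaceTB/SmolLM-135M" is an assumption rather than something stated on this page; check the model card for the exact identifier.

```python
# Minimal sketch: read the architecture parameters quoted in the brief
# from the Hugging Face config. The repo id is assumed, not taken from this page.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("HuggingFaceTB/SmolLM-135M")  # assumed repo id

print(config.num_hidden_layers)        # expected: 30
print(config.hidden_size)              # expected: 576
print(config.max_position_embeddings)  # expected: 2048
# GQA means fewer key/value heads than query heads:
print(config.num_attention_heads, config.num_key_value_heads)
```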
Architecture Details
Attention
Grouped Query Attention (GQA)
Layers
30
Hidden Size
576
Memory Requirements
BF16 Weights
0.3 GB
FP8 Weights
0.1 GB
INT4 Weights
0.1 GB
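The weight figures above follow directly from parameter count times bytes per parameter. The sketch below is a rough back-of-envelope check, ignoring any per-tensor overhead, and the rounding explains why FP8 and INT4 both show as 0.1 GB.

```python
# Back-of-envelope check of the weight-memory figures:
# weight bytes ≈ parameter count × bytes per parameter.
PARAMS = 0.135e9  # 135M parameters

bytes_per_param = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in bytes_per_param.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision}: ~{gib:.2f} GB weights")
# bf16 ≈ 0.25 GB, fp8 ≈ 0.13 GB, int4 ≈ 0.06 GB, consistent with the
# rounded 0.3 / 0.1 / 0.1 GB values listed above.
```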
GPU Compatibility Matrix
SmolLM 135M is compatible with 100% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
B200 SXM · FP8 · 1 GPU · tensorrt-llm: score 83/100 · Throughput 3.5K tok/s · Latency (ITL) 0.3ms · Est. TTFT 0ms · Cost/Month $4261 · Cost/M Tokens $0.46
FP8 · 1 GPU · tensorrt-llm: score 83/100 · Throughput 3.5K tok/s · Latency (ITL) 0.3ms · Est. TTFT 0ms · Cost/Month $4271 · Cost/M Tokens $0.46
FP8 · 1 GPU · tensorrt-llm: score 83/100 · Throughput 3.5K tok/s · Latency (ITL) 0.3ms · Est. TTFT 0ms · Cost/Month $6169 · Cost/M Tokens $0.67
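The Cost/M Tokens column follows from monthly GPU cost divided by monthly token throughput. The sketch below reproduces the listed figures, assuming the GPU runs fully utilized around the clock; the site's exact amortization may differ slightly.

```python
# How Cost/M Tokens follows from Cost/Month and throughput,
# assuming full 24/7 utilization of the GPU.
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59M seconds

def cost_per_million_tokens(cost_per_month: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * SECONDS_PER_MONTH
    return cost_per_month / (tokens_per_month / 1e6)

print(cost_per_million_tokens(4261, 3500))  # ~0.47, matching the listed $0.46
print(cost_per_million_tokens(6169, 3500))  # ~0.68, matching the listed $0.67
```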
Deployment Options
API Deployment
No API pricing available
Single GPU
B200 SXM
$4261/mo
Min VRAM: <1 GB
Multi-GPU
B200 SXM
3.5K tok/s
Best available config
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Throughput by GPU
VRAM Breakdown (B200 SXM, FP8)
Precision Impact
bf16: 0.3 GB weights/GPU
fp8: 0.1 GB weights/GPU · ~3.5K tok/s
Capabilities
Features
Supported Frameworks
Supported Precisions
BF16, FP8, INT4
Where to Deploy SmolLM 135M
Self-Hosted Infrastructure
Similar Models
SmolLM 360M
0.36B params · dense
Quality: 50
Nomic Embed Text v1.5
0.137B params · dense
Quality: 50
from $0.01/M
BGE Base EN v1.5
0.11B params · dense
Quality: 50
Kokoro TTS 82M
0.082B params · dense
Quality: 50
Whisper Base
0.074B params · dense
Quality: 50
Frequently Asked Questions
How much VRAM does SmolLM 135M need for inference?
SmolLM 135M requires approximately 0.3 GB of VRAM at BF16 precision, 0.1 GB at FP8, or 0.1 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (11,520 bytes per token) and activations (~0.03 GB).
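The sketch below adds these three components up for a given context length, using only the figures quoted in this answer; it is an estimate, not a measured value.

```python
# Minimal sketch of the VRAM estimate above: weights + KV-cache
# (11,520 bytes per token, per this FAQ) + ~0.03 GB activations.
KV_BYTES_PER_TOKEN = 11520
ACTIVATIONS_GB = 0.03

def vram_gb(weights_gb: float, context_tokens: int) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens / 1024**3
    return weights_gb + kv_gb + ACTIVATIONS_GB

# Full 2,048-token context with FP8 weights (0.1 GB):
print(f"{vram_gb(0.1, 2048):.2f} GB")  # ~0.15 GB
```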
What is the best GPU for SmolLM 135M?
The top recommended GPU for SmolLM 135M is the B200 SXM using FP8 precision. It achieves approximately 3.5K tokens/sec at an estimated cost of $4261/month ($0.46/M tokens). Score: 83/100.
How much does SmolLM 135M inference cost?
SmolLM 135M inference costs vary by provider and GPU setup. On the top-ranked self-hosted configuration (B200 SXM, FP8), the estimate is $4261/month, or about $0.46 per million tokens at full utilization. Use our calculator for detailed cost estimates across all providers.