OpenELM 3B
Apple · dense · 3B parameters · 2,048 context
Parameters
3B
Context Window
2K tokens
Architecture
Dense
Best GPU
RTX 4070 Ti
Intelligence Brief
OpenELM 3B is a 3B-parameter dense model from Apple with 36 transformer layers, a 3,072-dimensional hidden state, and Grouped Query Attention (GQA). With a 2,048-token context window, it is suited to general text generation. For self-hosted inference, the RTX 4070 Ti is the top recommendation at an estimated $237/month.
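As a rough illustration of the model in use, here is a minimal sketch of loading it for local BF16 experimentation with Hugging Face transformers. The model ID apple/OpenELM-3B, the reuse of the Llama-2 tokenizer, and the need for trust_remote_code are assumptions about the published checkpoint, not details taken from this page.

# Minimal sketch: load OpenELM 3B for local BF16 text generation.
# Assumes the checkpoint is published as "apple/OpenELM-3B" and reuses the
# Llama-2 tokenizer; adjust IDs for your environment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-3B",
    torch_dtype=torch.bfloat16,   # BF16 weights: ~6.0 GB (see Memory Requirements)
    trust_remote_code=True,       # assumes the repo ships custom modeling code
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Write one sentence about on-device language models.", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))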
Architecture Details
Memory Requirements
BF16 Weights
6.0 GB
FP8 Weights
3.0 GB
INT4 Weights
1.5 GB
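These figures follow directly from parameter count times bytes per parameter; a quick sanity-check sketch (3.0e9 is the rounded parameter count used on this page):

# Sanity check of the weight-memory figures above: parameters × bytes per parameter.
PARAMS = 3.0e9  # rounded parameter count used on this page
BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    print(f"{precision}: {PARAMS * nbytes / 1e9:.1f} GB")
# BF16: 6.0 GB, FP8: 3.0 GB, INT4: 1.5 GB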
GPU Compatibility Matrix
OpenELM 3B is compatible with 100% of tested GPU configurations: 41 GPUs across 3 precision levels.
GPU Recommendations
BF16 · 1 GPU · vllm
100/100
score
Throughput
453.6 tok/s
Latency (ITL)
2.2ms
Est. TTFT
0ms
Cost/Month
$237
Cost/M Tokens
$0.20
BF16 · 1 GPU · vllm
100/100
score
Throughput
684.0 tok/s
Latency (ITL)
1.5ms
Est. TTFT
0ms
Cost/Month
$133
Cost/M Tokens
$0.07
BF16 · 1 GPU · vllm
100/100
score
Throughput
453.6 tok/s
Latency (ITL)
2.2ms
Est. TTFT
0ms
Cost/Month
$209
Cost/M Tokens
$0.18
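The per-million-token prices in these cards follow from sustained throughput and monthly GPU cost; a sketch of that conversion, assuming 100% utilization over a 30-day month (an assumption chosen to reproduce the figures above):

# Convert monthly GPU cost and sustained throughput into $ per million tokens.
# Assumes 100% utilization over a 30-day month.
def cost_per_million_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * 3600 * 24 * 30
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_million_tokens(237.0, 453.6):.2f}/M")  # $0.20/M, matching the first card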
Deployment Options
API Deployment
No API pricing available
Single GPU
RTX 4070 Ti
$237/mo
Min VRAM: 3 GB
Multi-GPU
RTX 4070 Ti
453.6 tok/s
Best available config
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Throughput by GPU
VRAM Breakdown (RTX 4070 Ti, BF16)
Precision Impact
bf16
6.0 GB weights/GPU
~453.6 tok/s
fp8
3.0 GB weights/GPU
int4
1.5 GB weights/GPU
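Since the recommendations above assume vLLM as the serving framework, here is a minimal serving sketch showing how the precision choice maps onto the engine. Whether vLLM's model registry covers the OpenELM architecture, and the apple/OpenELM-3B model ID, are assumptions; treat this as an illustration rather than a verified configuration.

# Minimal vLLM sketch matching the BF16 single-GPU configuration above.
# Assumes vLLM supports the OpenELM architecture and the "apple/OpenELM-3B" ID.
from vllm import LLM, SamplingParams

llm = LLM(
    model="apple/OpenELM-3B",
    trust_remote_code=True,
    dtype="bfloat16",      # ~6.0 GB of weights; lower precisions cut this per the table above
    max_model_len=2048,    # matches the 2,048-token context window
)
params = SamplingParams(max_tokens=64, temperature=0.7)
print(llm.generate(["Summarize OpenELM in one sentence."], params)[0].outputs[0].text)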
Capabilities
Features
Supported Frameworks
Supported Precisions
BF16, FP8, INT4
Where to Deploy OpenELM 3B
Self-Hosted Infrastructure
Similar Models
VILA 1.5 3B
3B params · dense
Quality: 44
from $0.08/M
StableLM Zephyr 3B
3B params · dense
Quality: 50
BTLM 3B
3B params · dense
Quality: 50
Falcon 3 3B
3B params · dense
Quality: 50
StarCoder2 3B
3.03B params · dense
Quality: 29
from $0.10/M
Frequently Asked Questions
How much VRAM does OpenELM 3B need for inference?
OpenELM 3B requires approximately 6.0 GB of VRAM at BF16 precision, 3.0 GB at FP8, or 1.5 GB with INT4 quantization. Additional VRAM is needed for the KV-cache (73,728 bytes per token) and activations (~0.30 GB).
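Putting the three components of that answer together (weights, KV-cache, activations), a back-of-the-envelope sketch using the numbers quoted above:

# Rough VRAM estimate for OpenELM 3B at BF16, from the figures in the answer above.
WEIGHTS_GB = 6.0              # BF16 weights
KV_BYTES_PER_TOKEN = 73_728   # per-token KV-cache footprint
ACTIVATIONS_GB = 0.30         # approximate activation overhead

def vram_estimate_gb(context_tokens: int, batch_size: int = 1) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens * batch_size / 1e9
    return WEIGHTS_GB + kv_gb + ACTIVATIONS_GB

print(f"{vram_estimate_gb(2048):.2f} GB")  # ~6.45 GB for a full 2,048-token context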
What is the best GPU for OpenELM 3B?
The top recommended GPU for OpenELM 3B is the RTX 4070 Ti using BF16 precision. It achieves approximately 453.6 tokens/sec at an estimated cost of $237/month ($0.20/M tokens). Score: 100/100.
How much does OpenELM 3B inference cost?
OpenELM 3B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.