
OpenELM 3B

Apple · dense · 3B parameters · 2,048 context

Quality
50.0

Parameters

3B

Context Window

2K tokens

Architecture

Dense

Best GPU

RTX 4070 Ti

Intelligence Brief

OpenELM 3B is a 3B-parameter dense model from Apple, featuring Grouped-Query Attention (GQA) across 36 layers with a hidden dimension of 3,072. With a 2,048-token context window, it is suited to general text generation. For self-hosted inference, the RTX 4070 Ti delivers optimal throughput at an estimated $237/month.

Architecture Details

Type: Dense
Total Parameters: 3B
Active Parameters: 3B
Layers: 36
Hidden Dimension: 3,072
Attention Heads: 24
KV Heads: 4
Head Dimension: 128
Vocab Size: 32,000

Memory Requirements

BF16 Weights

6.0 GB

FP8 Weights

3.0 GB

INT4 Weights

1.5 GB

KV-Cache per Token: 73,728 bytes
Activation Estimate: 0.30 GB
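
These figures follow directly from the architecture numbers: weights scale with bytes per parameter, and the per-token KV-cache is one key and one value vector per layer across the 4 KV heads. A minimal sketch of that arithmetic; the constant names and rounding are illustrative:

```python
# Rough memory arithmetic for OpenELM 3B using the numbers listed above.
PARAMS = 3e9        # total parameters (3B, dense)
LAYERS = 36
KV_HEADS = 4
HEAD_DIM = 128
BYTES_BF16 = 2      # bytes per value in BF16

def weight_gb(bytes_per_param: float) -> float:
    """Approximate weight footprint in GB at a given precision."""
    return PARAMS * bytes_per_param / 1e9

def kv_cache_bytes_per_token(dtype_bytes: int = BYTES_BF16) -> int:
    """Per-token KV-cache: a key and a value vector for every layer."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * dtype_bytes

print(f"BF16 weights: {weight_gb(2.0):.1f} GB")   # ~6.0 GB
print(f"FP8 weights:  {weight_gb(1.0):.1f} GB")   # ~3.0 GB
print(f"INT4 weights: {weight_gb(0.5):.1f} GB")   # ~1.5 GB
print(f"KV-cache per token: {kv_cache_bytes_per_token()} bytes")  # 73728
```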

GPU Compatibility Matrix

OpenELM 3B is compatible with 100% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair) · 360GB
B300 · 288GB
B100 SXM · 192GB
GB200 NVL72 (per GPU) · 192GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair) · 188GB
H200 SXM · 141GB
H20 · 96GB
GH200 · 96GB

Ada Lovelace (11 GPUs)
L40S · 48GB
L40 · 48GB
RTX 6000 Ada · 48GB
L20 · 48GB

Ampere (16 GPUs)
A100 80GB SXM · 80GB
A100 80GB PCIe · 80GB
A16 · 64GB
RTX A6000 · 48GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
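
The matrix reduces to comparing each GPU's VRAM against the model's footprint at a given precision. A hedged sketch of that fit check follows; the footprints come from the Memory Requirements section, but the utilisation cutoffs and rating labels below only mirror the legend and are not the site's actual scoring rules:

```python
# Hypothetical GPU-fit check for OpenELM 3B. Thresholds are illustrative.
WEIGHTS_GB = {"bf16": 6.0, "fp8": 3.0, "int4": 1.5}   # from Memory Requirements
RUNTIME_GB = 0.3 + 0.5                                 # activations + rough overhead

def fit_rating(gpu_vram_gb: float, precision: str) -> str:
    """Classify how comfortably the model fits on a single GPU."""
    needed = WEIGHTS_GB[precision] + RUNTIME_GB
    if needed > gpu_vram_gb:
        return "No fit"
    used = needed / gpu_vram_gb
    if used > 0.90:
        return "Very tight"
    if used > 0.75:
        return "Tight"
    if used > 0.50:
        return "Moderate"
    if used > 0.25:
        return "Good"
    return "Excellent"

print(fit_rating(12, "bf16"))   # 12 GB card (RTX 4070 Ti class) -> Moderate
print(fit_rating(48, "bf16"))   # 48 GB card (L40S class)        -> Excellent
```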

GPU Recommendations

RTX 4070 Ti (optimal)

BF16 · 1 GPU · vllm

100/100

score

Throughput

453.6 tok/s

Latency (ITL)

2.2ms

Est. TTFT

0ms

Cost/Month

$237

Cost/M Tokens

$0.20

RTX 3080 (optimal)

BF16 · 1 GPU · vllm

100/100

score

Throughput

684.0 tok/s

Latency (ITL)

1.5ms

Est. TTFT

0ms

Cost/Month

$133

Cost/M Tokens

$0.07

RTX 4070 Super (optimal)

BF16 · 1 GPU · vllm

100/100

score

Throughput

453.6 tok/s

Latency (ITL)

2.2ms

Est. TTFT

0ms

Cost/Month

$209

Cost/M Tokens

$0.18

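The cost-per-million-token figures in these cards are consistent with dividing the monthly GPU cost by the number of tokens the card could emit in a month at the quoted throughput. A quick check of that arithmetic, assuming the GPU generates tokens continuously over a 30-day month:

```python
# Back-of-envelope: monthly GPU cost + sustained throughput -> $/million tokens.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month, fully utilised

def cost_per_million_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_million_tokens(237, 453.6):.3f}/M")  # RTX 4070 Ti, listed as $0.20/M
print(f"${cost_per_million_tokens(133, 684.0):.3f}/M")  # RTX 3080, listed as $0.07/M
```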

Deployment Options

API

API Deployment

No API pricing available

Self-Hosted

Single GPU

RTX 4070 Ti

$237/mo

Min VRAM: 3 GB

Scale

Multi-GPU

RTX 4070 Ti

453.6 tok/s

Best available config

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

RTX 4070 Ti
453.6 tok/s
RTX 3080
684.0 tok/s
RTX 4070 Super
453.6 tok/s

VRAM Breakdown (RTX 4070 Ti, BF16)

Weights: 6.0 GB
KV-Cache: 1.2 GB
Activations: 2.4 GB
Overhead: 0.5 GB
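
Summing the breakdown shows why a 12 GB card such as the RTX 4070 Ti is rated a comfortable single-GPU fit, and the KV-cache line suggests room for several full-context sequences in flight. The arithmetic below is a reading of the numbers above, not a figure from the source:

```python
# Total the VRAM breakdown and compare against the RTX 4070 Ti's 12 GB.
breakdown_gb = {"weights": 6.0, "kv_cache": 1.2, "activations": 2.4, "overhead": 0.5}
total_gb = sum(breakdown_gb.values())
print(f"{total_gb:.1f} GB used of 12 GB")                     # 10.1 GB of 12 GB

# 1.2 GB of KV-cache at 73,728 bytes/token holds roughly eight
# concurrent sequences at the full 2,048-token context.
tokens_cached = 1.2e9 / 73_728
print(f"~{tokens_cached / 2048:.1f} full-context sequences")  # ~7.9
```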

Precision Impact

bf16

6.0 GB

weights/GPU

~453.6 tok/s

fp8

3.0 GB

weights/GPU

int4

1.5 GB

weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · ollama

Supported Precisions

BF16 (default) · FP8 · INT4
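
For a quick self-hosted trial with the first framework in the list, the vLLM offline API is the most compact entry point. A minimal sketch at the default BF16 precision; the Hugging Face model ID, the trust_remote_code flag, and vLLM support for this particular checkpoint are assumptions, not details from this page:

```python
# Minimal vLLM sketch for OpenELM 3B at BF16 (assumed checkpoint ID and flags).
from vllm import LLM, SamplingParams

llm = LLM(
    model="apple/OpenELM-3B",   # assumed Hugging Face checkpoint ID
    dtype="bfloat16",           # BF16 is the default precision listed above
    max_model_len=2048,         # matches the 2,048-token context window
    trust_remote_code=True,     # OpenELM repos ship custom modeling code
)
# Note: OpenELM checkpoints may need a separately specified tokenizer
# (via the tokenizer= argument) depending on the repo's contents.

outputs = llm.generate(
    ["Summarize what a 3B-parameter dense language model is good for."],
    SamplingParams(max_tokens=64, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```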

Where to Deploy OpenELM 3B

Similar Models

VILA 1.5 3B

3B params · dense

Quality: 44

from $0.08/M

Larger context, Lower quality

StarCoder2 3B

3.03B params · dense

Quality: 29

from $0.10/M

Larger context, Lower quality

Frequently Asked Questions

How much VRAM does OpenELM 3B need for inference?

OpenELM 3B requires approximately 6.0 GB of VRAM at BF16 precision, 3.0 GB at FP8, or 1.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (73,728 bytes per token) and activations (~0.30 GB).

What is the best GPU for OpenELM 3B?

The top recommended GPU for OpenELM 3B is the RTX 4070 Ti using BF16 precision. It achieves approximately 453.6 tokens/sec at an estimated cost of $237/month ($0.20/M tokens). Score: 100/100.

How much does OpenELM 3B inference cost?

OpenELM 3B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.