
MPT 30B

MosaicML · dense · 30B parameters · 8,192-token context

Quality
48.0

Parameters

30B

Context Window

8K tokens

Architecture

Dense

Best GPU

H20

Quality Score

48/100

Intelligence Brief

MPT 30B is a 30B-parameter dense model from MosaicML, featuring Grouped Query Attention (GQA) with 48 layers and a hidden dimension of 7,168. With an 8,192-token context window, it supports code generation. On standardized benchmarks it scores MMLU 56, HumanEval 28, and GSM8K 40. For self-hosted inference, the H20 delivers optimal throughput at $940/month.

Architecture Details

Type: Dense
Total Parameters: 30B
Active Parameters: 30B
Layers: 48
Hidden Dimension: 7,168
Attention Heads: 64
KV Heads: 8
Head Dimension: 112
Vocab Size: 50,432

Memory Requirements

BF16 Weights

60.0 GB

FP8 Weights

30.0 GB

INT4 Weights

15.0 GB

KV-Cache per Token: 344,064 bytes
Activation Estimate: 1.50 GB
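A quick back-of-envelope check of these figures from the Architecture Details table above; this is a sketch, and the 4-byte cache-entry size is an assumption inferred from the quoted per-token number (a BF16 cache would halve it):

```python
# Reproducing the memory figures above from the Architecture Details table.
PARAMS = 30e9   # total parameters (dense, so all are active)
LAYERS = 48
KV_HEADS = 8    # GQA: 8 KV heads shared by 64 query heads
HEAD_DIM = 112  # hidden dimension 7,168 / 64 attention heads

# Weight footprint scales linearly with bytes per parameter.
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: {PARAMS * bytes_per_param / 1e9:.1f} GB")
# BF16: 60.0 GB / FP8: 30.0 GB / INT4: 15.0 GB

# KV-cache per token: key + value, per layer, per KV head, per head dim.
# The quoted 344,064 bytes works out if each cache entry takes 4 bytes
# (an FP32 cache); a BF16 cache would halve it to 172,032 bytes.
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 4
print(kv_bytes_per_token)  # 344064
```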

GPU Compatibility Matrix

MPT 30B is compatible with 62% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair) · 360 GB
B300 · 288 GB
B100 SXM · 192 GB
GB200 NVL72 (per GPU) · 192 GB

Hopper (7 GPUs)
H100 NVL (94 GB each, per pair) · 188 GB
H200 SXM · 141 GB
H20 · 96 GB
GH200 · 96 GB

Ada Lovelace (11 GPUs)
L40S · 48 GB
L40 · 48 GB
RTX 6000 Ada · 48 GB
L20 · 48 GB

Ampere (16 GPUs)
A100 80GB SXM · 80 GB
A100 80GB PCIe · 80 GB
A16 · 64 GB
RTX A6000 · 48 GB

Fit legend: No fit · Very tight · Tight · Moderate · Good · Excellent

GPU Recommendations

H20 · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

1.0K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$940

Cost/M Tokens

$0.36

Use this config →
H200 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

95/100

score

Throughput

1.1K tok/s

Latency (ITL)

1.0ms

Est. TTFT

0ms

Cost/Month

$2,553

Cost/M Tokens

$0.93

Use this config →
H100 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

95/100

score

Throughput

840.8 tok/s

Latency (ITL)

1.2ms

Est. TTFT

0ms

Cost/Month

$1,794

Cost/M Tokens

$0.81

Use this config →
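Each card's cost-per-token figure follows from its monthly price and throughput under the usual calculator assumption of full utilization around the clock. A sketch of the arithmetic for the top H20 card:

```python
# $940/month at ~1.0K tok/s -> ~$0.36 per million tokens,
# assuming 100% utilization, 24/7, over a 30-day month.
cost_per_month = 940.0       # USD, H20 config above
throughput_tok_s = 1000.0    # "1.0K tok/s"

tokens_per_month = throughput_tok_s * 60 * 60 * 24 * 30  # ~2.59e9 tokens
print(f"${cost_per_month / (tokens_per_month / 1e6):.2f}/M tokens")  # $0.36
```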

Deployment Options

API

API Deployment

No API pricing available

Self-Hosted

Single GPU

H20

$940/mo

Min VRAM: 30 GB

Scale

Multi-GPU

RTX A6000 x2

108.9 tok/s

Tensor parallel (TP) · $930/mo
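For the multi-GPU option, a minimal serving sketch with vLLM (one of the frameworks listed under Capabilities below); the mosaicml/mpt-30b checkpoint id and the sampling settings are assumptions:

```python
# Hedged vLLM sketch for the two-GPU tensor-parallel setup above.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mosaicml/mpt-30b",  # assumed Hugging Face checkpoint id
    tensor_parallel_size=2,    # shard weights across both RTX A6000s
    dtype="bfloat16",          # ~60 GB of weights -> ~30 GB per GPU
)
params = SamplingParams(max_tokens=128, temperature=0.7)
out = llm.generate(["Explain tensor parallelism in one sentence."], params)
print(out[0].outputs[0].text)
```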

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

H20
1.0K tok/s
H200 SXM
1.1K tok/s
H100 SXM
840.8 tok/s

VRAM Breakdown (H20, FP8)

Weights: 30.0 GB
KV-Cache: 1.4 GB
Activations: 12.0 GB
Overhead: 1.5 GB
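Summing the components shows how much headroom this configuration leaves; a quick sanity check against the H20's 96 GB:

```python
# Sanity check: H20 (FP8) VRAM breakdown vs. the card's 96 GB.
parts = {"weights": 30.0, "kv_cache": 1.4, "activations": 12.0, "overhead": 1.5}
print(f"{sum(parts.values()):.1f} GB of 96 GB used")  # 44.9 GB -> ample headroom
# The 1.4 GB KV-cache figure is consistent with roughly 4K cached tokens
# at the 344,064-byte-per-token rate quoted earlier (344064 * 4096 ≈ 1.41 GB).
```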

Precision Impact

bf16

60.0 GB

weights/GPU

fp8

30.0 GB

weights/GPU

~1.0K tok/s

int4

15.0 GB

weights/GPU

Quality Benchmarks

Overall: Bottom 25% · 6th percentile across all models
MMLU: 56.0 · Bottom 25% (16th pctile)
HumanEval: 28.0 · Bottom 25% (8th pctile)
GSM8K: 40.0 · Bottom 25% (9th pctile)
MT-Bench: 68.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · TGI

Supported Precisions

BF16 (default) · FP8 · INT4
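As a concrete way to exercise two of these precisions, a hedged Hugging Face transformers sketch; the checkpoint id is an assumption, and bitsandbytes 4-bit stands in for INT4 (pick one of the two loads, not both):

```python
# Loading MPT 30B at the default BF16 precision, or in 4-bit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mosaicml/mpt-30b"  # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(model_id)

# BF16 (default): ~60 GB of weights, spread across available GPUs.
# Older transformers releases may additionally need trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# INT4-style loading (~15 GB of weights) via bitsandbytes 4-bit quantization.
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```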


Similar Models

Larger context, higher quality · Compare →

Gemma 4 31B-IT

31B params · dense

Quality: 77

from $0.30/M

Larger context, higher quality · Compare →

Qwen 2.5 32B

32.5B params · dense

Quality: 73

from $0.80/M


Frequently Asked Questions

How much VRAM does MPT 30B need for inference?

MPT 30B requires approximately 60.0 GB of VRAM at BF16 precision, 30.0 GB at FP8, or 15.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (344,064 bytes per token) and activations (~1.50 GB).

What is the best GPU for MPT 30B?

The top recommended GPU for MPT 30B is the H20 using FP8 precision. It achieves approximately 1.0K tokens/sec at an estimated cost of $940/month ($0.36/M tokens). Score: 100/100.

How much does MPT 30B inference cost?

MPT 30B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.