MPT 7B

MosaicML · dense · 6.7B parameters · 65,536-token context

Quality Score: 36/100

Parameters: 6.7B

Context Window: 64K tokens (65,536)

Architecture: Dense

Best GPU: A10G

Intelligence Brief

MPT 7B is a 6.7B-parameter dense model from MosaicML, featuring multi-head attention (MHA) with 32 layers and a hidden dimension of 4,096. With a 65,536-token context window, it supports code workloads. On standardized benchmarks it achieves MMLU 42.0, HumanEval 18.0, and GSM8K 28.0. For self-hosted inference, the A10G delivers optimal throughput at $285/month.

Architecture Details

Type: Dense
Total Parameters: 6.7B
Active Parameters: 6.7B
Layers: 32
Hidden Dimension: 4,096
Attention Heads: 32
KV Heads: 32
Head Dimension: 128
Vocab Size: 50,432
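
As a sanity check, the 6.7B figure can be roughly reconstructed from the table above. A minimal sketch, assuming a GPT-style block with a 4x MLP expansion and tied embeddings (MPT-7B's published design), ignoring biases and norms:

```python
# Rough reconstruction of the parameter count from the table above.
n_layers = 32
d_model = 4_096
vocab = 50_432
d_ff = 4 * d_model            # 16,384 (assumed 4x expansion)

attn = 4 * d_model ** 2       # Q, K, V, and output projections
mlp = 2 * d_model * d_ff      # up- and down-projections
embed = vocab * d_model       # token embeddings (tied with the LM head)

total = n_layers * (attn + mlp) + embed
print(f"{total / 1e9:.2f}B")  # ~6.65B, consistent with the listed 6.7B
```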

Memory Requirements

BF16 Weights: 13.4 GB

FP8 Weights: 6.7 GB

INT4 Weights: 3.4 GB

KV-Cache per Token: 262,144 bytes
Activation Estimate: 0.80 GB
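
The weight figures are just the parameter count times bytes per element, and the per-token KV-cache figure is consistent with 1-byte cache entries. A minimal sketch; the 1-byte (e.g., FP8) cache precision is my assumption, as the page does not state it:

```python
params = 6.7e9

# Weight memory scales linearly with bytes per parameter.
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 1e9:.1f} GB")
# BF16: 13.4 GB / FP8: 6.7 GB / INT4: 3.4 GB, matching the table.

# KV-cache per token: 2 (K and V) x layers x KV heads x head dim x bytes.
# With 1-byte entries this gives exactly 262,144; at BF16 (2 bytes) it
# would be 524,288, so the 1-byte cache is an inference on my part.
print(2 * 32 * 32 * 128 * 1)  # 262144
```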

GPU Compatibility Matrix

MPT 7B is compatible with 95% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions shown: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs): B200 NVL (pair) 360 GB · B300 288 GB · B100 SXM 192 GB · GB200 NVL72 (per GPU) 192 GB
Hopper (7 GPUs): H100 NVL (pair, 94 GB each) 188 GB · H200 SXM 141 GB · H20 96 GB · GH200 96 GB
Ada Lovelace (11 GPUs): L40S 48 GB · L40 48 GB · RTX 6000 Ada 48 GB · L20 48 GB
Ampere (16 GPUs): A100 80GB SXM 80 GB · A100 80GB PCIe 80 GB · A16 64 GB · RTX A6000 48 GB
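The matrix reduces to a simple budget check: weights at the chosen precision, plus KV-cache for the planned context, plus activations and overhead, against the GPU's VRAM. A minimal sketch using this page's numbers; the 90% usable-VRAM fraction is my assumption, not the site's scoring rule:

```python
def fits(vram_gb: float, weights_gb: float, context_tokens: int,
         kv_bytes_per_token: int = 262_144,
         activations_gb: float = 0.8, overhead_gb: float = 1.1,
         usable_fraction: float = 0.9) -> bool:
    """Budget check: weights + KV-cache + activations + overhead vs. usable VRAM."""
    kv_gb = context_tokens * kv_bytes_per_token / 1e9
    needed_gb = weights_gb + kv_gb + activations_gb + overhead_gb
    return needed_gb <= vram_gb * usable_fraction

# MPT 7B at BF16 (13.4 GB weights) on a 24 GB A10G:
print(fits(24, 13.4, 16_384))  # True  (~19.6 GB needed vs ~21.6 GB usable)
print(fits(24, 13.4, 65_536))  # False (the full window needs ~32.5 GB)
```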

GPU Recommendations

A10G (optimal): BF16 · 1 GPU · vLLM · score 100/100

Throughput: 241.8 tok/s · Latency (ITL): 4.1 ms · Est. TTFT: 1 ms
Cost: $285/month ($0.45/M tokens)

A30 (optimal): BF16 · 1 GPU · vLLM · score 100/100

Throughput: 376.0 tok/s · Latency (ITL): 2.7 ms · Est. TTFT: 0 ms
Cost: $332/month ($0.34/M tokens)

RTX 4090 (optimal): BF16 · 1 GPU · vLLM · score 100/100

Throughput: 406.2 tok/s · Latency (ITL): 2.5 ms · Est. TTFT: 0 ms
Cost: $370/month ($0.35/M tokens)
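
The cost-per-million-token figures above follow from monthly GPU cost divided by monthly token output. A minimal sketch reproducing them; the 730-hour month and full-utilization assumption are mine:

```python
def cost_per_million(monthly_usd: float, tok_per_s: float,
                     hours_per_month: float = 730) -> float:
    """Dollars per million tokens, assuming full utilization."""
    tokens_per_month = tok_per_s * 3600 * hours_per_month
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_million(285, 241.8):.2f}")  # $0.45 (A10G)
print(f"${cost_per_million(332, 376.0):.2f}")  # $0.34 (A30)
print(f"${cost_per_million(370, 406.2):.2f}")  # $0.35 (RTX 4090)
```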

Deployment Options

API

API Deployment: no API pricing available.

Self-Hosted

Single GPU: A10G · $285/mo · Min VRAM: 7 GB

Scale

Multi-GPU: RTX 3080 x2 · 470.5 tok/s · TP (tensor parallel) · $266/mo
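
For the self-hosted options, a minimal sketch using vLLM's offline Python API (vLLM is among the supported frameworks listed below). The model ID mosaicml/mpt-7b and the generation settings are illustrative assumptions:

```python
# Offline batch inference with vLLM, which natively supports MPT.
# tensor_parallel_size=2 mirrors the RTX 3080 x2 TP setup above;
# use 1 for the single-GPU A10G configuration.
from vllm import LLM, SamplingParams

llm = LLM(model="mosaicml/mpt-7b", tensor_parallel_size=2)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain tensor parallelism in one sentence."], params)
print(outputs[0].outputs[0].text)
```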

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

A10G: 241.8 tok/s
A30: 376.0 tok/s
RTX 4090: 406.2 tok/s

VRAM Breakdown (A10G, BF16)

Weights: 13.4 GB
KV-Cache: 8.6 GB
Activations: 6.4 GB
Overhead: 1.1 GB

Precision Impact

BF16: 13.4 GB weights/GPU · ~241.8 tok/s
FP8: 6.7 GB weights/GPU
INT4: 3.4 GB weights/GPU

Quality Benchmarks

Overall: Bottom 25% (1st percentile across all models)

MMLU: 42.0 · Bottom 25% (2nd percentile)
HumanEval: 18.0 · Bottom 25% (1st percentile)
GSM8K: 28.0 · Bottom 25% (2nd percentile)
MT-Bench: 60.0 · Bottom 25% (0th percentile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · TGI · Ollama

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

StarCoder2 7B: 6.73B params · dense · Quality: 35 · from $0.15/M · smaller context

Command R 7B: 7B params · dense · Quality: 68 · from $0.15/M · larger context, higher quality

Falcon 7B: 7B params · dense · Quality: 37 · from $0.15/M · smaller context

Frequently Asked Questions

How much VRAM does MPT 7B need for inference?

MPT 7B requires approximately 13.4 GB of VRAM at BF16 precision, 6.7 GB at FP8, or 3.4 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (262,144 bytes per token) and activations (~0.80 GB).
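
For example, serving a 16,384-token context at BF16 needs roughly 13.4 GB + 16,384 × 262,144 bytes (≈4.3 GB) + 0.8 GB ≈ 18.5 GB, which fits a 24 GB A10G with headroom.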

What is the best GPU for MPT 7B?

The top recommended GPU for MPT 7B is the A10G using BF16 precision. It achieves approximately 241.8 tokens/sec at an estimated cost of $285/month ($0.45/M tokens). Score: 100/100.

How much does MPT 7B inference cost?

MPT 7B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.