MPT 7B
MosaicML · dense · 6.7B parameters · 65,536-token context
Parameters
6.7B
Context Window
64K tokens
Architecture
Dense
Best GPU
A10G
Quality Score
36/100
Intelligence Brief
MPT 7B is a 6.7B-parameter dense model from MosaicML, featuring multi-head attention (MHA) with 32 layers and a 4,096-dimensional hidden size. With a 65,536-token context window, it also supports code tasks. On standardized benchmarks it scores MMLU 42, HumanEval 18, and GSM8K 28. For self-hosted inference, the A10G is the top-recommended GPU at an estimated $285/month.
Architecture Details
Memory Requirements
BF16 Weights
13.4 GB
FP8 Weights
6.7 GB
INT4 Weights
3.4 GB
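These weight footprints are simply parameter count times bytes per parameter (using decimal gigabytes); a minimal sketch, assuming the 6.7B parameter count quoted above:

```python
# Weight memory scales linearly with parameter count and bytes per parameter.
# Sketch only; real deployments also need KV-cache and activation memory.
PARAMS = 6.7e9  # MPT 7B parameter count

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9  # decimal GB, matching the figures above
    print(f"{precision}: {gb:.1f} GB")  # bf16: 13.4, fp8: 6.7, int4: 3.4
```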
GPU Compatibility Matrix
MPT 7B is compatible with 95% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
BF16 · 1 GPU · vLLM (A10G): score 100/100, 241.8 tok/s, 4.1 ms ITL, est. TTFT 1 ms, $285/mo, $0.45/M tokens
BF16 · 1 GPU · vLLM: score 100/100, 376.0 tok/s, 2.7 ms ITL, est. TTFT 0 ms, $332/mo, $0.34/M tokens
BF16 · 1 GPU · vLLM: score 100/100, 406.2 tok/s, 2.5 ms ITL, est. TTFT 0 ms, $370/mo, $0.35/M tokens
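The cost-per-million-token figures follow directly from monthly GPU cost and sustained decode throughput; a quick Python sanity check, assuming full utilisation over a 30-day month:

```python
# Cost per million tokens = monthly cost / tokens generated per month.
# Assumes 100% utilisation and a 30-day month.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def cost_per_million_tokens(monthly_cost_usd: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * SECONDS_PER_MONTH
    return monthly_cost_usd / tokens_per_month * 1e6

print(cost_per_million_tokens(285, 241.8))  # ~0.45 (A10G option)
print(cost_per_million_tokens(332, 376.0))  # ~0.34
print(cost_per_million_tokens(370, 406.2))  # ~0.35
```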
Deployment Options
API Deployment
No API pricing available
Single GPU
A10G
$285/mo
Min VRAM: 7 GB
Multi-GPU
RTX 3080 x2
470.5 tok/s
Tensor parallel (TP) · $266/mo
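A minimal vLLM launch for the two-GPU tensor-parallel option might look like the sketch below. The flag values (max_model_len, dtype) and whether MPT's custom code path still needs trust_remote_code depend on your vLLM and transformers versions, so treat this as a starting point rather than a verified config.

```python
# Sketch: serve MPT 7B with vLLM across two GPUs via tensor parallelism.
# Assumes the Hugging Face repo id "mosaicml/mpt-7b"; adjust max_model_len
# to the context length you actually need.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mosaicml/mpt-7b",
    dtype="bfloat16",
    tensor_parallel_size=2,   # shard weights across 2 GPUs (e.g. RTX 3080 x2)
    trust_remote_code=True,   # MPT ships custom modeling code on the Hub
    max_model_len=8192,
)

outputs = llm.generate(
    ["Write a Python function that reverses a string."],
    SamplingParams(max_tokens=128, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```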
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Throughput by GPU
VRAM Breakdown (A10G, BF16)
Precision Impact
bf16: 13.4 GB weights/GPU, ~241.8 tok/s
fp8: 6.7 GB weights/GPU
int4: 3.4 GB weights/GPU
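To try the INT4 footprint directly, one common route is 4-bit loading through bitsandbytes in transformers. Whether MPT's custom modeling code quantizes cleanly depends on your library versions, so the following is a sketch under those assumptions, not a validated recipe.

```python
# Sketch: load MPT 7B in 4-bit via bitsandbytes (assumes recent
# transformers + bitsandbytes; "mosaicml/mpt-7b" is the Hub repo id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # 4-bit weights, BF16 compute
)

tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # MPT uses custom modeling code
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```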
Quality Benchmarks
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy MPT 7B
Self-Hosted Infrastructure
Similar Models
DeepSeek Coder 6.7B
6.7B params · dense
Quality: 50
from $0.20/M
StarCoder2 7B
6.73B params · dense
Quality: 35
from $0.15/M
OLMo 2 7B
7B params · dense
Quality: 50
Command R 7B
7B params · dense
Quality: 68
from $0.15/M
Falcon 7B
7B params · dense
Quality: 37
from $0.15/M
Frequently Asked Questions
How much VRAM does MPT 7B need for inference?
MPT 7B requires approximately 13.4 GB of VRAM at BF16 precision, 6.7 GB at FP8, or 3.4 GB at INT4 quantization. Additional VRAM is needed for the KV cache (262,144 bytes per token) and activations (~0.80 GB).
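Putting those three pieces together gives a rough total. The sketch below reuses the figures from this answer (weights at the chosen precision, 262,144 bytes of KV cache per token, ~0.80 GB of activations) and assumes no extra framework overhead.

```python
# Rough VRAM estimate for MPT 7B = weights + KV cache + activations.
# Figures come from this page; real serving frameworks add extra overhead.
KV_BYTES_PER_TOKEN = 262_144      # bytes of KV cache per cached token
ACTIVATIONS_GB = 0.80             # approximate activation working memory
WEIGHTS_GB = {"bf16": 13.4, "fp8": 6.7, "int4": 3.4}

def vram_estimate_gb(precision: str, cached_tokens: int) -> float:
    kv_gb = cached_tokens * KV_BYTES_PER_TOKEN / 1e9
    return WEIGHTS_GB[precision] + kv_gb + ACTIVATIONS_GB

# e.g. with 4,096 tokens of context held in the KV cache:
print(round(vram_estimate_gb("bf16", 4096), 1))  # ~15.3 GB
print(round(vram_estimate_gb("int4", 4096), 1))  # ~5.3 GB
```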
What is the best GPU for MPT 7B?
The top recommended GPU for MPT 7B is the A10G using BF16 precision. It achieves approximately 241.8 tokens/sec at an estimated cost of $285/month ($0.45/M tokens). Score: 100/100.
How much does MPT 7B inference cost?
MPT 7B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.