Mistral Small 24B

Mistral AI · dense · 24B parameters · 32,768-token context

Quality: 68.0
Parameters: 24B
Context Window: 32K tokens
Architecture: Dense
Best GPU: H20
Cheapest API: $0.30/M
Quality Score: 68/100

Intelligence Brief

Mistral Small 24B is a 24B-parameter dense model from Mistral AI, featuring Grouped Query Attention (GQA) across 40 layers with a 6,144-dimensional hidden state. With a 32,768-token context window, it supports tool use, structured output, code, math, and multilingual tasks. On standardized benchmarks it scores MMLU 72, HumanEval 45, and GSM8K 70. The most cost-effective API deployment is via mistral at $0.30/M output tokens; for self-hosted inference, the H20 delivers optimal throughput at $940/month.

Architecture Details

Type: Dense
Total Parameters: 24B
Active Parameters: 24B
Layers: 40
Hidden Dimension: 6,144
Attention Heads: 48
KV Heads: 8
Head Dimension: 128
Vocab Size: 32,768
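
The KV-cache figure in the next section falls out of these numbers. A minimal sketch of the arithmetic, assuming a BF16 KV cache:

    # KV-cache arithmetic from the table above; plain Python, no framework.
    # With GQA, only the 8 KV heads are cached, not all 48 query heads.
    n_layers, n_heads, n_kv_heads, head_dim = 40, 48, 8, 128
    bytes_per_elem = 2  # BF16

    queries_per_kv_head = n_heads // n_kv_heads  # 6 query heads share each KV head
    kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
    mha_bytes_per_token = 2 * n_layers * n_heads * head_dim * bytes_per_elem    # full MHA, for comparison

    print(kv_bytes_per_token)                         # 163840, matching the spec below
    print(mha_bytes_per_token // kv_bytes_per_token)  # 6x smaller thanks to GQA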

Memory Requirements

BF16 Weights: 48.0 GB
FP8 Weights: 24.0 GB
INT4 Weights: 12.0 GB
KV-Cache per Token: 163,840 bytes
Activation Estimate: 1.50 GB
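
For a rough single-GPU serving footprint, these pieces can be summed directly. A sketch under stated assumptions: BF16 KV cache with roughly 8K tokens in flight, plus the activation and overhead estimates from the VRAM breakdown further down; all inputs are estimates, not measurements:

    def vram_estimate_gb(weights_gb, tokens_in_flight,
                         kv_bytes_per_token=163_840,  # from the table above
                         activations_gb=12.0,         # serving-time estimate (see VRAM breakdown)
                         overhead_gb=1.2):
        """Rough per-GPU total: weights + KV cache + activations + overhead (decimal GB)."""
        kv_gb = tokens_in_flight * kv_bytes_per_token / 1e9
        return weights_gb + kv_gb + activations_gb + overhead_gb

    # FP8 weights (24 GB) with ~8K tokens of KV cache in flight:
    print(round(vram_estimate_gb(24.0, 8192), 1))  # ~38.5 GB, well within an H20's 96 GB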

GPU Compatibility Matrix

Mistral Small 24B is compatible with 62% of GPU configurations across 41 GPUs at 3 precision levels.

Precisions evaluated: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs):
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs):
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs):
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs):
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent

GPU Recommendations

H20 (optimal) · FP8 · 1 GPU · tensorrt-llm · Score: 100/100
Throughput: 1.1K tok/s
Latency (ITL): 1.0 ms
Est. TTFT: 0 ms
Cost/Month: $940
Cost/M Tokens: $0.34

H100 SXM (optimal) · FP8 · 1 GPU · tensorrt-llm · Score: 95/100
Throughput: 1.1K tok/s
Latency (ITL): 1.0 ms
Est. TTFT: 0 ms
Cost/Month: $1,794
Cost/M Tokens: $0.65

H100 PCIe (optimal) · FP8 · 1 GPU · tensorrt-llm · Score: 95/100
Throughput: 697.1 tok/s
Latency (ITL): 1.4 ms
Est. TTFT: 0 ms
Cost/Month: $1,794
Cost/M Tokens: $0.98
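
The configurations above assume tensorrt-llm. For a quicker start, here is a minimal offline-inference sketch with vLLM, which is also in the supported-frameworks list below; the Hugging Face model ID is an assumption, so substitute the checkpoint you actually deploy:

    from vllm import LLM, SamplingParams

    # Single-GPU FP8 inference sketch (model ID is an assumption).
    llm = LLM(model="mistralai/Mistral-Small-24B-Instruct-2501",
              quantization="fp8",
              max_model_len=32768)

    params = SamplingParams(temperature=0.2, max_tokens=256)
    outputs = llm.generate(["Explain grouped query attention in two sentences."], params)
    print(outputs[0].outputs[0].text)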

Deployment Options

API Deployment: mistral · $0.30/M output tokens

Self-Hosted (Single GPU): H20 · $940/mo · Min VRAM: 24 GB

Scale (Multi-GPU): A100 40GB SXM ×2 (tensor parallel) · 344.0 tok/s · $1,613/mo
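
For the hosted route, a minimal sketch using Mistral's official Python SDK; the model alias is an assumption, so check the provider's current model list:

    import os
    from mistralai import Mistral  # pip install mistralai

    # Hosted-API sketch (model alias is an assumption).
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    resp = client.chat.complete(
        model="mistral-small-latest",
        messages=[{"role": "user", "content": "Summarize GQA in one sentence."}],
    )
    print(resp.choices[0].message.content)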

API Pricing Comparison

Provider · Input $/M · Output $/M
mistral: $0.10 in · $0.30 out (Cheapest)
together: $0.30 in · $0.30 out

Cost Analysis

Provider · Input $/M · Output $/M · ~Monthly Cost
mistral (Best Value): $0.10 in · $0.30 out · ~$2/mo
together: $0.30 in · $0.30 out · ~$3/mo

Cost per 1,000 Requests

Short (500 tok): $0.11 via mistral
Medium (2K tok): $0.44 via mistral
Long (8K tok): $1.40 via mistral
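
These figures follow from the per-token rates. A sketch of the arithmetic, where the input/output split per request is an assumption since the page does not state the exact mix behind its figures:

    # Cost-per-1,000-requests sketch at mistral's published rates.
    IN_PRICE, OUT_PRICE = 0.10, 0.30  # $/M tokens

    def cost_per_1k_requests(input_tokens, output_tokens):
        per_request = (input_tokens * IN_PRICE + output_tokens * OUT_PRICE) / 1e6
        return 1_000 * per_request

    # Assumed even split; e.g. a 250-in/250-out "short" request:
    print(f"${cost_per_1k_requests(250, 250):.2f}")      # $0.10 per 1,000 requests
    print(f"${cost_per_1k_requests(1_000, 1_000):.2f}")  # $0.40 per 1,000 requests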

Performance Estimates

Throughput by GPU

H20: 1.1K tok/s
H100 SXM: 1.1K tok/s
H100 PCIe: 697.1 tok/s

VRAM Breakdown (H20, FP8)

Weights: 24.0 GB · KV-Cache: 1.3 GB · Activations: 12.0 GB · Overhead: 1.2 GB

Precision Impact

bf16: 48.0 GB weights/GPU
fp8: 24.0 GB weights/GPU · ~1.1K tok/s
int4: 12.0 GB weights/GPU
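
The three weight footprints are simply parameter count times bytes per parameter, in decimal gigabytes:

    # Weights footprint per precision: 24B parameters x bytes/param (decimal GB).
    params = 24e9
    for precision, bytes_per_param in {"bf16": 2, "fp8": 1, "int4": 0.5}.items():
        print(f"{precision}: {params * bytes_per_param / 1e9:.1f} GB")
    # bf16: 48.0 GB, fp8: 24.0 GB, int4: 12.0 GB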

Quality Benchmarks

Average: 71st percentile across all models
MMLU: 72.0 · Below Average (36th pctile)
HumanEval: 45.0 · Below Average (39th pctile)
GSM8K: 70.0 · Below Average (32nd pctile)
MT-Bench: 77.0 · Bottom 25% (0th pctile)

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm · ollama

Supported Precisions

BF16 (default) · FP8 · INT4
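
As a sketch of the structured-output capability, a JSON-mode request against an OpenAI-compatible endpoint, such as one started with vLLM's vllm serve; the base URL and model ID are assumptions about your deployment:

    from openai import OpenAI

    # Structured-output sketch against an OpenAI-compatible server
    # (base URL and model ID are assumptions about your deployment).
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    resp = client.chat.completions.create(
        model="mistralai/Mistral-Small-24B-Instruct-2501",
        messages=[{"role": "user",
                   "content": "Return a JSON object with keys 'name' and 'context_tokens'."}],
        response_format={"type": "json_object"},  # JSON mode
    )
    print(resp.choices[0].message.content)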

Similar Models

Codestral 22B · 22B params · dense · Quality: 63 · from $0.90/M · More expensive
Solar Pro 22B · 22B params · dense · Quality: 50 · from $0.50/M · Smaller context, Lower quality, More expensive
Gemma 2 27B · 27B params · dense · Quality: 65 · from $0.27/M · Smaller context

Frequently Asked Questions

How much VRAM does Mistral Small 24B need for inference?

Mistral Small 24B requires approximately 48.0 GB of VRAM at BF16 precision, 24.0 GB at FP8, or 12.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (163,840 bytes per token) and activations (~1.50 GB).

What is the best GPU for Mistral Small 24B?

The top recommended GPU for Mistral Small 24B is the H20 using FP8 precision. It achieves approximately 1.1K tokens/sec at an estimated cost of $940/month ($0.34/M tokens). Score: 100/100.

How much does Mistral Small 24B inference cost?

Mistral Small 24B API inference starts from $0.10/M input tokens and $0.30/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.