Phi 4 Mini
Microsoft · dense · 3.8B parameters · 131,072-token context
Parameters
3.8B
Context Window
128K tokens
Architecture
Dense
Best GPU
A4000
Quality Score
70/100
Intelligence Brief
Phi 4 Mini is a 3.8B-parameter dense model from Microsoft, featuring Grouped Query Attention (GQA) with 32 layers and a 3,072-dimension hidden state. With a 131,072-token context window, it supports tool use, structured output, code generation, math, multilingual tasks, and reasoning. On standardized benchmarks it scores MMLU 72, HumanEval 55, and GSM8K 80. For self-hosted inference, the A4000 delivers optimal throughput at $161/month.
Architecture Details
Memory Requirements
BF16 Weights
7.6 GB
FP8 Weights
3.8 GB
INT4 Weights
1.9 GB
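The weight figures above follow directly from parameter count × bytes per parameter. A minimal sketch (assuming 1 GB = 10⁹ bytes and no runtime overhead, matching the table):

```python
# Sketch: raw weight footprint of a dense model at several precisions.
PARAMS = 3.8e9  # Phi 4 Mini parameter count

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_gb(params: float, precision: str) -> float:
    """Raw weight memory in GB (1 GB = 1e9 bytes, no overhead)."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for p in BYTES_PER_PARAM:
    print(f"{p}: {weight_gb(PARAMS, p):.1f} GB")
```

This reproduces the 7.6 / 3.8 / 1.9 GB figures; real deployments add KV-cache and activation memory on top.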
GPU Compatibility Matrix
Phi 4 Mini is compatible with 100% of tested GPU configurations: all 41 GPUs at all three precision levels (BF16, FP8, INT4).
GPU Recommendations
BF16 · 1 GPU · vLLM (A4000) · score 100/100
Throughput: 318.3 tok/s · ITL: 3.1 ms · Est. TTFT: ~1 ms · $161/mo · $0.19/M tokens
BF16 · 1 GPU · vLLM · score 100/100
Throughput: 509.4 tok/s · ITL: 2.0 ms · Est. TTFT: <1 ms · $304/mo · $0.23/M tokens
BF16 · 1 GPU · vLLM · score 100/100
Throughput: 358.1 tok/s · ITL: 2.8 ms · Est. TTFT: <1 ms · $237/mo · $0.25/M tokens
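The cost-per-million figures above can be reproduced from monthly price and sustained throughput. A sketch assuming 100% utilization over a 730-hour billing month:

```python
def cost_per_million(monthly_usd: float, tok_per_s: float, hours: float = 730) -> float:
    """USD per million tokens at full utilization over one billing month."""
    monthly_tokens_m = tok_per_s * hours * 3600 / 1e6  # millions of tokens/month
    return monthly_usd / monthly_tokens_m

# The three recommendation rows above:
for usd, tps in [(161, 318.3), (304, 509.4), (237, 358.1)]:
    print(f"${usd}/mo @ {tps} tok/s -> ${cost_per_million(usd, tps):.2f}/M tokens")
```

Real workloads rarely sustain 100% utilization, so effective $/M tokens scales up inversely with actual load.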
Deployment Options
API Deployment
No API pricing available
Single GPU
A4000
$161/mo
Min VRAM: 4 GB
Multi-GPU
RTX 3070 x2
454.5 tok/s
Tensor parallel · $171/mo
Performance Estimates
Throughput by GPU
VRAM Breakdown (A4000, BF16)
Precision Impact
bf16 · 7.6 GB weights/GPU · ~318.3 tok/s
fp8 · 3.8 GB weights/GPU
int4 · 1.9 GB weights/GPU
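Quantization shrinks the weights, but the KV-cache does not shrink with them unless cached keys/values are quantized separately. Using the 65,536 bytes-per-token figure from the FAQ below, a sketch of KV-cache growth with context length:

```python
KV_BYTES_PER_TOKEN = 65536  # per-token KV-cache footprint (from the FAQ)

def kv_cache_gb(context_tokens: int) -> float:
    """KV-cache size in GB for a single sequence of the given length."""
    return context_tokens * KV_BYTES_PER_TOKEN / 1e9

for n in (8_192, 32_768, 131_072):
    print(f"{n:>7} tokens -> {kv_cache_gb(n):.2f} GB")
```

At the full 128K window the KV-cache (~8.6 GB) exceeds the BF16 weights themselves, which is why long-context deployments often need more VRAM than the weight table suggests.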
Similar Models
Phi 2 · 2.7B params · dense · Quality: 50
Phi 1.5 · 1.3B params · dense · Quality: 38
Phi 1 · 1.3B params · dense · Quality: 38
Phi 3 Mini 3.8B · 3.8B params · dense · Quality: 64
Minitron 4B · 4B params · dense · Quality: 50 · from $0.06/M
Frequently Asked Questions
How much VRAM does Phi 4 Mini need for inference?
Phi 4 Mini requires approximately 7.6 GB of VRAM at BF16 precision, 3.8 GB at FP8, or 1.9 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (65,536 bytes, ~64 KB, per token) and activations (~0.5 GB).
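Putting those numbers together gives a rough single-sequence VRAM estimate (weights + KV-cache + activations; all figures from this page, 1 GB = 10⁹ bytes):

```python
def total_vram_gb(weights_gb: float, context_tokens: int,
                  kv_bytes_per_token: int = 65536, activations_gb: float = 0.5) -> float:
    """Rough VRAM estimate: weights + per-token KV-cache + activations."""
    return weights_gb + context_tokens * kv_bytes_per_token / 1e9 + activations_gb

print(f"BF16, 8K context:   {total_vram_gb(7.6, 8_192):.1f} GB")
print(f"INT4, 128K context: {total_vram_gb(1.9, 131_072):.1f} GB")
```

This is a back-of-envelope sketch: it ignores batching (KV-cache scales with concurrent sequences) and framework overhead, so treat the results as a lower bound.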
What is the best GPU for Phi 4 Mini?
The top recommended GPU for Phi 4 Mini is the A4000 using BF16 precision. It achieves approximately 318.3 tokens/sec at an estimated cost of $161/month ($0.19/M tokens). Score: 100/100.
How much does Phi 4 Mini inference cost?
Phi 4 Mini inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.