InternLM 20B
SenseTime · dense · 20B parameters · 16,384-token context window
Parameters
20B
Context Window
16K tokens
Architecture
Dense
Best GPU
H100 SXM
Intelligence Brief
InternLM 20B is a 20B-parameter dense model from SenseTime, featuring Multi-Head Attention (MHA) across 60 layers with a hidden dimension of 5,120. With a 16,384-token context window, it supports code, math, and multilingual tasks. For self-hosted inference, the H100 SXM delivers optimal throughput at an estimated $1794/month.
Architecture Details
Attention: Multi-Head Attention (MHA)
Layers: 60
Hidden Size: 5,120
Memory Requirements
BF16 Weights
40.0 GB
FP8 Weights
20.0 GB
INT4 Weights
10.0 GB
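The weight figures above follow directly from parameter count times bytes per parameter (decimal GB). A minimal sketch of that arithmetic, assuming an even 20×10⁹ parameters:

```python
# Weights-only memory estimate for a dense 20B-parameter model.
# bytes_per_param: 2 for BF16, 1 for FP8, 0.5 for INT4.
PARAMS = 20e9  # assumed: exactly 20 billion parameters

def weight_gb(bytes_per_param: float) -> float:
    """Weights-only footprint in decimal GB (1e9 bytes)."""
    return PARAMS * bytes_per_param / 1e9

for name, bpp in [("BF16", 2.0), ("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{name}: {weight_gb(bpp):.1f} GB")
# BF16: 40.0 GB, FP8: 20.0 GB, INT4: 10.0 GB
```

Note this covers weights only; the KV cache and activations add to the total, as discussed in the FAQ below.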
GPU Compatibility Matrix
InternLM 20B is compatible with 74% of the 123 GPU configurations evaluated (41 GPUs × 3 precision levels).
GPU Recommendations
FP8 · 1 GPU · tensorrt-llm · Score 100/100 · Throughput 1.1K tok/s · Latency (ITL) 1.0 ms · Est. TTFT 0 ms · $1794/mo · $0.65/M tokens
FP8 · 1 GPU · tensorrt-llm · Score 100/100 · Throughput 1.1K tok/s · Latency (ITL) 1.0 ms · Est. TTFT 0 ms · $940/mo · $0.34/M tokens
FP8 · 1 GPU · tensorrt-llm · Score 95/100 · Throughput 836.6 tok/s · Latency (ITL) 1.2 ms · Est. TTFT 0 ms · $1794/mo · $0.82/M tokens
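The cost-per-million-token figures above follow from monthly GPU cost divided by sustained throughput, assuming the GPU runs fully utilized around the clock. A minimal sketch of that arithmetic (the 30-day month is an assumption, and the listed figures appear to use slightly different throughput rounding, so results land within a cent or two):

```python
# Back-of-envelope $/M-token estimates behind the score cards above.
SECONDS_PER_MONTH = 86_400 * 30  # assumed: 30-day month, 100% utilization

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    """USD per million tokens at a given sustained throughput."""
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_m_tokens(1794, 1100):.2f}")   # ~$0.63 (listed: $0.65)
print(f"${cost_per_m_tokens(940, 1100):.2f}")    # ~$0.33 (listed: $0.34)
print(f"${cost_per_m_tokens(1794, 836.6):.2f}")  # ~$0.83 (listed: $0.82)
```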
Deployment Options
API Deployment
No API pricing available
Single GPU
H100 SXM · $1794/mo · min. VRAM: 20 GB
Multi-GPU
A100 40GB SXM ×2 (tensor parallel) · 412.1 tok/s · $1613/mo
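For a quick functional test outside the TensorRT-LLM stack benchmarked above, the model can be loaded with Hugging Face transformers. A minimal single-GPU sketch, assuming the public checkpoint ID internlm/internlm-20b (verify the repo ID before use):

```python
# Minimal BF16 load via Hugging Face transformers (functional test,
# not the optimized TensorRT-LLM serving path scored above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm-20b"  # assumed HF repo ID; verify before use
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~40 GB of weights; needs an 80 GB GPU
    device_map="auto",           # shards layers across available GPUs
                                 # (layer-wise, not the tensor parallelism
                                 # used in the multi-GPU row above)
    trust_remote_code=True,      # InternLM ships custom modeling code
)

inputs = tokenizer("Hello, InternLM!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))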
API Pricing Comparison
No API pricing data available for this model.
Performance Estimates
Charts (not reproduced here): Throughput by GPU · VRAM Breakdown (H100 SXM, FP8)
Precision Impact
BF16: 40.0 GB weights/GPU
FP8: 20.0 GB weights/GPU · ~1.1K tok/s
INT4: 10.0 GB weights/GPU
Capabilities
Features: code, math, multilingual
Supported Frameworks: TensorRT-LLM (the serving stack used in the recommendations above)
Supported Precisions: BF16, FP8, INT4
Where to Deploy InternLM 20B
Self-Hosted Infrastructure
With no API providers listed for this model, self-hosting is the practical route: a single H100 SXM at $1794/mo, or 2× A100 40GB SXM with tensor parallelism at $1613/mo.
Similar Models
InternLM3 8B: 8B params · dense · Quality 50
GigaChat 20B: 20B params · dense · Quality 50
Claude 3.5 Haiku: 20B params · dense · Quality 67 · from $4.00/M
GPT-3.5 Turbo: 20B params · dense · Quality 67 · from $1.50/M
InternLM 2.5 20B: 19.9B params · dense · Quality 50 · from $0.50/M
Frequently Asked Questions
How much VRAM does InternLM 20B need for inference?
InternLM 20B requires approximately 40.0 GB of VRAM for weights at BF16 precision, 20.0 GB at FP8, or 10.0 GB at INT4 quantization. Additional VRAM is needed for the KV cache (614,400 bytes per token, about 10 GB at the full 16,384-token context) and activations (~1.50 GB).
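A quick sanity check on the KV-cache term, using the per-token figure above. The footprint is per concurrent sequence, so batched serving multiplies it (a sketch, assuming the cache is allocated out to the full window):

```python
# KV-cache footprint from the spec sheet's per-token figure.
BYTES_PER_TOKEN = 614_400   # KV-cache bytes per token (from above)
CONTEXT_WINDOW = 16_384     # InternLM 20B's maximum context

def kv_cache_gb(num_tokens: int, batch_size: int = 1) -> float:
    """Decimal GB of KV cache for batch_size sequences of num_tokens each."""
    return batch_size * num_tokens * BYTES_PER_TOKEN / 1e9

print(f"{kv_cache_gb(CONTEXT_WINDOW):.2f} GB")     # ~10.07 GB at full context
print(f"{kv_cache_gb(CONTEXT_WINDOW, 4):.2f} GB")  # ~40.27 GB for a batch of 4
```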
What is the best GPU for InternLM 20B?
The top recommended GPU for InternLM 20B is the H100 SXM at FP8 precision, scoring 100/100. It achieves approximately 1.1K tokens/sec at an estimated $1794/month, or about $0.65 per million tokens.
How much does InternLM 20B inference cost?
InternLM 20B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.