ChatGLM4 9B
Zhipu AI · dense · 9.4B parameters · 131,072-token context
Parameters
9.4B
Context Window
128K tokens
Architecture
Dense
Best GPU
A100 40GB SXM
Intelligence Brief
ChatGLM4 9B is a 9.4B-parameter dense model from Zhipu AI, featuring Grouped Query Attention (GQA) across 40 layers with a 4,096-dimensional hidden state. With a 131,072-token context window, it supports tool use, structured output, code, math, and multilingual tasks. For self-hosted inference, an A100 40GB SXM delivers optimal throughput at $807/month.
Architecture Details
Dense transformer · Grouped Query Attention (GQA) · 40 layers · 4,096 hidden dimensions
Memory Requirements
BF16 Weights
18.8 GB
FP8 Weights
9.4 GB
INT4 Weights
4.7 GB
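These weight footprints are simple arithmetic: parameter count times bytes per parameter. A minimal Python sketch of that calculation, using the 9.4B figure from the spec above:

```python
# Weight-only VRAM arithmetic for ChatGLM4 9B (decimal GB).
# KV-cache and activation overhead are accounted for separately (see FAQ).
PARAMS = 9.4e9  # parameter count from the spec above

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: {gb:.1f} GB")
# BF16: 18.8 GB, FP8: 9.4 GB, INT4: 4.7 GB
```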
GPU Compatibility Matrix
ChatGLM4 9B is compatible with 90% of the GPU configurations evaluated, spanning 41 GPUs at 3 precision levels.
GPU Recommendations
A100 40GB SXM · BF16 · 1 GPU · vLLM
Score: 95/100 · Throughput: 446.6 tok/s · Latency (ITL): 2.2 ms · Est. TTFT: 0 ms · Cost/Month: $807 · Cost/M Tokens: $0.69

BF16 · 1 GPU · vLLM
Score: 95/100 · Throughput: 514.7 tok/s · Latency (ITL): 1.9 ms · Est. TTFT: 0 ms · Cost/Month: $845 · Cost/M Tokens: $0.62

BF16 · 1 GPU · vLLM
Score: 95/100 · Throughput: 446.6 tok/s · Latency (ITL): 2.2 ms · Est. TTFT: 0 ms · Cost/Month: $655 · Cost/M Tokens: $0.56
Deployment Options
API Deployment
No API pricing available
Single GPU
A100 40GB SXM
$807/mo
Min VRAM: 9 GB
Multi-GPU
A4000 x2
205.1 tok/s
Tensor parallel · $323/mo
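The multi-GPU option shards the model with tensor parallelism, splitting each weight matrix across both cards so that two 16 GB A4000s can jointly hold the 18.8 GB of BF16 weights. In vLLM this is a one-argument change, sketched under the same repo-ID assumption as above:

```python
from vllm import LLM

# Tensor-parallel sketch: shard weights across 2 GPUs (e.g. A4000 x2).
llm = LLM(
    model="THUDM/glm-4-9b-chat",  # assumed repo ID, as above
    dtype="bfloat16",
    tensor_parallel_size=2,       # split each layer across both GPUs
    trust_remote_code=True,
)
```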
Performance Estimates
Throughput by GPU
VRAM Breakdown (A100 40GB SXM, BF16)
Precision Impact
BF16 · 18.8 GB weights/GPU · ~446.6 tok/s
FP8 · 9.4 GB weights/GPU
INT4 · 4.7 GB weights/GPU
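To realize the INT4 footprint in practice, you load a pre-quantized checkpoint rather than the BF16 weights. A hedged sketch using vLLM's AWQ path; the quantized repo name below is hypothetical:

```python
from vllm import LLM

# INT4 (AWQ) sketch: weight memory drops to ~4.7 GB, with some quality cost.
llm = LLM(
    model="THUDM/glm-4-9b-chat-awq",  # hypothetical quantized repo
    quantization="awq",               # vLLM's INT4 AWQ kernels
    dtype="float16",                  # AWQ kernels compute in fp16
    trust_remote_code=True,
)
```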
Capabilities
Features: tool use, structured output, code, math, multilingual
Supported Frameworks: vLLM
Supported Precisions: BF16, FP8, INT4
Similar Models
GLM-4 9B · 9.4B params · dense · Quality: 50 · from $0.15/M
Gemma 2 9B · 9.2B params · dense · Quality: 68 · from $0.10/M
Eagle 2 9B · 9B params · dense · Quality: 65
Yi 1.5 9B · 8.83B params · dense · Quality: 62 · from $0.20/M
Yi Coder 9B · 8.8B params · dense · Quality: 50
Frequently Asked Questions
How much VRAM does ChatGLM4 9B need for inference?
ChatGLM4 9B requires approximately 18.8 GB of VRAM at BF16 precision, 9.4 GB at FP8, or 4.7 GB with INT4 quantization. Additional VRAM is needed for the KV-cache (20,480 bytes per token) and activations (~0.80 GB).
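Putting those figures together, a quick back-of-the-envelope total at BF16 with the context filled to its 131,072-token maximum (all inputs come from the answer above):

```python
# Total VRAM estimate at BF16, full context, using the FAQ's figures.
weights_gb = 18.8                       # BF16 weights
kv_bytes_per_token = 20_480             # KV-cache cost per token
context_tokens = 131_072                # full context window
activations_gb = 0.80                   # activation overhead

kv_gb = kv_bytes_per_token * context_tokens / 1e9   # ~2.68 GB
total_gb = weights_gb + kv_gb + activations_gb
print(f"~{total_gb:.1f} GB")            # ~22.3 GB, well within a 40 GB A100
```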
What is the best GPU for ChatGLM4 9B?
The top recommended GPU for ChatGLM4 9B is the A100 40GB SXM using BF16 precision. It achieves approximately 446.6 tokens/sec at an estimated cost of $807/month ($0.69/M tokens). Score: 95/100.
How much does ChatGLM4 9B inference cost?
ChatGLM4 9B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.