GLM-4 9B
Zhipu AI · dense · 9.4B parameters · 131,072-token context
Parameters: 9.4B
Context Window: 128K tokens
Architecture: Dense
Best GPU: A100 40GB SXM
Cheapest API: $0.15/M
Intelligence Brief
GLM-4 9B is a 9.4B-parameter dense model from Zhipu AI, featuring Grouped Query Attention (GQA) with 40 layers and a 4,096-dimensional hidden state. With a 131,072-token context window, it supports tool use, structured output, code, math, and multilingual tasks. The most cost-effective API deployment is via zhipu at $0.15/M output tokens. For self-hosted inference, the A100 40GB SXM delivers optimal throughput at $807/month.
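For orientation, a minimal loading sketch using Hugging Face transformers follows. The repository id `THUDM/glm-4-9b-chat` and the `trust_remote_code` requirement are assumptions based on Zhipu AI's typical release pattern, not details stated on this page; verify both against the official model card.

```python
# Minimal chat sketch for GLM-4 9B via Hugging Face transformers.
# ASSUMPTION: repo id "THUDM/glm-4-9b-chat" and trust_remote_code=True
# follow Zhipu AI's usual release pattern; check the official model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 18.8 GB of weights at BF16
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize GQA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```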
Architecture Details
Memory Requirements
BF16 weights: 18.8 GB
FP8 weights: 9.4 GB
INT4 weights: 4.7 GB
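The figures above are just parameter count × bytes per parameter. A minimal sketch of that arithmetic, using the 9.4B count from this page and standard bytes-per-parameter values:

```python
# Weight-memory arithmetic behind the figures above: params × bytes/param.
PARAMS = 9.4e9  # GLM-4 9B parameter count (from this page)

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9  # decimal GB, matching the figures above
    print(f"{precision}: {gb:.1f} GB")  # bf16: 18.8, fp8: 9.4, int4: 4.7
```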
GPU Compatibility Matrix
GLM-4 9B runs on roughly 90% of the tested configurations, spanning 41 GPUs at 3 precision levels each.
GPU Recommendations
| Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|
| BF16 · 1 GPU · vllm | 95/100 | 446.6 tok/s | 2.2 ms | 0 ms | $807 | $0.69 |
| BF16 · 1 GPU · vllm | 95/100 | 514.7 tok/s | 1.9 ms | 0 ms | $845 | $0.62 |
| BF16 · 1 GPU · vllm | 95/100 | 446.6 tok/s | 2.2 ms | 0 ms | $655 | $0.56 |
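The Cost/M Tokens column is consistent with monthly GPU cost divided by tokens generated at the quoted throughput. A quick sanity check, assuming 24/7 operation at full utilization over a 30-day month (which lands within a cent of the table's values):

```python
# Sanity check: $/M tokens ≈ monthly GPU cost / (throughput × seconds/month).
# ASSUMPTION: 24/7 operation at full utilization over a 30-day month.
SECONDS_PER_MONTH = 30 * 24 * 3600

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"${cost_per_m_tokens(807, 446.6):.2f}/M")  # ≈ $0.70, table says $0.69
print(f"${cost_per_m_tokens(845, 514.7):.2f}/M")  # ≈ $0.63, table says $0.62
print(f"${cost_per_m_tokens(655, 446.6):.2f}/M")  # ≈ $0.57, table says $0.56
```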
Deployment Options
API Deployment: zhipu · $0.15/M output tokens
Single GPU: A100 40GB SXM · $807/mo · min VRAM 9 GB
Multi-GPU: A4000 ×2 · 205.1 tok/s · TP · $323/mo
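For the single-GPU path, a minimal vLLM offline-inference sketch follows. The repository id is the same assumption as above, and `max_model_len` is capped here to leave KV-cache headroom next to 18.8 GB of BF16 weights on a 40 GB card:

```python
# Minimal vLLM offline-inference sketch for the single-GPU BF16 path.
# ASSUMPTION: repo id "THUDM/glm-4-9b-chat"; verify against the model card.
from vllm import LLM, SamplingParams

llm = LLM(
    model="THUDM/glm-4-9b-chat",
    dtype="bfloat16",
    trust_remote_code=True,
    max_model_len=8192,  # cap context to leave KV-cache headroom on 40 GB
    # For the A4000 ×2 row above, add: tensor_parallel_size=2
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain tensor parallelism in two sentences."], params)
print(outputs[0].outputs[0].text)
```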
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| zhipu | $0.15 | $0.15 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| zhipu (Best Value) | $0.15 | $0.15 | $2 |
Cost per 1,000 Requests
Short (500 tok): $0.10 via zhipu
Medium (2K tok): $0.42 via zhipu
Long (8K tok): $1.50 via zhipu
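These figures follow from token volume times the $0.15/M rates. Since this page does not state the input-token count per request, the calculator below leaves it as a parameter; output tokens alone account for most but not all of the quoted costs:

```python
# Cost per 1,000 requests at zhipu's $0.15/M input and $0.15/M output rates.
# NOTE: the input-token count per request is not stated on this page, so it
# is left as a parameter rather than guessed.
IN_PRICE, OUT_PRICE = 0.15, 0.15  # $ per million tokens

def cost_per_1k_requests(in_tok: int, out_tok: int) -> float:
    per_request = (in_tok * IN_PRICE + out_tok * OUT_PRICE) / 1e6
    return 1000 * per_request

# Output tokens alone: 500-token responses cost $0.075 per 1,000 requests,
# so the quoted $0.10 implies some input-token volume on top.
print(f"${cost_per_1k_requests(0, 500):.3f}")   # $0.075
print(f"${cost_per_1k_requests(0, 2000):.3f}")  # $0.300
print(f"${cost_per_1k_requests(0, 8000):.3f}")  # $1.200
```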
Performance Estimates
Throughput by GPU
VRAM Breakdown (A100 40GB SXM, BF16)
Precision Impact
| Precision | Weights/GPU | Est. Throughput |
|---|---|---|
| bf16 | 18.8 GB | ~446.6 tok/s |
| fp8 | 9.4 GB | — |
| int4 | 4.7 GB | — |
Capabilities
Features: tool use, structured output, code, math, multilingual
Supported Frameworks
Supported Precisions: BF16, FP8, INT4
Similar Models
| Model | Params | Architecture | Quality | From |
|---|---|---|---|---|
| ChatGLM4 9B | 9.4B | dense | 50 | — |
| Gemma 2 9B | 9.2B | dense | 68 | $0.10/M |
| Eagle 2 9B | 9B | dense | 65 | — |
| Yi 1.5 9B | 8.83B | dense | 62 | $0.20/M |
| Yi Coder 9B | 8.8B | dense | 50 | — |
Frequently Asked Questions
How much VRAM does GLM-4 9B need for inference?
GLM-4 9B requires approximately 18.8 GB of VRAM for weights at BF16 precision, 9.4 GB at FP8, or 4.7 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (40,960 bytes per token) and activations (~1 GB).
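As a sketch, that arithmetic looks like this; the 40,960 bytes/token figure is consistent with 40 layers and a 256-value KV dimension per layer at FP16 (e.g., 2 KV heads of head dimension 128, though that exact split is an assumption here):

```python
# VRAM estimate = weights + KV-cache + activations, using this page's figures.
WEIGHTS_GB = {"bf16": 18.8, "fp8": 9.4, "int4": 4.7}
KV_BYTES_PER_TOKEN = 40_960  # = 2 (K+V) × 40 layers × 256 kv-dim × 2 bytes
ACTIVATIONS_GB = 1.0

def vram_gb(precision: str, context_tokens: int) -> float:
    kv_gb = KV_BYTES_PER_TOKEN * context_tokens / 1e9
    return WEIGHTS_GB[precision] + kv_gb + ACTIVATIONS_GB

print(f"{vram_gb('bf16', 8_192):.1f} GB")    # ≈ 20.1 GB at an 8K context
print(f"{vram_gb('bf16', 131_072):.1f} GB")  # ≈ 25.2 GB at the full 128K context
```

Even at the full 131,072-token context, the BF16 total stays around 25 GB, which is why a single 40 GB A100 is the recommended fit.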
What is the best GPU for GLM-4 9B?
The top recommended GPU for GLM-4 9B is the A100 40GB SXM using BF16 precision. It achieves approximately 446.6 tokens/sec at an estimated cost of $807/month ($0.69/M tokens). Score: 95/100.
How much does GLM-4 9B inference cost?
GLM-4 9B API inference starts from $0.15/M input tokens and $0.15/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.