RWKV-6 14B
RWKV Foundation · hybrid · 14.1B parameters · 32,768 context
Parameters
14.1B
Context Window
32K tokens
Architecture
Dense
Best GPU
A100 40GB SXM
Cheapest API
$0.20/M
Intelligence Brief
RWKV-6 14B is a 14.1B-parameter hybrid model from RWKV Foundation. In place of standard multi-head attention it uses RWKV's recurrent linear-attention (time-mixing) mechanism, with 61 layers and a hidden dimension of 5,120. With a 32,768-token context window, it supports code and multilingual tasks. The most cost-effective API deployment is via rwkv at $0.20/M output tokens. For self-hosted inference, the A100 40GB SXM delivers optimal throughput at $807/month.
Architecture Details
Memory Requirements
BF16 Weights
28.2 GB
FP8 Weights
14.1 GB
INT4 Weights
7.0 GB
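The weight figures above follow directly from parameter count times bytes per parameter. A minimal sketch, assuming 16 bits for BF16, 8 for FP8, and 4 for INT4:

```python
def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB: parameters x bits / 8 bits-per-byte."""
    return params_billions * bits_per_param / 8

for name, bits in [("BF16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_memory_gb(14.1, bits):.1f} GB")
```

This reproduces the 28.2 / 14.1 / 7.0 GB figures in the table (the INT4 value is rounded down from 7.05 GB).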
GPU Compatibility Matrix
RWKV-6 14B is compatible with 82% of GPU configurations across 41 GPUs at 3 precision levels.
GPU Recommendations
| GPU | Config | Score | Throughput | Latency (ITL) | Est. TTFT | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|---|---|
| A100 40GB SXM | BF16 · 1 GPU · vllm | 95/100 | 268.0 tok/s | 3.7 ms | 1 ms | $807 | $1.15 |
| — | BF16 · 1 GPU · vllm | 95/100 | 132.3 tok/s | 7.6 ms | 1 ms | $465 | $1.34 |
| — | BF16 · 1 GPU · vllm | 95/100 | 119.9 tok/s | 8.3 ms | 1 ms | $399 | $1.26 |
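The Cost/M Tokens figures appear consistent with dividing monthly cost by the tokens generated at full utilization over an average Gregorian month (~30.44 days). A sketch of that arithmetic, under that full-utilization assumption:

```python
def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    """USD per million tokens, assuming the GPU streams at full throughput all month."""
    seconds_per_month = 30.44 * 24 * 3600  # average month length
    tokens_per_month = tok_per_s * seconds_per_month
    return monthly_usd / tokens_per_month * 1e6

# Top recommendation: $807/month at 268.0 tok/s -> ~$1.15/M, matching the table.
print(round(cost_per_m_tokens(807, 268.0), 2))
```

Small discrepancies in the last cent can come from the throughput values being rounded before display.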
Deployment Options
API Deployment
rwkv
$0.20/M
output tokens
Single GPU
A100 40GB SXM
$807/mo
Min VRAM: 14 GB
Multi-GPU
RTX 3090 x2
267.0 tok/s
TP · $361/mo
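The dual-RTX-3090 option works because tensor parallelism shards the weight matrices across GPUs, so each card holds roughly half the 28.2 GB of BF16 weights. A minimal sketch of that split:

```python
def weights_per_gpu_gb(total_weights_gb: float, tp_degree: int) -> float:
    """Tensor parallelism splits weight storage roughly evenly across GPUs."""
    return total_weights_gb / tp_degree

print(weights_per_gpu_gb(28.2, 2))  # 14.1 GB per card, within a 24 GB RTX 3090
```

In practice each GPU also needs headroom for activations and communication buffers, so the real per-card footprint sits somewhat above this lower bound.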
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| rwkv | $0.20 | $0.20 | Cheapest |
Cost Analysis
| Provider | Input $/M | Output $/M | ~Monthly Cost |
|---|---|---|---|
| rwkv (Best Value) | $0.20 | $0.20 | $2 |
Cost per 1,000 Requests
Short (500 tok)
$0.14
via rwkv
Medium (2K tok)
$0.56
via rwkv
Long (8K tok)
$2.00
via rwkv
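At a flat $0.20/M for both input and output, per-request cost scales linearly with token count. A minimal sketch of that arithmetic; note the figures above run somewhat higher than output-only math, which suggests they also account for prompt tokens:

```python
def cost_per_1k_requests(tokens_per_request: int, price_per_m: float = 0.20) -> float:
    """USD for 1,000 requests at a flat per-token price (output tokens only)."""
    return 1000 * tokens_per_request * price_per_m / 1e6

print(cost_per_1k_requests(500))  # output-only lower bound for the "Short" tier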
Performance Estimates
Throughput by GPU
VRAM Breakdown (A100 40GB SXM, BF16)
Precision Impact
bf16
28.2 GB
weights/GPU
~268.0 tok/s
fp8
14.1 GB
weights/GPU
int4
7.0 GB
weights/GPU
Capabilities
Features
Supported Frameworks
Supported Precisions
Where to Deploy RWKV-6 14B
Similar Models
Phi 3 Medium 14B
14B params · dense
Quality: 76
Nekomata 14B
14B params · dense
Quality: 50
Qwen 1.5 MoE A2.7B
14.3B params · moe
Quality: 50
Phi-4
14.7B params · dense
Quality: 73
from $0.14/M
Qwen 2.5 Coder 14B
14.7B params · dense
Quality: 50
from $0.30/M
Frequently Asked Questions
How much VRAM does RWKV-6 14B need for inference?
RWKV-6 14B requires approximately 28.2 GB of VRAM at BF16 precision, 14.1 GB at FP8, or 7.0 GB at INT4 quantization. Because RWKV's recurrent architecture keeps a fixed-size state rather than a growing KV-cache (0 bytes per token), the only additional VRAM needed beyond the weights is for activations (~1.00 GB), regardless of context length.
What is the best GPU for RWKV-6 14B?
The top recommended GPU for RWKV-6 14B is the A100 40GB SXM using BF16 precision. It achieves approximately 268.0 tokens/sec at an estimated cost of $807/month ($1.15/M tokens). Score: 95/100.
How much does RWKV-6 14B inference cost?
RWKV-6 14B API inference starts from $0.20/M input tokens and $0.20/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.
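One way to frame the API-vs-self-host decision: at $0.20/M tokens, the $807/month A100 recommendation only pays for itself above roughly 4 billion tokens per month. A sketch, assuming the flat $0.20/M rate and ignoring input/output price differences (they are equal here):

```python
def breakeven_tokens_m(monthly_gpu_usd: float, api_price_per_m: float) -> float:
    """Monthly token volume (in millions) at which self-hosting matches the API bill."""
    return monthly_gpu_usd / api_price_per_m

print(breakeven_tokens_m(807, 0.20))  # ~4035 M tokens, i.e. ~4B tokens/month
```

Below that volume the API is cheaper on raw token price; self-hosting may still win on latency, privacy, or guaranteed capacity.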