Llama 3.3 70B
Meta · dense · 70.6B parameters · 131,072 context
Quality: 84.0
Architecture Details
Type: Dense
Total Parameters: 70.6B
Active Parameters: 70.6B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 128,256
Memory Requirements
BF16 Weights: 141.2 GB
FP8 Weights: 70.6 GB
INT4 Weights: 35.3 GB
KV-Cache per Token: 327,680 bytes
Activation Estimate: 2.50 GB
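The memory figures follow directly from the architecture numbers above. A minimal sketch of the arithmetic, assuming 2 bytes per parameter for BF16, 1 for FP8, 0.5 for INT4, and a BF16 KV cache (2 bytes per element, one K and one V tensor per layer):

```python
# Back-of-the-envelope memory math for Llama 3.3 70B, using the
# architecture numbers listed above. Byte-per-parameter figures
# (BF16=2, FP8=1, INT4=0.5) and a BF16 KV cache are assumptions
# that reproduce the table's values.

params = 70.6e9
layers, kv_heads, head_dim = 80, 8, 128

weights_gb = {
    "BF16": params * 2.0 / 1e9,  # 141.2 GB
    "FP8":  params * 1.0 / 1e9,  #  70.6 GB
    "INT4": params * 0.5 / 1e9,  #  35.3 GB
}

# Per token: K and V (factor of 2), one pair per layer,
# kv_heads x head_dim elements each, 2 bytes per element.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2  # 327,680

for prec, gb in weights_gb.items():
    print(f"{prec} weights: {gb:.1f} GB")
print(f"KV cache per token: {kv_bytes_per_token:,} bytes")
```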
Fits on (single-node)
B200 SXM (BF16) · B100 SXM (BF16) · GB200 NVL72 per GPU (BF16) · GB300 NVL72 per GPU (BF16) · H200 SXM (FP8) · H100 SXM (INT4) · H100 PCIe (INT4) · H100 NVL (FP8)
GPU Recommendations
| GPU | Config | Score | Throughput | Cost/Month | Cost/M Tokens |
|---|---|---|---|---|---|
| H200 SXM (optimal) | FP8 · 1 GPU · tensorrt-llm | 100/100 | 560.0 tok/s | $2,553 | $1.73 |
| H20 (optimal) | FP8 · 1 GPU · tensorrt-llm | 100/100 | 474.0 tok/s | $940 | $0.75 |
| GH200 (optimal) | FP8 · 1 GPU · tensorrt-llm | 100/100 | 474.0 tok/s | $2,838 | $2.28 |
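The Cost/M Tokens column appears to be Cost/Month divided by the tokens generated per month at the listed throughput; assuming roughly 730 hours of fully utilized operation per month reproduces the table's values. A sketch under that assumption:

```python
# Recover Cost/M Tokens from throughput and monthly cost.
# The ~730 hours/month of full utilization is an assumption
# that matches the table's numbers, not a stated figure.

SECONDS_PER_MONTH = 730 * 3600

recommendations = [
    ("H200 SXM", 560.0, 2553),  # (gpu, tok/s, $/month)
    ("H20",      474.0,  940),
    ("GH200",    474.0, 2838),
]

for gpu, tok_per_s, usd_per_month in recommendations:
    m_tokens_per_month = tok_per_s * SECONDS_PER_MONTH / 1e6
    cost_per_m = usd_per_month / m_tokens_per_month
    print(f"{gpu}: ${cost_per_m:.2f}/M tokens")  # 1.73, 0.75, 2.28
```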
API Pricing Comparison
| Provider | Input $/M | Output $/M | Badges |
|---|---|---|---|
| groq | $0.59 | $0.79 | Cheapest |
| together | $0.88 | $0.88 | |
| fireworks | $0.90 | $0.90 | |
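For a concrete comparison, a sketch that prices a hypothetical monthly workload against these rates; the 10M-input / 2M-output token split is an assumption, not from the table:

```python
# Price a hypothetical monthly workload against the listed API rates.
# Rates are (input $/M, output $/M) from the table above; the token
# volumes are assumptions for illustration only.

providers = {
    "groq":      (0.59, 0.79),
    "together":  (0.88, 0.88),
    "fireworks": (0.90, 0.90),
}

input_m_tokens, output_m_tokens = 10, 2  # assumed workload

for name, (in_rate, out_rate) in providers.items():
    monthly = input_m_tokens * in_rate + output_m_tokens * out_rate
    print(f"{name}: ${monthly:.2f}/month")
```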
Quality Benchmarks
MMLU: 86.0
HumanEval: 60.0
GSM8K: 94.0
MT-Bench: 86.0
Capabilities
Features
✓ Tool Use · ✗ Vision · ✓ Code · ✓ Math · ✗ Reasoning · ✓ Multilingual · ✓ Structured Output
Supported Frameworks
vllm · sglang · tgi · tensorrt-llm · ollama
Supported Precisions
BF16 (default) · FP8 · INT4
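As an illustration of the listed framework and precision support, a minimal serving sketch using vLLM's Python API, assuming a single H200-class GPU, on-the-fly FP8 quantization, and the Hugging Face model id for Llama 3.3 70B Instruct; exact flags and memory headroom will vary by setup.

```python
# Minimal sketch: serve Llama 3.3 70B with vLLM (one of the listed
# frameworks) at FP8 on a single large-memory GPU, per the FP8
# single-GPU recommendation above. The model id is the Hugging Face
# repo name; everything else is an assumed configuration.

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",
    quantization="fp8",      # FP8 weights: ~70.6 GB
    tensor_parallel_size=1,  # fits one H200 SXM per the table
    max_model_len=131072,    # full 131,072-token context
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain grouped-query attention briefly."], params)
print(outputs[0].outputs[0].text)
```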