
Llama 3.2 90B Vision

Meta · dense · 90B parameters · 131,072-token context

Quality: 50.0

Architecture Details

Type: Dense
Total Parameters: 90B
Active Parameters: 90B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 128,256

Memory Requirements

BF16 Weights: 180.0 GB
FP8 Weights: 90.0 GB
INT4 Weights: 45.0 GB
KV-Cache per Token: 327,680 bytes
Activation Estimate: 3.00 GB
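
These figures follow directly from the architecture table above: weight memory is the parameter count times bytes per parameter, and the per-token KV cache stores a key and a value vector for each layer's KV heads. A minimal sketch of the arithmetic, assuming 1 GB = 1e9 bytes and a BF16 (2-byte) KV cache, which reproduces the numbers shown:

```python
# Derive the memory figures above from the architecture parameters.
# Assumptions: 1 GB = 1e9 bytes, KV cache stored in BF16 (2 bytes/element).

params = 90e9        # total parameters
layers = 80
kv_heads = 8         # grouped-query attention: 8 KV heads, not 64
head_dim = 128

# Weight memory: parameters x bytes per parameter.
for name, bytes_per_param in [("BF16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name} weights: {params * bytes_per_param / 1e9:.1f} GB")

# KV cache per token: one key and one value vector per layer,
# each of size kv_heads * head_dim, at 2 bytes per element.
kv_bytes = 2 * layers * kv_heads * head_dim * 2
print(f"KV cache per token: {kv_bytes:,} bytes")  # 327,680
```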

Fits on (single-node)

B200 SXM at FP8
B100 SXM at FP8
GB200 NVL72 (per GPU) at FP8
GB300 NVL72 (per GPU) at FP8
H200 SXM at FP8
H100 SXM at INT4
H100 PCIe at INT4
H100 NVL at INT4
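
A rough way to reproduce this list is to compare quantized weight size plus the activation estimate and a KV-cache allowance against each GPU's memory. The sketch below uses the weight, activation, and KV figures from this page; the GPU capacities and the 8,192-token KV budget are illustrative assumptions, not figures from this page:

```python
# Rough single-GPU fit check: weights + activations + KV cache vs. capacity.
# Weight/activation/KV figures come from this page; GPU memory sizes and
# the 8,192-token KV budget are illustrative assumptions.

GPU_MEMORY_GB = {"H100 SXM": 80, "H200 SXM": 141, "B200 SXM": 192}
WEIGHTS_GB = {"BF16": 180.0, "FP8": 90.0, "INT4": 45.0}
KV_BYTES_PER_TOKEN = 327_680
ACTIVATION_GB = 3.0

def fits(gpu: str, precision: str, kv_tokens: int = 8192) -> bool:
    needed = (WEIGHTS_GB[precision] + ACTIVATION_GB
              + kv_tokens * KV_BYTES_PER_TOKEN / 1e9)
    return needed <= GPU_MEMORY_GB[gpu]

print(fits("H100 SXM", "INT4"))  # True  (~50.7 GB of 80 GB)
print(fits("H100 SXM", "FP8"))   # False (~95.7 GB of 80 GB)
print(fits("H200 SXM", "FP8"))   # True  (~95.7 GB of 141 GB)
```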

GPU Recommendations

GPU                    Rating    Config                        Score     Throughput     Cost/Month    Cost/M Tokens
B200 SXM               optimal   FP8 · 1 GPU · tensorrt-llm    100/100   560.0 tok/s    $4261         $2.90
B100 SXM               optimal   FP8 · 1 GPU · tensorrt-llm    100/100   560.0 tok/s    $4271         $2.90
GB200 NVL72 (per GPU)  optimal   FP8 · 1 GPU · tensorrt-llm    100/100   560.0 tok/s    $6169         $4.19
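
The Cost/M Tokens column follows from dividing the monthly cost by the tokens decoded at the quoted throughput. A sketch, assuming a standard 730-hour billing month at full utilization, which reproduces the figures in the table:

```python
# Implied cost per million output tokens from monthly cost and throughput.
# Assumes a 730-hour billing month (365/12 days) at 100% utilization.

def cost_per_million(cost_per_month: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 730 * 3600
    return cost_per_month / (tokens_per_month / 1e6)

print(f"${cost_per_million(4261, 560.0):.2f}")  # B200 SXM    -> $2.90
print(f"${cost_per_million(6169, 560.0):.2f}")  # GB200 NVL72 -> $4.19
```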

API Pricing Comparison

Provider     Input $/M    Output $/M    Badges
fireworks    $0.90        $0.90         Cheapest
together     $1.20        $1.20
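
For comparison with the self-hosted cost column above, a request's API cost is simply the token counts weighted by the per-million rates. A quick sketch; the token counts below are illustrative, not from this page:

```python
# API cost of a single request: per-million-token rates applied to
# input and output token counts (counts here are illustrative).

PRICES = {"fireworks": (0.90, 0.90), "together": (1.20, 1.20)}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1e6

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost('fireworks', 2000, 500):.5f}")  # $0.00225
print(f"${request_cost('together', 2000, 500):.5f}")   # $0.00300
```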

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm

Supported Precisions

BF16 (default) · FP8 · INT4
