Llama 3.2 11B Vision

Meta · dense · 11B parameters · 131,072-token context

Quality
50.0

Parameters

11B

Context Window

128K tokens

Architecture

Dense

Best GPU

A100 40GB SXM

Cheapest API

$0.18/M

Intelligence Brief

Llama 3.2 11B Vision is an 11B-parameter dense model from Meta, featuring Grouped Query Attention (GQA) with 40 layers and a 4,096 hidden dimension. With a 131,072-token context window, it supports tool use, vision, structured output, code, math, and multilingual tasks. The most cost-effective API deployment is via together at $0.18/M output tokens; for self-hosted inference, an A100 40GB SXM delivers optimal throughput at $807/month.

Architecture Details

Type: Dense
Total Parameters: 11B
Active Parameters: 11B
Layers: 40
Hidden Dimension: 4,096
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
Vocab Size: 128,256

Memory Requirements

BF16 Weights

22.0 GB

FP8 Weights

11.0 GB

INT4 Weights

5.5 GB

KV-Cache per Token: 163,840 bytes
Activation Estimate: 1.00 GB
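
The per-token KV-cache figure follows directly from the attention geometry above (40 layers, 8 KV heads, 128 head dimension, K and V stored in BF16). A minimal sketch in Python; all numbers come from the architecture table, and GB here means decimal gigabytes:

```python
# KV-cache and weight arithmetic for Llama 3.2 11B Vision,
# using the architecture numbers quoted above.

BYTES_BF16 = 2

params = 11e9       # total parameters
layers = 40
kv_heads = 8
head_dim = 128
context = 131_072   # full context window

# Weights: one BF16 value per parameter -> ~22.0 GB.
weight_gb = params * BYTES_BF16 / 1e9

# KV-cache per token: a K and a V vector per layer, per KV head.
kv_per_token = layers * kv_heads * head_dim * 2 * BYTES_BF16   # 163,840 bytes

# KV-cache if a single sequence used the entire 128K window.
kv_full_gb = kv_per_token * context / 1e9                      # ~21.5 GB

print(f"weights: {weight_gb:.1f} GB")
print(f"kv-cache/token: {kv_per_token:,} bytes")
print(f"kv-cache @ 128K context: {kv_full_gb:.1f} GB")
```

At the full window the cache alone rivals the BF16 weights themselves, so long-context serving budgets VRAM around the cache, not just the weights.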

GPU Compatibility Matrix

Llama 3.2 11B Vision is compatible with 89% of GPU configurations across 41 GPUs at 3 precision levels.

Precision levels: BF16 (Full) · FP8 (Half) · INT4 (Quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL 94GB (per GPU pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent

GPU Recommendations

A100 40GB SXM (optimal)
BF16 · 1 GPU · vllm · Score: 95/100

Throughput: 381.7 tok/s
Latency (ITL): 2.6 ms
Est. TTFT: 0 ms
Cost/Month: $807
Cost/M Tokens: $0.80

RTX 5090 (optimal)
BF16 · 1 GPU · vllm · Score: 95/100

Throughput: 439.8 tok/s
Latency (ITL): 2.3 ms
Est. TTFT: 0 ms
Cost/Month: $845
Cost/M Tokens: $0.73

A100 40GB PCIe (optimal)
BF16 · 1 GPU · vllm · Score: 95/100

Throughput: 381.7 tok/s
Latency (ITL): 2.6 ms
Est. TTFT: 0 ms
Cost/Month: $655
Cost/M Tokens: $0.65
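
The Cost/M Tokens column is consistent with dividing monthly GPU cost by tokens generated at sustained throughput over a 730-hour month. A quick check in Python; the month length is an assumption inferred from the quoted figures rather than stated on the page:

```python
# Reproduce the Cost/M Tokens column from Cost/Month and Throughput.
HOURS_PER_MONTH = 730  # assumed; reproduces the quoted $0.80/$0.73/$0.65

def cost_per_million_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * 3600 * HOURS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

print(f"{cost_per_million_tokens(807, 381.7):.2f}")  # 0.80  A100 40GB SXM
print(f"{cost_per_million_tokens(845, 439.8):.2f}")  # 0.73  RTX 5090
print(f"{cost_per_million_tokens(655, 381.7):.2f}")  # 0.65  A100 40GB PCIe
```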

Deployment Options

API

API Deployment

together

$0.18/M

output tokens

Self-Hosted

Single GPU

A100 40GB SXM

$807/mo

Min VRAM: 11 GB

Scale

Multi-GPU

A4000 ×2

178.0 tok/s

Tensor parallel (TP) · $323/mo
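
For the API route, together serves an OpenAI-compatible endpoint, so the standard client works unchanged. A minimal sketch; the model slug, API key, and image URL are placeholders and assumptions, so check the provider's current model list before use:

```python
# Hypothetical chat-completion call against together's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # placeholder
)

resp = client.chat.completions.create(
    # Assumed slug; verify against the provider's model list.
    model="meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```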

API Pricing Comparison

Provider     Input $/M    Output $/M    Badges
together     $0.18        $0.18         Cheapest
fireworks    $0.20        $0.20

Cost Analysis

Provider                 Input $/M    Output $/M    ~Monthly Cost
together (Best Value)    $0.18        $0.18         $2
fireworks                $0.20        $0.20         $2

Cost per 1,000 Requests

Short (500 tok)

$0.13

via together

Medium (2K tok)

$0.50

via together

Long (8K tok)

$1.80

via together
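
These per-1,000-request figures follow from the together rates once an input-token count per request is fixed. The input sizes below (200, 800, and 2,000 tokens) are assumptions, chosen because they reproduce the quoted totals exactly at $0.18/M each way:

```python
# Request-level cost estimator at together's rates ($0.18/M in, $0.18/M out).
def cost_per_1k_requests(input_tok: int, output_tok: int,
                         in_price: float = 0.18, out_price: float = 0.18) -> float:
    per_request = (input_tok * in_price + output_tok * out_price) / 1e6
    return per_request * 1000

print(f"${cost_per_1k_requests(200, 500):.2f}")     # $0.13  short (500 tok out)
print(f"${cost_per_1k_requests(800, 2000):.2f}")    # $0.50  medium (2K tok out)
print(f"${cost_per_1k_requests(2000, 8000):.2f}")   # $1.80  long (8K tok out)
```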

Performance Estimates

Throughput by GPU

A100 40GB SXM
381.7 tok/s
RTX 5090
439.8 tok/s
A100 40GB PCIe
381.7 tok/s

VRAM Breakdown (A100 40GB SXM, BF16)

Weights: 22.0 GB
KV-Cache: 2.7 GB
Activations: 8.0 GB
Overhead: 1.8 GB
Total: 34.5 GB of 40 GB

Precision Impact

bf16: 22.0 GB weights/GPU · ~381.7 tok/s
fp8: 11.0 GB weights/GPU
int4: 5.5 GB weights/GPU
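
To land near the FP8 or INT4 weight footprints when self-hosting with Hugging Face transformers, one common route is a quantized load through bitsandbytes. A minimal sketch, assuming the standard Hub model id; note bitsandbytes 4-bit is NF4 rather than plain INT4, but the footprint is comparable:

```python
# 4-bit quantized load, roughly matching the ~5.5 GB INT4 row above.
# Requires transformers >= 4.45, bitsandbytes, and accelerate.
import torch
from transformers import (AutoProcessor, BitsAndBytesConfig,
                          MllamaForConditionalGeneration)

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed Hub id

quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # 4-bit weights, BF16 compute
)

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```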

Capabilities

Features

Tool Use Vision Code Math Reasoning Multilingual Structured Output

Supported Frameworks

vllm · sglang · tgi · tensorrt-llm · ollama

Supported Precisions

BF16 (default) · FP8 · INT4
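
A minimal self-hosted sketch with vllm, the first framework listed, at the default BF16 precision. The Hub model id and the reduced max_model_len (to keep KV-cache inside a 40 GB card) are assumptions:

```python
# Single-GPU BF16 serving sketch with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # assumed Hub id
    dtype="bfloat16",
    max_model_len=8192,  # well below the 128K window to bound KV-cache
)

outputs = llm.generate(
    ["Summarize the benefits of grouped query attention."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```

For the multi-GPU scale option above (A4000 ×2), adding tensor_parallel_size=2 to the LLM constructor shards the weights across both cards.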


Similar Models

SOLAR 10.7B

10.7B params · dense

Quality: 50

from $0.30/M

Smaller context · More expensive

Gemma 3 12B

12B params · dense

Quality: 71

from $0.10/M

Higher quality · Cheaper

Frequently Asked Questions

How much VRAM does Llama 3.2 11B Vision need for inference?

Llama 3.2 11B Vision requires approximately 22.0 GB of VRAM at BF16 precision, 11.0 GB at FP8, or 5.5 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (163,840 bytes per token) and activations (~1.00 GB).

What is the best GPU for Llama 3.2 11B Vision?

The top recommended GPU for Llama 3.2 11B Vision is the A100 40GB SXM using BF16 precision. It achieves approximately 381.7 tokens/sec at an estimated cost of $807/month ($0.80/M tokens). Score: 95/100.

How much does Llama 3.2 11B Vision inference cost?

Llama 3.2 11B Vision API inference starts from $0.18/M input tokens and $0.18/M output tokens. Self-hosted inference costs depend on your GPU configuration — use our ROI calculator for a detailed breakdown.