Qwen 1.5 MoE A2.7B

Alibaba · MoE · 14.3B parameters · 32,768-token context

Quality: 50.0

Architecture Details

Type: MoE
Total Parameters: 14.3B
Active Parameters: 2.7B
Layers: 24
Hidden Dimension: 2,048
Attention Heads: 16
KV Heads: 16
Head Dimension: 128
Vocab Size: 151,936
Total Experts: 60
Active Experts: 4
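
With 60 routed experts per MoE layer and only 4 active per token, the router performs a top-k selection over expert scores. Below is a minimal sketch of top-k gating in NumPy, for illustration only; Qwen 1.5 MoE's actual router (including its shared experts and normalization details) may differ:

```python
import numpy as np

def topk_route(router_logits: np.ndarray, k: int = 4):
    """Select the k highest-scoring experts per token and renormalize their weights."""
    topk_idx = np.argpartition(router_logits, -k, axis=-1)[..., -k:]
    topk_logits = np.take_along_axis(router_logits, topk_idx, axis=-1)
    # Softmax over only the selected experts yields the mixture weights.
    w = np.exp(topk_logits - topk_logits.max(axis=-1, keepdims=True))
    return topk_idx, w / w.sum(axis=-1, keepdims=True)

# One token's router scores over this model's 60 experts; 4 are activated.
logits = np.random.randn(1, 60)
experts, weights = topk_route(logits, k=4)
print(experts, weights)  # 4 expert indices per token, weights summing to 1
```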

Memory Requirements

BF16 Weights: 28.6 GB
FP8 Weights: 14.3 GB
INT4 Weights: 7.2 GB
KV Cache per Token: 196,608 bytes
Activation Estimate: 0.50 GB
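
These figures follow directly from the architecture table: weight size is total parameters times bytes per parameter, and the per-token KV cache holds one K and one V vector per layer per KV head. A quick check of the arithmetic (the INT4 figure ignores the small overhead of quantization scales):

```python
TOTAL_PARAMS = 14.3e9
LAYERS, KV_HEADS, HEAD_DIM = 24, 16, 128

def weights_gb(bits_per_param: int) -> float:
    return TOTAL_PARAMS * bits_per_param / 8 / 1e9

def kv_bytes_per_token(bytes_per_elem: int = 2) -> int:
    # K and V vectors for every layer: 2 * layers * kv_heads * head_dim elements.
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem

print(f"BF16 {weights_gb(16):.1f} GB, FP8 {weights_gb(8):.1f} GB, INT4 {weights_gb(4):.2f} GB")
# BF16 28.6 GB, FP8 14.3 GB, INT4 7.15 GB (the table rounds to 7.2)
print(kv_bytes_per_token())  # 196608 bytes per token at 2 bytes per element
```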

Fits on (single-node)

B200 SXM (BF16) · B100 SXM (BF16) · GB200 NVL72 per GPU (BF16) · GB300 NVL72 per GPU (BF16) · H200 SXM (BF16) · H100 SXM (BF16) · H100 PCIe (BF16) · H100 NVL (BF16)
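
A rough way to check whether the model fits on a single device, using the memory figures above. The 80 GB capacity in the example is the H100 SXM's HBM size; the concurrency and 10% headroom are illustrative assumptions:

```python
def fits_single_gpu(hbm_gb: float, weights_gb: float, activation_gb: float,
                    kv_bytes_per_token: int, tokens_in_flight: int) -> bool:
    """Weights + activations + KV cache must fit, with ~10% headroom for the runtime."""
    kv_gb = kv_bytes_per_token * tokens_in_flight / 1e9
    return weights_gb + activation_gb + kv_gb <= hbm_gb * 0.9

# H100 SXM (80 GB) serving BF16 weights with four concurrent 32,768-token sequences.
print(fits_single_gpu(80, 28.6, 0.50, 196_608, 4 * 32_768))  # True (~54.9 GB used)
```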

GPU Recommendations

A100 40GB SXM (optimal): BF16 · 1 GPU · vLLM
Score: 100/100 · Throughput: 1.1K tok/s · Cost/Month: $807 · Cost/M Tokens: $0.29

RTX A6000 (optimal): BF16 · 1 GPU · vLLM
Score: 100/100 · Throughput: 768 tok/s · Cost/Month: $465 · Cost/M Tokens: $0.23

A40 (optimal): BF16 · 1 GPU · vLLM
Score: 100/100 · Throughput: 696 tok/s · Cost/Month: $399 · Cost/M Tokens: $0.22
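
The cost-per-million-token figures follow from monthly cost divided by monthly token output. A quick check, assuming a 30-day month at full utilization (the A100 row rounds slightly differently because its 1.1K tok/s throughput is itself rounded):

```python
def cost_per_m_tokens(monthly_usd: float, tokens_per_sec: float) -> float:
    tokens_per_month = tokens_per_sec * 30 * 24 * 3600  # 30-day month
    return monthly_usd / (tokens_per_month / 1e6)

for name, usd, tps in [("A100 40GB SXM", 807, 1100),
                       ("RTX A6000", 465, 768),
                       ("A40", 399, 696)]:
    print(f"{name}: ${cost_per_m_tokens(usd, tps):.2f} per 1M tokens")
# A100 40GB SXM: $0.28 per 1M tokens (the table shows $0.29)
# RTX A6000: $0.23 per 1M tokens
# A40: $0.22 per 1M tokens
```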

API Pricing Comparison

No API pricing data available for this model.

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · SGLang · TGI

Supported Precisions

BF16 (default) · FP8 · INT4
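
As an example of serving this model with one of the supported frameworks, a minimal vLLM sketch (the Hugging Face repo id below is an assumption based on the model name on this card; substitute the checkpoint you actually use):

```python
from vllm import LLM, SamplingParams

# Repo id is assumed from the card's model name.
llm = LLM(model="Qwen/Qwen1.5-MoE-A2.7B", dtype="bfloat16")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain mixture-of-experts routing in one sentence."], params)
print(outputs[0].outputs[0].text)
```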
