
Cloud AI 100

Qualcomm · 32 GB LPDDR4X · 75W TDP

VRAM: 32 GB
BF16 TFLOPS: 150
Bandwidth: 134 GB/s
From: $0.00/hr


Spec Sheet

VRAM: 32 GB LPDDR4X
Memory Bandwidth: 134 GB/s
BF16 TFLOPS: 150
FP16 TFLOPS: 150
FP8 TFLOPS: 300
INT8 TOPS: 400
TDP: 75W
Interconnect: PCIe
Max per Node: 8
PCIe Gen: 4
Tensor Cores: No

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
qualcomm | $0.00/hr | - | - | Cheapest

Compatible Models (260)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size | Full Fine-Tune | QLoRA
7B model | 5 GPUs | 1 GPU
13B model | 8 GPUs | 1 GPU
70B model | 42 GPUs | 2 GPUs
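
These counts follow from a simple memory budget. Below is a minimal sketch of one such heuristic, assuming mixed-precision AdamW needs roughly 16 bytes per parameter (BF16 weights and gradients, FP32 master weights, and the two Adam moments) plus a ~20% margin for activations and buffers, and QLoRA needs roughly 0.55 GB per billion parameters (4-bit base weights plus adapters). These constants are illustrative assumptions, not the site's exact calculator, though they come out close to the table above.

```python
import math

VRAM_GB = 32  # per Cloud AI 100 card, decimal GB

# Full fine-tune, mixed-precision AdamW: ~2 B (BF16 weights) + 2 B (grads)
# + 4 B (FP32 master weights) + 8 B (Adam moments) = 16 B/param,
# plus a ~20% activation/buffer margin -> ~19.2 GB per billion params (assumed).
FULL_FT_GB_PER_BILLION = 19.2

# QLoRA: ~0.5 B/param for 4-bit base weights plus a small margin for
# adapters, activations and KV-cache -> ~0.55 GB per billion params (assumed).
QLORA_GB_PER_BILLION = 0.55

def gpus_needed(params_billion: float, gb_per_billion: float) -> int:
    """Smallest number of 32 GB cards whose combined memory covers the estimate."""
    return math.ceil(params_billion * gb_per_billion / VRAM_GB)

for size in (7, 13, 70):
    print(f"{size}B: full fine-tune ~{gpus_needed(size, FULL_FT_GB_PER_BILLION)} GPUs, "
          f"QLoRA ~{gpus_needed(size, QLORA_GB_PER_BILLION)} GPU(s)")
```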

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 0.24 t/s/W (FP8)
Qwen 2.5 7B: 0.24 t/s/W (FP8)
Llama 3.1 8B: 0.22 t/s/W (FP8)
Llama 3.1 70B: 0.03 t/s/W (FP8)
Qwen 2.5 72B: 0.02 t/s/W (FP8)
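
Figures like these follow from a bandwidth-bound view of single-stream decoding: each generated token re-reads the FP8 weights (about 1 GB per billion parameters), so tokens/second is roughly bandwidth divided by model size, and dividing by the 75W TDP gives tokens/second per Watt. The sketch below is an assumed simplification (it ignores KV-cache traffic and batching), not the site's exact methodology, but it lands roughly in line with the figures above.

```python
BANDWIDTH_GBPS = 134   # Cloud AI 100 memory bandwidth, GB/s
TDP_W = 75             # board power, W

# Approximate parameter counts in billions; FP8 -> ~1 GB of weights per billion params.
MODELS = {
    "Mistral 7B": 7,
    "Qwen 2.5 7B": 7,
    "Llama 3.1 8B": 8,
    "Llama 3.1 70B": 70,
    "Qwen 2.5 72B": 72,
}

for name, params_b in MODELS.items():
    tokens_per_s = BANDWIDTH_GBPS / params_b   # one full weight read per generated token
    print(f"{name}: {tokens_per_s / TDP_W:.2f} t/s/W")
```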

Similar GPUs

GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From
Trainium2 | 96 GB | 756 | 3200 | $1.95/hr
Groq LPU | 230 GB | 188 | 80000 | $0.00/hr
RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr
V100 32GB | 32 GB | 28.3 | 900 | $0.19/hr
Instinct MI100 | 32 GB | 184.6 | 1229 | $0.40/hr

Methodology Note

Performance estimates for the Cloud AI 100 are based on InferenceBench's roofline performance model with kernel-level optimizations such as FlashAttention v2 and PagedAttention. Memory calculations account for model weights (32 GB LPDDR4X available), KV-cache allocation, and activation memory. Throughput predictions use the Cloud AI 100's rated 134 GB/s memory bandwidth and 150 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture. See our full methodology.
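
In outline, a roofline estimate of this kind caps decode throughput at whichever of two ceilings binds first: the memory ceiling (bandwidth divided by the bytes of weights read per decode step) and the compute ceiling (peak TFLOPS divided by the FLOPs per generated token), scaled by a correction factor. The sketch below illustrates that shape only; the 0.7 correction factor, the ~2 FLOPs/parameter/token rule of thumb, and the omission of KV-cache traffic are assumptions for illustration, not InferenceBench's calibrated per-architecture values.

```python
def roofline_tokens_per_s(params_billion: float,
                          batch: int = 1,
                          bytes_per_param: float = 1.0,   # FP8 weights
                          bw_gbps: float = 134.0,         # rated memory bandwidth
                          peak_tflops: float = 300.0,     # FP8 peak compute
                          correction: float = 0.7) -> float:
    """Aggregate decode tokens/s, capped by whichever ceiling binds first."""
    # Memory ceiling: one pass over the weights per decode step serves the whole batch.
    mem_tps = batch * bw_gbps * 1e9 / (params_billion * 1e9 * bytes_per_param)
    # Compute ceiling: ~2 FLOPs per parameter per generated token (assumed).
    compute_tps = peak_tflops * 1e12 / (2.0 * params_billion * 1e9)
    return correction * min(mem_tps, compute_tps)

# e.g. an 8B model in FP8 on a single card:
print(f"batch 1: {roofline_tokens_per_s(8):.1f} tok/s")
print(f"batch 8: {roofline_tokens_per_s(8, batch=8):.1f} tok/s")
```

At small batch sizes the memory ceiling dominates, which is why larger batches raise aggregate throughput until the compute ceiling (or memory capacity) takes over.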

Frequently Asked Questions

How many AI models can run on Cloud AI 100?

The Cloud AI 100 can run 260 AI models from our database within a single node. Compatible models span a wide range of parameter counts depending on the quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x Cloud AI 100.
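
A rough way to check whether a given model fits in one node is to compare its weight footprint at the chosen precision (plus some margin for KV-cache and activations) against the combined 32 GB per card. The sketch below uses an assumed 20% margin and weight-only sizing, so it is a fit check in spirit rather than the site's exact compatibility logic.

```python
import math

VRAM_GB = 32
MAX_GPUS_PER_NODE = 8
BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def min_cards(params_billion: float, precision: str, margin: float = 1.2) -> int | None:
    """Smallest card count holding the weights plus a KV-cache/activation margin,
    or None if the model exceeds a single 8-card node."""
    need_gb = params_billion * BYTES_PER_PARAM[precision] * margin
    cards = math.ceil(need_gb / VRAM_GB)
    return cards if cards <= MAX_GPUS_PER_NODE else None

print(min_cards(70, "BF16"))   # 6 cards
print(min_cards(70, "INT4"))   # 2 cards
print(min_cards(405, "BF16"))  # None: does not fit in one node
```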

What is the Cloud AI 100 inference throughput?

The Cloud AI 100 delivers 150 BF16 TFLOPS and 300 FP8 TFLOPS with 134 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does Cloud AI 100 cost per hour?

The Cloud AI 100 is available starting from $0.00/hour via qualcomm. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.