B200 SXM

nvidia · blackwell · 180 GB HBM3e · 1000W TDP

VRAM: 180 GB
BF16 TFLOPS: 2250
Bandwidth: 8000 GB/s
From: $4.49/hr

Spec Sheet

VRAM: 180 GB HBM3e
Memory Bandwidth: 8000 GB/s
BF16 TFLOPS: 2250
FP16 TFLOPS: 2250
FP8 TFLOPS: 4500
INT8 TOPS: 4500
TDP: 1000W
Interconnect: NVLink
NVLink Bandwidth: 1800 GB/s
Max per Node: 8
PCIe: Gen6
CUDA Compute Capability: 10.0
Tensor Cores: Yes

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
lambda | $5.99/hr | $4.49/hr | - | Cheapest
coreweave | $7.50/hr | $5.50/hr | - | -
runpod | $7.20/hr | - | - | -

Compatible Models (280)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA; a back-of-envelope memory sketch follows the table.

Model Size | Full Fine-Tune | QLoRA
7B model | 1 GPU | 1 GPU
13B model | 2 GPUs | 1 GPU
70B model | 8 GPUs | 1 GPU
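These counts follow from a standard memory heuristic: full fine-tuning with AdamW in BF16 costs roughly 16 bytes per parameter (2 for weights, 2 for gradients, 12 for FP32 master weights plus two optimizer moments), while QLoRA keeps the base weights frozen at 4-bit, well under 1 byte per parameter. A minimal sketch, where the per-parameter costs and the 1.25x activation/overhead factor are assumptions rather than measured values:

```python
import math

# Back-of-envelope GPU counts for the B200 SXM (180 GB VRAM per GPU).
# The byte costs and overhead factor below are assumptions, not measurements.
VRAM_GB = 180
FULL_FT_BYTES = 16   # BF16 weights (2) + grads (2) + AdamW FP32 states (12)
QLORA_BYTES = 0.75   # ~0.5 for 4-bit base weights + adapter/quantization overhead
OVERHEAD = 1.25      # assumed headroom for activations and framework buffers

def gpus_needed(params_billion: float, bytes_per_param: float) -> int:
    footprint_gb = params_billion * bytes_per_param * OVERHEAD  # 1e9 params * bytes ≈ GB
    return max(1, math.ceil(footprint_gb / VRAM_GB))

for size in (7, 13, 70):
    print(f"{size}B: full fine-tune ~{gpus_needed(size, FULL_FT_BYTES)} GPU(s), "
          f"QLoRA ~{gpus_needed(size, QLORA_BYTES)} GPU(s)")
```

Run against the table, this sketch reproduces the estimates above: 7B → 1/1, 13B → 2/1, 70B → 8/1.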

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 1.10 t/s/W (FP8)
Qwen 2.5 7B: 1.05 t/s/W (FP8)
Llama 3.1 8B: 1.00 t/s/W (FP8)
DeepSeek V3: 0.22 t/s/W (FP8)
Llama 3.1 70B: 0.11 t/s/W (FP8)
Qwen 2.5 72B: 0.11 t/s/W (FP8)

Similar GPUs

GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From
B100 SXM | 192 GB | 1750 | 8000 | $4.50/hr
GB200 NVL72 (per GPU) | 192 GB | 2250 | 8000 | $6.50/hr
GB300 NVL72 (per GPU) | 192 GB | 2500 | 8000 | $7.50/hr
B300 | 288 GB | 2800 | 12000 | -
RTX 5090 | 32 GB | 210 | 1792 | $0.89/hr

Methodology Note

Performance estimates for the B200 SXM are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (180 GB HBM3e available), KV-cache allocation, and activation memory. Throughput predictions use the B200 SXM's rated 8000 GB/s memory bandwidth and 2250 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Blackwell). See our full methodology.
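To illustrate the roofline idea (a simplified sketch, not InferenceBench's actual model): decode throughput is capped by whichever ceiling binds first, memory bandwidth or compute. The sketch below assumes FP8 weights (1 byte per parameter), batch-1 decode where every weight is read once per token, about 2 FLOPs per parameter per token, and no KV-cache traffic:

```python
# Toy roofline estimate for batch-1 decode on a B200 SXM.
# Assumptions: FP8 weights (1 byte/param), ~2 FLOPs/param/token,
# KV-cache and activation traffic ignored.
BANDWIDTH_GBS = 8000   # rated memory bandwidth
FP8_TFLOPS = 4500      # rated FP8 compute

def decode_tokens_per_sec(params_billion: float) -> float:
    bytes_per_token = params_billion * 1e9      # read every weight once per token
    flops_per_token = 2 * params_billion * 1e9  # ~2 FLOPs/param/token
    bandwidth_bound = BANDWIDTH_GBS * 1e9 / bytes_per_token
    compute_bound = FP8_TFLOPS * 1e12 / flops_per_token
    return min(bandwidth_bound, compute_bound)  # the tighter ceiling wins

print(f"8B model:  ~{decode_tokens_per_sec(8):,.0f} tok/s")
print(f"70B model: ~{decode_tokens_per_sec(70):,.0f} tok/s")
```

At batch 1 both cases are memory-bound, which is why the results (~1000 and ~114 tokens/s) track the 8000 GB/s bandwidth rather than the TFLOPS figure, and why they agree with the tokens/s-per-Watt table above at the 1000W TDP. Larger batches amortize weight reads and push the workload toward the compute ceiling.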

Frequently Asked Questions

How many AI models can run on B200 SXM?

The B200 SXM can run 280 AI models from our database within a single node. Compatibility spans a wide range of parameter counts, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger ones may require multi-GPU setups of up to 8x B200 SXM.
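The arithmetic behind the single- versus multi-GPU split: weights alone take parameter count times bytes per parameter, so a 70B model needs about 140 GB at BF16 (2 bytes/param) and fits on one 180 GB GPU with headroom for KV-cache, while a model the size of DeepSeek V3 (roughly 671B parameters) needs about 671 GB even at FP8 and spans at least 4 GPUs.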

What is the B200 SXM inference throughput?

The B200 SXM delivers 2250 BF16 TFLOPS and 4500 FP8 TFLOPS with 8000 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.
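Precision is the biggest lever here: moving from BF16 (2 bytes per parameter) to FP8 (1 byte) halves the weight traffic per decoded token, so a bandwidth-bound workload roughly doubles in throughput; under the toy roofline sketch in the methodology note above, an 8B model goes from about 500 to about 1000 tokens/s.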

How much does B200 SXM cost per hour?

The B200 SXM is available starting from $4.49/hour via lambda. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
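As a rough budgeting example (assuming a 730-hour month of continuous use): the $4.49/hr reserved rate comes to about $3,278/month, versus about $4,373/month at lambda's $5.99/hr on-demand rate.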