
B100 SXM

nvidia · blackwell · 192 GB HBM3e · 700W TDP

VRAM: 192 GB
BF16 TFLOPS: 1750
Bandwidth: 8000 GB/s
From: $4.50/hr


Spec Sheet

VRAM: 192 GB HBM3e
Memory Bandwidth: 8000 GB/s
BF16 TFLOPS: 1750
FP16 TFLOPS: 1750
FP8 TFLOPS: 3500
INT8 TOPS: 3500
TDP: 700W
Interconnect: NVLink
NVLink Bandwidth: 1800 GB/s
Max per Node: 8
PCIe: Gen6
CUDA Compute Capability: 10
Tensor Cores: Yes

Pricing by Provider

Provider  | On-Demand | Reserved | Spot | Badge
coreweave | $6.00/hr  | $4.50/hr | -    | Cheapest
lambda    | $4.99/hr  | -        | -    | -

Compatible Models (280)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size | Full Fine-Tune | QLoRA
7B model   | 1 GPU          | 1 GPU
13B model  | 2 GPUs         | 1 GPU
70B model  | 7 GPUs         | 1 GPU
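
The counts above can be reproduced with back-of-envelope memory math. A minimal sketch, assuming roughly 18 bytes per parameter for a BF16 + AdamW full fine-tune (BF16 weights and gradients, FP32 master weights and optimizer moments, plus some activation headroom) and roughly 0.6 bytes per parameter for 4-bit QLoRA; both per-parameter constants are rules of thumb, not measured values:

```python
import math

VRAM_GB = 192  # B100 SXM HBM3e capacity

def gpus_needed(params_b: float, bytes_per_param: float) -> int:
    """Billions of params x bytes/param gives GB directly (1e9 B = 1 GB)."""
    return math.ceil(params_b * bytes_per_param / VRAM_GB)

# Full fine-tune, AdamW in BF16 mixed precision: ~18 B/param rule of thumb
# (2 weights + 2 grads + 4 FP32 master + 4 + 4 AdamW moments + ~2 headroom).
# QLoRA: ~0.6 B/param (4-bit base weights plus small adapters).
for size in (7, 13, 70):
    print(size, gpus_needed(size, 18.0), gpus_needed(size, 0.6))
# -> 7B: 1 / 1, 13B: 2 / 1, 70B: 7 / 1  (matches the table)
```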

Energy Efficiency

Estimated tokens/second per Watt for popular models

Model         | t/s/W | Precision
Mistral 7B    | 1.57  | FP8
Qwen 2.5 7B   | 1.50  | FP8
Llama 3.1 8B  | 1.42  | FP8
DeepSeek V3   | 0.31  | FP8
Llama 3.1 70B | 0.16  | FP8
Qwen 2.5 72B  | 0.16  | FP8

Similar GPUs

GPU                   | VRAM   | BF16 TFLOPS | BW (GB/s) | From
GB200 NVL72 (per GPU) | 192 GB | 2250        | 8000      | $6.50/hr
GB300 NVL72 (per GPU) | 192 GB | 2500        | 8000      | $7.50/hr
B200 SXM              | 180 GB | 2250        | 8000      | $4.49/hr
B300                  | 288 GB | 2800        | 12000     | -
RTX 5090              | 32 GB  | 210         | 1792      | $0.89/hr

Methodology Note

Performance estimates for the B100 SXM are based on InferenceBench's roofline performance model, with CUDA kernel-level optimizations including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (192 GB HBM3e available), KV-cache allocation, and activation memory. Throughput predictions use the B100 SXM's rated 8000 GB/s memory bandwidth and 1750 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Blackwell). See our full methodology.
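
A minimal sketch of such a roofline estimate for decode throughput, using the B100 SXM's rated ceilings. The ~2 FLOPs-per-parameter-per-token rule and the "weights streamed once per decode step" assumption are simplifications, and real kernels land below these ceilings:

```python
def decode_tokens_per_sec(params_b: float, batch: int,
                          bytes_per_param: float = 2.0,   # BF16
                          peak_tflops: float = 1750.0,
                          bw_gbs: float = 8000.0) -> float:
    """Roofline: each decode step is limited by whichever is slower,
    compute (~2 FLOPs/param/token) or streaming the weights from HBM."""
    compute_s = batch * 2 * params_b * 1e9 / (peak_tflops * 1e12)
    memory_s = params_b * bytes_per_param / bw_gbs  # GB over GB/s
    return batch / max(compute_s, memory_s)

print(decode_tokens_per_sec(70, batch=1))   # ~57 t/s, memory-bound
print(decode_tokens_per_sec(70, batch=64))  # ~3,700 t/s, still memory-bound
```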

Frequently Asked Questions

How many AI models can run on B100 SXM?

The B100 SXM can run 280 AI models from our database within a single node. Compatible models span a wide range of parameter counts, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x B100 SXM.
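
As a back-of-envelope fit check, weight memory is parameter count times bytes per parameter, plus headroom for KV cache and activations. A sketch assuming ~20% overhead (the overhead factor is an assumption; real KV-cache needs grow with context length and batch size):

```python
import math

def min_gpus_for_inference(params_b: float, bits: int = 16,
                           overhead: float = 1.2, vram_gb: float = 192.0,
                           max_per_node: int = 8) -> int:
    """Weights + ~20% headroom for KV cache/activations, rounded up to GPUs."""
    needed_gb = params_b * (bits / 8) * overhead
    n = math.ceil(needed_gb / vram_gb)
    if n > max_per_node:
        raise ValueError("does not fit in a single 8x B100 SXM node")
    return n

print(min_gpus_for_inference(70, bits=16))  # 1 (140 GB weights + headroom)
print(min_gpus_for_inference(405, bits=8))  # 3 (~486 GB needed at FP8)
```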

What is the B100 SXM inference throughput?

The B100 SXM delivers 1750 BF16 TFLOPS and 3500 FP8 TFLOPS with 8000 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does B100 SXM cost per hour?

The B100 SXM is available starting from $4.50/hour via coreweave (reserved tier). Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
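
Hourly rent converts to cost per generated token once you fix a throughput. A sketch, where the example throughput is the bandwidth-bound estimate from the energy-efficiency section above, not a benchmarked number:

```python
def usd_per_million_tokens(usd_per_hour: float, tokens_per_sec: float) -> float:
    """Hourly GPU price divided by tokens generated per hour, scaled to 1M."""
    return usd_per_hour / (tokens_per_sec * 3600.0) * 1e6

# At the $4.50/hr reserved rate and ~1,100 t/s (7B model at FP8,
# bandwidth-bound estimate): ~$1.14 per million generated tokens.
print(usd_per_million_tokens(4.50, 1100))
```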