B200 SXM vs B100 SXM

Side-by-side comparison of the NVIDIA B200 SXM and the NVIDIA B100 SXM for AI inference workloads.

Specifications

Spec                    | B200 SXM   | B100 SXM
------------------------|------------|-----------
Generation              | Blackwell  | Blackwell
Memory Type             | HBM3e      | HBM3e
VRAM                    | 180 GB     | 192 GB
Memory Bandwidth        | 8,000 GB/s | 8,000 GB/s
BF16 TFLOPS             | 2,250      | 1,750
FP16 TFLOPS             | 2,250      | 1,750
FP8 TFLOPS              | 4,500      | 3,500
INT8 TOPS               | 4,500      | 3,500
TDP                     | 1,000 W    | 700 W
Interconnect            | NVLink     | NVLink
NVLink Bandwidth        | 1,800 GB/s | 1,800 GB/s
Max GPUs per Node       | 8          | 8
PCIe Gen                | Gen 6      | Gen 6
CUDA Compute Capability | 10         | 10

Pricing

B200 SXM

Provider  | On-Demand | Reserved | Spot
----------|-----------|----------|-----
CoreWeave | $7.50/hr  | $5.50/hr | -
Lambda    | $5.99/hr  | $4.49/hr | -
RunPod    | $7.20/hr  | -        | -

B100 SXM

Provider  | On-Demand | Reserved | Spot
----------|-----------|----------|-----
CoreWeave | $6.00/hr  | $4.50/hr | -
Lambda    | $4.99/hr  | -        | -

Cheapest available rate: B200 SXM at $4.49/hr vs B100 SXM at $4.50/hr, making the B200 SXM marginally (about 0.2%) cheaper.
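
The cheapest-rate comparison can be reproduced by taking the minimum over every provider and price type listed above. A minimal sketch follows; the pricing dictionary simply restates the listed rates, and the key names are illustrative rather than any provider's API.

```python
# Sketch: reproduce the "cheapest available rate" comparison from the tables above.
# All figures are the listed USD-per-hour rates.
pricing = {
    "B200 SXM": {
        "CoreWeave": {"on_demand": 7.50, "reserved": 5.50},
        "Lambda": {"on_demand": 5.99, "reserved": 4.49},
        "RunPod": {"on_demand": 7.20},
    },
    "B100 SXM": {
        "CoreWeave": {"on_demand": 6.00, "reserved": 4.50},
        "Lambda": {"on_demand": 4.99},
    },
}

def cheapest_rate(gpu: str) -> float:
    """Lowest listed hourly rate for a GPU across all providers and price types."""
    return min(rate
               for provider_rates in pricing[gpu].values()
               for rate in provider_rates.values())

for gpu in pricing:
    print(f"{gpu}: ${cheapest_rate(gpu):.2f}/hr")
# B200 SXM: $4.49/hr
# B100 SXM: $4.50/hr
```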

Efficiency Metrics

Metric           | Unit    | B200 SXM | B100 SXM
-----------------|---------|----------|---------
TFLOPS / Watt    | BF16    | 2.3      | 2.5
VRAM / Dollar    | GB/$/hr | 40.1     | 42.7
Bandwidth / Watt | GB/s/W  | 8.0      | 11.4
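
These ratios follow directly from the spec table and each card's cheapest rate. A minimal sketch of the arithmetic, assuming the spec values restate the tables above (the table rounds the B200 SXM's 2.25 TFLOPS/W to 2.3):

```python
# Sketch: derive the efficiency metrics from the spec table and cheapest rates.
specs = {
    "B200 SXM": {"bf16_tflops": 2250, "vram_gb": 180, "bw_gbs": 8000,
                 "tdp_w": 1000, "price_hr": 4.49},
    "B100 SXM": {"bf16_tflops": 1750, "vram_gb": 192, "bw_gbs": 8000,
                 "tdp_w": 700, "price_hr": 4.50},
}

for name, s in specs.items():
    tflops_per_watt = s["bf16_tflops"] / s["tdp_w"]   # BF16 TFLOPS per watt
    vram_per_dollar = s["vram_gb"] / s["price_hr"]    # GB of VRAM per $/hr
    bw_per_watt = s["bw_gbs"] / s["tdp_w"]            # GB/s of bandwidth per watt
    print(f"{name}: {tflops_per_watt:.2f} TFLOPS/W, "
          f"{vram_per_dollar:.1f} GB/$/hr, {bw_per_watt:.1f} GB/s/W")
# B200 SXM: 2.25 TFLOPS/W, 40.1 GB/$/hr, 8.0 GB/s/W
# B100 SXM: 2.50 TFLOPS/W, 42.7 GB/$/hr, 11.4 GB/s/W
```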

Models supported (FP16, single GPU): 220 on the B200 SXM vs 221 on the B100 SXM.

Model Compatibility (FP16, Single GPU)

Only on B200 SXM (0)

None

Both (220)

  • Yi 1.5 34B
  • Yi 1.5 9B
  • Yi Coder 9B
  • Jamba 1.5 Mini
  • GTE Qwen2 7B
  • Marco O1
  • Qwen 1.5 MoE A2.7B
  • Qwen 2 Audio 7B
  • Qwen 2.5 14B
  • Qwen 2.5 32B
  • Qwen 2.5 3B
  • Qwen 2.5 Coder 32B
  • OLMo 2 13B
  • OLMo 2 7B
  • Amazon Nova Lite
  • Amazon Nova Pro
  • OpenELM 3B
  • BGE Large EN v1.5
  • BGE M3
  • Baichuan 2 13B
  • +200 more

Only on B100 SXM (1)

  • Llama 3.2 90B Vision Instruct
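
The one-model gap comes down to VRAM: at FP16, weights take roughly 2 bytes per parameter, so a 90B-parameter model needs about 180 GB before any runtime overhead. A minimal fit-check sketch follows, assuming an illustrative 5% overhead for activations and KV cache; this is not necessarily the methodology behind the lists above.

```python
def fits_fp16(params_billion: float, vram_gb: float, overhead: float = 1.05) -> bool:
    """Rough single-GPU FP16 fit check: weights need ~2 bytes per parameter;
    the 5% overhead for activations/KV cache is an illustrative assumption."""
    weight_gb = params_billion * 2          # FP16: 2 bytes per parameter
    return weight_gb * overhead <= vram_gb

# Llama 3.2 90B Vision needs roughly 180 GB of weights alone at FP16,
# so with any runtime overhead it exceeds 180 GB but can still fit in 192 GB.
print(fits_fp16(90, 180))   # False -> B200 SXM (180 GB)
print(fits_fp16(90, 192))   # True  -> B100 SXM (192 GB)
```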

Summary

The B200 SXM (Blackwell generation) offers 180 GB of HBM3e memory with 2,250 BF16 TFLOPS and 8,000 GB/s of memory bandwidth at a 1,000 W TDP.

The B100 SXM (Blackwell generation) offers 192 GB of HBM3e memory with 1,750 BF16 TFLOPS and 8,000 GB/s of memory bandwidth at a 700 W TDP.

The B100 SXM has about 7% more VRAM (192 GB vs 180 GB), allowing it to run slightly larger models without multi-GPU setups.

From a cost perspective, the B200 SXM is marginally cheaper at its lowest available rate: $4.49/hr vs $4.50/hr for the B100 SXM.
