H100 SXM vs B200 SXM
Side-by-side comparison of the NVIDIA H100 SXM and the NVIDIA B200 SXM for AI inference workloads.
Specifications
| Spec | H100 SXM | B200 SXM |
|---|---|---|
| Generation | Hopper | Blackwell |
| Memory Type | HBM3 | HBM3e |
| VRAM | 80 GB | 180 GB |
| Memory Bandwidth | 3,350 GB/s | 8,000 GB/s |
| BF16 TFLOPS | 990 | 2,250 |
| FP16 TFLOPS | 990 | 2,250 |
| FP8 TFLOPS | 1,979 | 4,500 |
| INT8 TOPS | 1,979 | 4,500 |
| TDP | 700 W | 1,000 W |
| Interconnect | NVLink | NVLink |
| NVLink Bandwidth | 900 GB/s | 1,800 GB/s |
| Max GPUs per Node | 8 | 8 |
| PCIe Gen | Gen 5 | Gen 6 |
| CUDA Compute Capability | 9.0 | 10.0 |
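As a rough roofline-style check on how these specs interact, the sketch below compares compute available per byte of memory bandwidth, using the table values above. This is a minimal illustration; the dictionary layout and variable names are ours, not from any vendor API.

```python
# Roofline-style ratio from the spec table above (illustrative names).
specs = {
    "H100 SXM": {"bf16_tflops": 990, "mem_bw_gbs": 3350},
    "B200 SXM": {"bf16_tflops": 2250, "mem_bw_gbs": 8000},
}

for name, s in specs.items():
    # BF16 FLOPs available per byte moved from HBM; memory-bound phases
    # (e.g. batch-1 decode) leave compute idle above this ratio.
    flops_per_byte = (s["bf16_tflops"] * 1e12) / (s["mem_bw_gbs"] * 1e9)
    print(f"{name}: ~{flops_per_byte:.0f} BF16 FLOPs per HBM byte")
```

Both land near 280-300 FLOPs per byte, so the B200 scales compute and bandwidth roughly in proportion, with a slight tilt toward bandwidth, which favors memory-bound inference.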
Pricing
H100 SXM
| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| RunPod | $4.18/hr | - | $3.29/hr |
| Lambda | $2.49/hr | $1.89/hr | - |
| CoreWeave | $3.79/hr | $2.57/hr | - |
| AWS | $5.12/hr | $3.59/hr | - |
| GCP | $4.85/hr | $3.40/hr | - |
| Azure | $4.98/hr | $3.49/hr | - |
| Vast.ai | $3.40/hr | - | $2.50/hr |
| TensorDock | $3.29/hr | - | $2.49/hr |
| FluidStack | $2.85/hr | - | $2.10/hr |
B200 SXM
| Provider | On-Demand | Reserved | Spot |
|---|---|---|---|
| CoreWeave | $7.50/hr | $5.50/hr | - |
| Lambda | $5.99/hr | $4.49/hr | - |
| RunPod | $7.20/hr | - | - |
Cheapest available rate: H100 SXM at $1.89/hr vs B200 SXM at $4.49/hr (both reserved). The B200 SXM costs about 138% more per hour; equivalently, the H100 SXM is about 58% cheaper.
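The percentage follows from a simple ratio of the two reserved minima; a minimal worked example:

```python
h100_rate = 1.89  # cheapest H100 SXM rate above (Lambda, reserved), $/hr
b200_rate = 4.49  # cheapest B200 SXM rate above (Lambda, reserved), $/hr

print(f"B200 premium: {b200_rate / h100_rate - 1:.0%}")   # ~138% more per hour
print(f"H100 discount: {1 - h100_rate / b200_rate:.0%}")  # ~58% cheaper
```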
Efficiency Metrics
| Metric | H100 SXM | B200 SXM |
|---|---|---|
| TFLOPS / Watt (BF16) | 1.4 | 2.3 |
| VRAM / Dollar (GB per $/hr) | 42.3 | 40.1 |
| Memory Bandwidth / Watt (GB/s/W) | 4.8 | 8.0 |
| Models supported (FP16, 1 GPU) | 182 | 220 |
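These ratios can be recomputed directly from the spec and pricing tables; the sketch below does so, assuming the $1.89/hr and $4.49/hr reserved minima (structure and names are illustrative):

```python
specs = {
    "H100 SXM": {"bf16_tflops": 990, "mem_bw_gbs": 3350, "vram_gb": 80,
                 "tdp_w": 700, "cheapest_rate": 1.89},
    "B200 SXM": {"bf16_tflops": 2250, "mem_bw_gbs": 8000, "vram_gb": 180,
                 "tdp_w": 1000, "cheapest_rate": 4.49},
}

for name, s in specs.items():
    tflops_per_watt = s["bf16_tflops"] / s["tdp_w"]
    vram_per_dollar = s["vram_gb"] / s["cheapest_rate"]  # GB per $/hr
    bw_per_watt = s["mem_bw_gbs"] / s["tdp_w"]
    print(f"{name}: {tflops_per_watt:.2f} TFLOPS/W, "
          f"{vram_per_dollar:.1f} GB per $/hr, {bw_per_watt:.1f} GB/s/W")

# H100 SXM: 1.41 TFLOPS/W, 42.3 GB per $/hr, 4.8 GB/s/W
# B200 SXM: 2.25 TFLOPS/W, 40.1 GB per $/hr, 8.0 GB/s/W
```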
Model Compatibility (FP16, Single GPU)
Only on H100 SXM (0)
None
Both (182)
- Yi 1.5 34B
- Yi 1.5 9B
- Yi Coder 9B
- GTE Qwen2 7B
- Marco O1
- Qwen 1.5 MoE A2.7B
- Qwen 2 Audio 7B
- Qwen 2.5 14B
- Qwen 2.5 32B
- Qwen 2.5 3B
- Qwen 2.5 Coder 32B
- OLMo 2 13B
- OLMo 2 7B
- Amazon Nova Lite
- OpenELM 3B
- BGE Large EN v1.5
- BGE M3
- Baichuan 2 13B
- OctoCoder 15B
- StarCoder2 15B
- +162 more
Only on B200 SXM (38)
- Jamba 1.5 Mini
- Amazon Nova Pro
- Code Llama 70B
- Dolphin 2.9 72B
- DeepSeek R1 Distill 70B
- Falcon 40B
- Llama 3 70B 1M Context
- Llama 2 70B
- Llama 3 70B
- Llama 3.1 70B
- Llama 3.3 70B
- WizardMath 70B
- Mixtral 8x7B
- Hermes 3 70B
- HelpSteer2 Llama 3.1 70B
- Llama 3.1 Nemotron 51B
- Llama 3.1 Nemotron 70B Instruct
- Llama 3.1 Nemotron 70B Reward
- Nemotron 70B
- VILA 1.5 40B
- +18 more
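The split above is driven almost entirely by VRAM: FP16 weights need about 2 bytes per parameter, plus headroom for the KV cache and activations, which is why the 70B-class models appear only in the B200 column. A minimal sketch of that fit heuristic (the 20% overhead factor is our assumption, not a figure from this page):

```python
def fits_fp16(params_billion: float, vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough single-GPU fit check: 2 bytes/param in FP16, plus ~20%
    headroom for KV cache and activations (overhead is an assumption)."""
    weights_gb = params_billion * 2  # 2 bytes per parameter
    return weights_gb * overhead <= vram_gb

# Llama 3 70B: ~140 GB of weights alone -- over the H100's 80 GB,
# but within the B200's 180 GB, matching the lists above.
print(fits_fp16(70, 80))   # False (H100 SXM)
print(fits_fp16(70, 180))  # True  (B200 SXM)
```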
Summary
The H100 SXM (Hopper generation) offers 80 GB of HBM3 with 990 BF16 TFLOPS and 3,350 GB/s of memory bandwidth at a 700 W TDP.
The B200 SXM (Blackwell generation) offers 180 GB of HBM3e with 2,250 BF16 TFLOPS and 8,000 GB/s of memory bandwidth at a 1,000 W TDP.
The B200 SXM has 125% more VRAM (180 GB vs 80 GB), allowing it to run larger models without multi-GPU setups.
From a cost perspective, the H100 SXM is more affordable at its cheapest rate of $1.89/hr vs $4.49/hr for the B200 SXM (both reserved).
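One caveat on the cost picture: normalized by raw BF16 throughput, the two cheapest rates are nearly at parity, so the B200's hourly premium effectively buys VRAM and bandwidth rather than cheaper compute. Back-of-envelope arithmetic from the numbers above:

```python
# Dollars per BF16 PFLOP-hour at each card's cheapest listed rate.
h100 = 1.89 / (990 / 1000)   # ~$1.91 per PFLOP-hour
b200 = 4.49 / (2250 / 1000)  # ~$2.00 per PFLOP-hour
print(f"H100 SXM: ${h100:.2f}  B200 SXM: ${b200:.2f} per PFLOP-hour")
```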