H100 SXM
nvidia · hopper · 80 GB HBM3 · 700 W TDP
| Spec | Value |
|---|---|
| VRAM | 80 GB |
| BF16 TFLOPS | 990 |
| Bandwidth | 3350 GB/s |
| From | $1.89/hr |
Pricing by Provider
| Provider | On-Demand | Reserved | Spot | Badge |
|---|---|---|---|---|
| lambda | $2.49/hr | $1.89/hr | - | Cheapest |
| fluidstack | $2.85/hr | - | $2.10/hr | |
| tensordock | $3.29/hr | - | $2.49/hr | |
| vast_ai | $3.40/hr | - | $2.50/hr | |
| coreweave | $3.79/hr | $2.57/hr | - | |
| runpod | $4.18/hr | - | $3.29/hr | |
| gcp | $4.85/hr | $3.40/hr | - | |
| azure | $4.98/hr | $3.49/hr | - | |
| aws | $5.12/hr | $3.59/hr | - | |
Pricing History
Compatible Models (278)
Single GPU (205 models)
Multi-GPU (73 models)
Training Capabilities
Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA; a back-of-envelope sketch of the estimate follows the table.
| Model Size | Full Fine-Tune | QLoRA |
|---|---|---|
| 7B model | 2 GPUs | 1 GPU |
| 13B model | 4 GPUs | 1 GPU |
| 70B model | 17 GPUs | 1 GPU |
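The table's figures can be reproduced with a simple memory model: full AdamW fine-tuning in BF16 needs roughly 16 bytes per parameter (BF16 weights and gradients plus FP32 optimizer state), while 4-bit QLoRA needs well under 1 byte per parameter. The sketch below is a minimal illustration under those assumptions; the 1.2x activation/fragmentation overhead factor is an illustrative constant, not the calculator's exact model.

```python
import math

GPU_VRAM_GB = 80  # H100 SXM HBM3 capacity

def gpus_needed(params_b: float, bytes_per_param: float, overhead: float = 1.2) -> int:
    """Rough GPU count: parameter memory times an activation/fragmentation
    overhead factor (assumption), divided across 80 GB cards."""
    total_gb = params_b * bytes_per_param * overhead
    return math.ceil(total_gb / GPU_VRAM_GB)

# Full fine-tune, AdamW in BF16: ~2 (weights) + 2 (grads) + 12/8 (FP32 optimizer
# state) bytes per parameter, totalling ~16 B/param before activations.
print(gpus_needed(7, 16))    # -> 2
print(gpus_needed(13, 16))   # -> 4
print(gpus_needed(70, 16))   # -> 17
# QLoRA: ~0.5 B/param for NF4 weights plus adapter/optimizer overhead (assumption).
print(gpus_needed(70, 0.7))  # -> 1
```

Note that 17 GPUs exceeds a single 8x node, so a full 70B fine-tune spans at least three 8-GPU nodes.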
Energy Efficiency
Estimated tokens/second per Watt for popular models
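In lieu of the chart, here is a minimal sketch of how a tokens-per-watt figure can be estimated: divide a bandwidth-bound decode ceiling by the 700 W TDP. The single-stream assumption (every generated token streams all weights from HBM once) is a simplification; batching improves tokens/s per watt considerably.

```python
TDP_W = 700            # H100 SXM rated board power
BANDWIDTH_GBPS = 3350  # HBM3 memory bandwidth

def tokens_per_watt(params_b: float, bytes_per_param: float = 2.0) -> float:
    """Bandwidth-bound decode ceiling (tokens/s) divided by TDP.
    Assumes each generated token streams the full weights once."""
    tokens_per_s = BANDWIDTH_GBPS / (params_b * bytes_per_param)
    return tokens_per_s / TDP_W

print(f"{tokens_per_watt(7):.2f} tok/s/W")   # 7B BF16  -> ~0.34
print(f"{tokens_per_watt(70):.3f} tok/s/W")  # 70B BF16 -> ~0.034
```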
Similar GPUs
Methodology Note
Performance estimates for the H100 SXM are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (80 GB HBM3 available), KV-cache allocation, and activation memory. Throughput predictions use the H100 SXM's rated 3350 GB/s memory bandwidth and 990 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Hopper). See our full methodology.
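A minimal sketch of the roofline idea under stated assumptions: decode is treated as bandwidth-bound (each generated token streams the full weights from HBM) and prefill as compute-bound (~2 FLOPs per parameter per token). The correction factors below are illustrative placeholders, not InferenceBench's actual per-architecture coefficients.

```python
BF16_TFLOPS = 990      # compute ceiling
BANDWIDTH_GBPS = 3350  # memory ceiling

def decode_ceiling(params_b: float, bytes_per_param: float = 2.0,
                   correction: float = 0.75) -> float:
    """Decode tokens/s ceiling: weights stream from HBM once per token,
    so throughput = bandwidth / model bytes, scaled by an empirical
    factor (placeholder value)."""
    return correction * BANDWIDTH_GBPS / (params_b * bytes_per_param)

def prefill_ceiling(params_b: float, correction: float = 0.6) -> float:
    """Prefill tokens/s ceiling: ~2 FLOPs per parameter per token,
    so throughput = compute / (2 * params), scaled by an empirical
    factor (placeholder value)."""
    return correction * BF16_TFLOPS * 1e12 / (2 * params_b * 1e9)

print(f"decode : {decode_ceiling(70):.1f} tok/s")   # 70B BF16, single stream
print(f"prefill: {prefill_ceiling(70):.0f} tok/s")  # 70B BF16
```

The min of the two ceilings for a given workload determines which resource the GPU saturates first.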
Frequently Asked Questions
How many AI models can run on H100 SXM?
The H100 SXM can run 278 AI models from our database within a single node. Compatibility depends on parameter count and quantization precision (BF16, FP8, INT4): smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x H100 SXM.
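A minimal sketch of the fit check behind that count, assuming quantized weights plus a KV-cache budget must fit in pooled VRAM. The per-precision byte widths are standard; the 5 GB KV-cache budget and 5% headroom reserve are illustrative assumptions, not the database's exact rules.

```python
import math

VRAM_GB = 80
USABLE = 0.95  # reserve ~5% headroom for activations/fragmentation (assumption)

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def min_gpus(params_b: float, precision: str = "BF16",
             kv_cache_gb: float = 5.0, max_gpus: int = 8) -> int | None:
    """Smallest GPU count (up to 8x H100 SXM) whose pooled usable VRAM
    holds the quantized weights plus a KV-cache budget; None if it
    never fits within a single node."""
    need_gb = params_b * BYTES_PER_PARAM[precision] + kv_cache_gb
    n = math.ceil(need_gb / (VRAM_GB * USABLE))
    return n if n <= max_gpus else None

print(min_gpus(70, "BF16"))   # -> 2 (140 GB of weights + cache)
print(min_gpus(70, "INT4"))   # -> 1
print(min_gpus(405, "BF16"))  # -> None (exceeds an 8-GPU node)
```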
What is the H100 SXM inference throughput?
The H100 SXM delivers 990 BF16 TFLOPS and 1979 FP8 TFLOPS with 3350 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.
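As a quick worked example of the bandwidth ceiling: a 7B model in BF16 occupies about 14 GB, so the single-stream decode ceiling is roughly 3350 GB/s ÷ 14 GB ≈ 239 tokens/s, before the empirical correction factors noted in the methodology section. Batching trades per-stream latency for higher aggregate throughput.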
How much does H100 SXM cost per hour?
The H100 SXM is available from $1.89/hour (lambda's reserved rate). Prices vary by provider and pricing tier (on-demand, reserved, spot); compare all providers in the table above.
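As a rough worked figure: at the $1.89/hr reserved rate, continuous usage comes to about $1.89 × 730 ≈ $1,380 per GPU-month.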