
H100 SXM

nvidia · hopper · 80 GB HBM3 · 700W TDP

80 GB VRAM · 990 BF16 TFLOPS · 3350 GB/s bandwidth · from $1.89/hr


Spec Sheet

VRAM: 80 GB HBM3
Memory Bandwidth: 3350 GB/s
BF16 TFLOPS: 990
FP16 TFLOPS: 990
FP8 TFLOPS: 1979
INT8 TOPS: 1979
TDP: 700 W
Interconnect: NVLink
NVLink Bandwidth: 900 GB/s
Max per Node: 8
PCIe: Gen5
CUDA Compute Capability: 9.0
Tensor Cores: Yes
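One figure worth deriving from the sheet is the compute-to-bandwidth ratio: how many FLOPs the chip must perform per byte moved from HBM before memory bandwidth stops being the bottleneck. A quick sketch from the numbers above (the interpretation in the comments is a general roofline-model observation, not something stated on this page):

```python
# Arithmetic intensity needed to saturate the H100 SXM,
# using the spec-sheet numbers above.
BF16_TFLOPS = 990      # dense BF16 tensor-core throughput
BANDWIDTH_GBS = 3350   # HBM3 memory bandwidth

# FLOPs per byte required before HBM bandwidth is no longer the limit
ops_per_byte = (BF16_TFLOPS * 1e12) / (BANDWIDTH_GBS * 1e9)
print(f"{ops_per_byte:.0f} FLOPs/byte")  # ≈ 296
```

Single-stream LLM decoding reads every weight roughly once per token (on the order of 2 FLOPs per byte at BF16), so decode is heavily memory-bound on this chip; only large batches or prefill get close to the 990 TFLOPS figure.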

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
lambda | $2.49/hr | $1.89/hr | - | Cheapest
fluidstack | $2.85/hr | - | $2.10/hr |
tensordock | $3.29/hr | - | $2.49/hr |
vast_ai | $3.40/hr | - | $2.50/hr |
coreweave | $3.79/hr | $2.57/hr | - |
runpod | $4.18/hr | - | $3.29/hr |
gcp | $4.85/hr | $3.40/hr | - |
azure | $4.98/hr | $3.49/hr | - |
aws | $5.12/hr | $3.59/hr | - |
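The spread between providers compounds quickly over a long job. A rough sketch using the on-demand rates from the table above, for a hypothetical 8-GPU, 72-hour run (the job size is an illustrative assumption):

```python
# On-demand $/hr per H100 SXM, from the pricing table above.
on_demand = {
    "lambda": 2.49, "fluidstack": 2.85, "tensordock": 3.29,
    "vast_ai": 3.40, "coreweave": 3.79, "runpod": 4.18,
    "gcp": 4.85, "azure": 4.98, "aws": 5.12,
}

gpus, hours = 8, 72  # hypothetical job: one full node for three days
costs = {p: rate * gpus * hours for p, rate in on_demand.items()}

cheapest = min(costs, key=costs.get)
priciest = max(costs, key=costs.get)
print(f"{cheapest}: ${costs[cheapest]:,.0f} vs {priciest}: ${costs[priciest]:,.0f}")
# lambda: $1,434 vs aws: $2,949
```

The same 576 GPU-hours costs roughly twice as much at the top of the table as at the bottom, before considering reserved or spot discounts.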

Pricing History

runpod: $4.18/hr (0.0% change) · 2024-01-01 to 2025-03-01 · Low $4.18 / High $5.50
lambda: $2.49/hr (0.0% change) · 2024-01-01 to 2025-03-01 · Low $2.49 / High $3.29
coreweave: $3.85/hr (0.0% change) · 2024-01-01 to 2025-03-01 · Low $3.85 / High $4.76

Compatible Models (249)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size | Full Fine-Tune | QLoRA
7B model | 2 GPUs | 1 GPU
13B model | 4 GPUs | 1 GPU
70B model | 17 GPUs | 1 GPU
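Counts like these can be approximated from first principles: AdamW in BF16 needs about 16 bytes per parameter (2 for BF16 weights, 2 for gradients, 4 for an FP32 master copy, 8 for the two FP32 optimizer moments) plus activation and fragmentation headroom, while QLoRA keeps only 4-bit base weights (~0.5 bytes/param) plus small adapters. A sketch assuming roughly 19 bytes/param all-in for full fine-tuning and ~1 byte/param for QLoRA, which happens to reproduce the table (the exact overhead factors are assumptions, not from this page):

```python
import math

VRAM_GB = 80  # H100 SXM

def gpus_full_finetune(params_b, bytes_per_param=19):
    # 2 (BF16 weights) + 2 (grads) + 4 (FP32 master) + 8 (AdamW m, v)
    # = 16 bytes/param, plus ~3 bytes/param assumed for activations/overhead.
    return math.ceil(params_b * bytes_per_param / VRAM_GB)

def gpus_qlora(params_b, bytes_per_param=1.0):
    # 4-bit base weights (~0.5 bytes/param) plus adapters/activations headroom.
    return max(1, math.ceil(params_b * bytes_per_param / VRAM_GB))

for size in (7, 13, 70):
    print(f"{size}B: full={gpus_full_finetune(size)}, qlora={gpus_qlora(size)}")
# 7B: full=2, qlora=1
# 13B: full=4, qlora=1
# 70B: full=17, qlora=1
```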

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 0.66 t/s/W (FP8)
Qwen 2.5 7B: 0.63 t/s/W (FP8)
Llama 3.1 8B: 0.60 t/s/W (FP8)
Llama 3.1 70B: 0.07 t/s/W (FP8)
Qwen 2.5 72B: 0.07 t/s/W (FP8)

Similar GPUs

GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From
H100 PCIe | 80 GB | 756 | 2000 | $1.89/hr
H100 NVL | 94 GB | 835 | 3938 | $3.09/hr
H20 | 96 GB | 148 | 4000 | $0.99/hr
GH200 | 96 GB | 990 | 4000 | $2.99/hr
H200 SXM | 141 GB | 990 | 4800 | $2.69/hr
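One way to read the comparison is compute per dollar: BF16 TFLOPS divided by the lowest listed rate. A sketch using the "From" prices above (these are best-case floors across providers and tiers, so treat the ranking as optimistic and indicative only):

```python
# (BF16 TFLOPS, lowest listed $/hr) from the table above plus the H100 SXM itself.
cards = {
    "H100 SXM":  (990, 1.89),
    "H100 PCIe": (756, 1.89),
    "H100 NVL":  (835, 3.09),
    "H20":       (148, 0.99),
    "GH200":     (990, 2.99),
    "H200 SXM":  (990, 2.69),
}

ranked = sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (tflops, price) in ranked:
    print(f"{name}: {tflops / price:.0f} TFLOPS per $/hr")
```

By this metric the H100 SXM at $1.89/hr leads (~524 TFLOPS per $/hr), while the H20, despite the lowest absolute price, trails on dense compute per dollar; note this ignores memory capacity and bandwidth, where the H200 and H20 fare much better.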