
H100 PCIe

NVIDIA · Hopper · 80 GB HBM2e · 350 W TDP

VRAM: 80 GB
BF16 TFLOPS: 756
Bandwidth: 2000 GB/s
From: $1.89/hr


Spec Sheet

VRAM: 80 GB HBM2e
Memory Bandwidth: 2000 GB/s
BF16 TFLOPS: 756
FP16 TFLOPS: 756
FP8 TFLOPS: 1513
INT8 TOPS: 1513
TDP: 350 W
Interconnect: PCIe
Max per Node: 8
PCIe Generation: 5
CUDA Compute Capability: 9.0
Tensor Cores: Yes

Pricing by Provider

Provider     On-Demand   Reserved   Spot       Badge
tensordock   $2.59/hr    -          $1.89/hr   Cheapest
vast_ai      $2.80/hr    -          $2.10/hr   -
lambda       $2.29/hr    -          -          -
runpod       $3.09/hr    -          $2.39/hr   -

Compatible Models (278)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size   Full Fine-Tune   QLoRA
7B model     2 GPUs           1 GPU
13B model    4 GPUs           1 GPU
70B model    17 GPUs          1 GPU
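
These counts are consistent with a simple memory model: in BF16 full fine-tuning with AdamW, each parameter carries roughly 2 bytes of weights, 2 bytes of gradients, and 12 bytes of FP32 optimizer state and master weights (~16 B/param total), while QLoRA holds the base weights in 4-bit and trains only small adapters. A minimal sketch, where the per-parameter byte counts and the 1.2x activation/overhead factor are assumptions rather than InferenceBench's exact model:

```python
import math

VRAM_GB = 80  # per H100 PCIe

def gpus_needed(params_b: float, bytes_per_param: float, overhead: float = 1.2) -> int:
    """Ceiling of total training footprint over per-GPU VRAM.

    The overhead factor is an assumed allowance for activations and
    framework/fragmentation overhead, not a measured constant.
    """
    total_gb = params_b * bytes_per_param * overhead
    return math.ceil(total_gb / VRAM_GB)

for size_b in (7, 13, 70):
    full = gpus_needed(size_b, 16.0)   # BF16 weights+grads plus FP32 AdamW states
    qlora = gpus_needed(size_b, 0.6)   # ~4-bit base weights plus small LoRA states
    print(f"{size_b}B: full fine-tune ~ {full} GPUs, QLoRA ~ {qlora} GPU(s)")
```

With these assumed constants the sketch reproduces the table above (2/4/17 GPUs for full fine-tuning, 1 GPU for QLoRA at every size).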

Energy Efficiency

Estimated tokens/second per Watt for popular models

Model           t/s per Watt   Precision
Mistral 7B      0.78           FP8
Qwen 2.5 7B     0.75           FP8
Llama 3.1 8B    0.71           FP8
Llama 3.1 70B   0.08           FP8
Qwen 2.5 72B    0.08           FP8
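
These figures track a memory-bound decode estimate: at FP8 (~1 byte per parameter) each generated token re-reads the weights once, so tokens/s is roughly bandwidth divided by model bytes, and dividing by the 350 W TDP gives t/s/W. A sketch, where the 0.95 bandwidth-efficiency factor and single-stream decode are assumptions:

```python
TDP_W = 350      # H100 PCIe board power
BW_GBPS = 2000   # rated memory bandwidth

def tokens_per_sec_per_watt(params_b: float, bytes_per_param: float = 1.0,
                            bw_efficiency: float = 0.95) -> float:
    """Memory-bound decode: every token re-reads the weights once (KV-cache traffic ignored)."""
    weight_gb = params_b * bytes_per_param          # FP8 ~ 1 byte/param (assumption)
    tokens_per_sec = bw_efficiency * BW_GBPS / weight_gb
    return tokens_per_sec / TDP_W

print(f"Mistral 7B (FP8):    {tokens_per_sec_per_watt(7):.2f} t/s/W")   # ~0.78
print(f"Llama 3.1 70B (FP8): {tokens_per_sec_per_watt(70):.2f} t/s/W")  # ~0.08
```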

Similar GPUs

GPU        VRAM     BF16 TFLOPS   BW (GB/s)   From
H100 SXM   80 GB    990           3350        $1.89/hr
H100 NVL   94 GB    835           3938        $3.09/hr
H20        96 GB    148           4000        $0.99/hr
GH200      96 GB    990           4000        $2.99/hr
H200 SXM   141 GB   990           4800        $2.69/hr

Methodology Note

Performance estimates for the H100 PCIe are based on InferenceBench's roofline performance model with CUDA kernel-level optimizations, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (80 GB HBM2e available), KV-cache allocation, and activation memory. Throughput predictions use the H100 PCIe's rated 2000 GB/s memory bandwidth and 756 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Hopper). See our full methodology.
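
As a hedged illustration of the roofline idea (not InferenceBench's exact implementation): for each decode step, take the larger of the memory-transfer time and the compute time as the bottleneck, then apply an architecture correction factor. The 0.95 correction here is an assumed stand-in for the per-architecture empirical factors:

```python
def roofline_tokens_per_sec(params_b: float, bytes_per_param: float,
                            batch: int = 1,
                            bw_gbps: float = 2000.0, tflops: float = 756.0,
                            correction: float = 0.95) -> float:
    """Per-step decode throughput bounded by min(memory ceiling, compute ceiling)."""
    bytes_per_step = params_b * 1e9 * bytes_per_param   # weights read once per step
    flops_per_step = 2 * params_b * 1e9 * batch         # ~2 FLOPs per param per token
    t_mem = bytes_per_step / (bw_gbps * 1e9)
    t_compute = flops_per_step / (tflops * 1e12)
    return correction * batch / max(t_mem, t_compute)

# Llama 3.1 8B in BF16 (2 bytes/param), single stream: memory-bound
print(f"{roofline_tokens_per_sec(8, 2):.0f} tokens/s")  # ~119
```

At batch 1 the memory term dominates for virtually any model, which is why the FP8 efficiency figures above track bandwidth rather than TFLOPS.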

Frequently Asked Questions

How many AI models can run on H100 PCIe?

The H100 PCIe can run 278 AI models from our database within a single node. Compatible models span a wide range of parameter counts, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x H100 PCIe.
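
A rough single-node fit check, assuming weights dominate the footprint and reserving ~20% of VRAM for KV-cache and runtime overhead (both assumptions):

```python
import math

VRAM_GB, MAX_GPUS = 80, 8
BYTES = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def gpus_to_serve(params_b: float, precision: str, usable: float = 0.8) -> int | None:
    """Smallest GPU count whose pooled usable VRAM holds the weights; None if it exceeds one node."""
    need_gb = params_b * BYTES[precision]
    n = math.ceil(need_gb / (VRAM_GB * usable))
    return n if n <= MAX_GPUS else None

print(gpus_to_serve(70, "BF16"))  # 140 GB of weights -> 3 GPUs
print(gpus_to_serve(8, "FP8"))    # 8 GB of weights  -> 1 GPU
```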

What is the H100 PCIe inference throughput?

The H100 PCIe delivers 756 BF16 TFLOPS and 1513 FP8 TFLOPS with 2000 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does H100 PCIe cost per hour?

The H100 PCIe is available starting from $1.89/hour via tensordock. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
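
To turn these hourly rates into a per-token cost, divide price by throughput. A small sketch; the 270 tokens/s figure is a hypothetical placeholder, not a measured number:

```python
def usd_per_million_tokens(price_per_hr: float, tokens_per_sec: float) -> float:
    """Convert an hourly GPU rate into $ per 1M generated tokens."""
    return price_per_hr / (tokens_per_sec * 3600) * 1e6

# Hypothetical: Mistral 7B at FP8, ~270 tokens/s, on a $1.89/hr spot H100 PCIe
print(f"${usd_per_million_tokens(1.89, 270):.2f} per 1M tokens")  # ~$1.94
```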