
H20

NVIDIA · Hopper · 96 GB HBM3 · 500W TDP

VRAM: 96 GB
BF16 TFLOPS: 148
Bandwidth: 4000 GB/s
From: $0.99/hr


Spec Sheet

VRAM: 96 GB HBM3
Memory Bandwidth: 4000 GB/s
BF16 TFLOPS: 148
FP16 TFLOPS: 148
FP8 TFLOPS: 296
INT8 TOPS: 296
TDP: 500W
Interconnect: PCIe
Max per Node: 8
PCIe: Gen5
CUDA Compute Capability: 9.0
Tensor Cores: Yes

Pricing by Provider

Provider     On-Demand   Reserved   Spot       Badge
tensordock   $1.39/hr    -          $0.99/hr   Cheapest
vast_ai      $1.50/hr    -          $1.10/hr   -

Compatible Models (278)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA

Model Size   Full Fine-Tune   QLoRA
7B model     2 GPUs           1 GPU
13B model    3 GPUs           1 GPU
70B model    14 GPUs          1 GPU
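
These counts can be reproduced with a back-of-the-envelope memory model. Below is a minimal sketch, assuming roughly 16 bytes per parameter for BF16 + AdamW full fine-tuning (weights, gradients, FP32 optimizer states and master copy), roughly 0.6 bytes per parameter for a 4-bit QLoRA base with small adapters, and a 20% headroom factor for activations; the exact factors behind the table above may differ.

```python
import math

GPU_VRAM_GB = 96  # H20 VRAM from the spec sheet above

def gpus_needed(params_b: float, bytes_per_param: float, overhead: float = 1.2) -> int:
    """Rough GPU count: model state in GB, padded for activations/fragmentation."""
    state_gb = params_b * bytes_per_param * overhead
    return max(1, math.ceil(state_gb / GPU_VRAM_GB))

for size in (7, 13, 70):
    # Full fine-tune, BF16 + AdamW: ~16 bytes/param
    # (2 B weights + 2 B grads + 8 B optimizer states + 4 B FP32 master copy)
    full = gpus_needed(size, 16)
    # QLoRA: ~0.6 bytes/param (4-bit base weights plus small BF16 adapters)
    qlora = gpus_needed(size, 0.6)
    print(f"{size}B: full fine-tune ~{full} GPUs, QLoRA ~{qlora} GPU(s)")
```

With these assumptions the sketch yields 2, 3 and 14 GPUs for full fine-tuning of 7B, 13B and 70B models, and a single GPU for QLoRA in all three cases, matching the table.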

Energy Efficiency

Estimated tokens/second per Watt for popular models

Mistral 7B: 1.10 t/s/W (FP8)
Qwen 2.5 7B: 1.05 t/s/W (FP8)
Llama 3.1 8B: 1.00 t/s/W (FP8)
Llama 3.1 70B: 0.11 t/s/W (FP8)
Qwen 2.5 72B: 0.11 t/s/W (FP8)
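
The efficiency figures are simply aggregate throughput divided by the card's rated TDP. A minimal sketch; the 550 tokens/s figure below is back-calculated from the Mistral 7B entry, not a separately measured number.

```python
H20_TDP_W = 500  # from the spec sheet

def tokens_per_sec_per_watt(throughput_tps: float, tdp_w: float = H20_TDP_W) -> float:
    """Energy efficiency: aggregate decode throughput divided by rated TDP."""
    return throughput_tps / tdp_w

# Back-calculating from the Mistral 7B entry: 1.10 t/s/W * 500 W ~= 550 tokens/s,
# so a 7B model needs roughly that aggregate FP8 throughput to hit the figure.
print(tokens_per_sec_per_watt(550))  # -> 1.1
```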

Similar GPUs

GPU          VRAM     BF16 TFLOPS   BW (GB/s)   From
GH200        96 GB    990           4000        $2.99/hr
H100 NVL     94 GB    835           3938        $3.09/hr
H100 SXM     80 GB    990           3350        $1.89/hr
H100 PCIe    80 GB    756           2000        $1.89/hr
H200 SXM     141 GB   990           4800        $2.69/hr

Methodology Note

Performance estimates for the H20 are based on InferenceBench's roofline performance model with CUDA kernel-level optimization, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (96 GB HBM3 available), KV-cache allocation, and activation memory. Throughput predictions use the H20's rated 4000 GB/s memory bandwidth and 148 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Hopper). See our full methodology.
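
For intuition, a stripped-down version of such a roofline ceiling fits in a few lines. This is a simplified sketch, assuming decode is dominated by streaming the model weights once per step and roughly 2 FLOPs per parameter per generated token; it omits KV-cache traffic, kernel-level effects, and the empirical correction factors mentioned above.

```python
# Simplified decode roofline, assuming weight reads dominate memory traffic
# (KV-cache traffic ignored) and ~2 FLOPs per parameter per generated token.
# Kernel effects and per-architecture correction factors are not modeled.

H20_BW_GBS = 4000      # rated memory bandwidth, GB/s
H20_FP8_TFLOPS = 296   # rated FP8 compute

def decode_tokens_per_sec(params_b: float, bytes_per_param: float, batch: int,
                          bw_gbs: float = H20_BW_GBS,
                          tflops: float = H20_FP8_TFLOPS) -> float:
    weight_gb = params_b * bytes_per_param                   # weights streamed per step
    t_mem = weight_gb / bw_gbs                               # memory-bound step time, s
    t_cmp = 2 * params_b * 1e9 * batch / (tflops * 1e12)     # compute-bound step time, s
    return batch / max(t_mem, t_cmp)                         # roofline: slower ceiling wins

# Llama 3.1 70B in FP8 (1 byte/param), batch 1: ~4000/70 ~= 57 tokens/s ceiling
print(round(decode_tokens_per_sec(70, 1.0, 1)))
```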

Frequently Asked Questions

How many AI models can run on H20?

The H20 can run 278 AI models from our database within a single node. Compatible models span a wide range of parameter sizes, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x H20.
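
Whether a given model fits is essentially a VRAM budget check: weights at the chosen precision plus KV-cache and runtime overhead must fit in the pooled 96 GB per GPU. A rough sketch with illustrative numbers (the KV-cache and overhead figures are assumptions, not values from this page):

```python
GPU_VRAM_GB = 96  # per H20

def fits_on(params_b: float, bytes_per_param: float, kv_cache_gb: float,
            num_gpus: int, overhead_gb_per_gpu: float = 4.0) -> bool:
    """Rough single-node fit check: weights + KV cache + runtime overhead vs pooled VRAM."""
    need_gb = params_b * bytes_per_param + kv_cache_gb + overhead_gb_per_gpu * num_gpus
    return need_gb <= GPU_VRAM_GB * num_gpus

print(fits_on(8, 2.0, 8, 1))    # Llama 3.1 8B in BF16 + ~8 GB KV cache on 1x H20 -> True
print(fits_on(70, 2.0, 20, 1))  # 70B in BF16 does not fit on a single H20 -> False
print(fits_on(70, 2.0, 20, 2))  # but does fit across 2x H20 -> True
```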

What is the H20 inference throughput?

The H20 delivers 148 BF16 TFLOPS and 296 FP8 TFLOPS with 4000 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does H20 cost per hour?

The H20 is available starting from $0.99/hour via tensordock. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
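
Hourly price only becomes comparable across GPUs once converted into cost per token. A small sketch; the 550 tokens/s aggregate throughput in the example is the figure back-calculated from the energy-efficiency table above, used here only for illustration.

```python
def cost_per_million_tokens(price_per_hr: float, tokens_per_sec: float) -> float:
    """Convert an hourly GPU price into a cost per million generated tokens."""
    return price_per_hr / (tokens_per_sec * 3600) * 1e6

# Spot H20 at $0.99/hr serving ~550 tokens/s aggregate -> about $0.50 per 1M tokens.
print(round(cost_per_million_tokens(0.99, 550), 2))
```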