
H100 NVL 94GB (per GPU pair)

NVIDIA · Hopper · 188 GB HBM3 · 800W TDP

VRAM: 188 GB
BF16 TFLOPS: 1670
Bandwidth: 7876 GB/s
From: $5.49/hr


Spec Sheet

VRAM: 188 GB HBM3
Memory Bandwidth: 7876 GB/s
BF16 TFLOPS: 1670
FP16 TFLOPS: 1670
FP8 TFLOPS: 3341
INT8 TOPS: 3341
TDP: 800W
Interconnect: NVLink
NVLink Bandwidth: 600 GB/s
Max per Node: 4
PCIe: Gen5
CUDA Compute Capability: 9.0
Tensor Cores: Yes

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
coreweave | $7.49/hr | $5.49/hr | - | Cheapest
runpod | $7.89/hr | - | - | -

Compatible Models (278)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and QLoRA; a rough sizing sketch follows the table.

Model Size | Full Fine-Tune | QLoRA
7B model | 1 GPU | 1 GPU
13B model | 2 GPUs | 1 GPU
70B model | 8 GPUs | 1 GPU
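
For a rough sense of where these counts come from, here is a back-of-the-envelope sketch. The bytes-per-parameter constants (about 16 bytes/param for full AdamW fine-tuning in BF16: 2 for weights, 2 for gradients, 12 for optimizer state; about 0.6 bytes/param for 4-bit QLoRA weights plus adapter overhead) and the 20% activation headroom are common rules of thumb, not InferenceBench's exact model. One "GPU" here means one H100 NVL unit (the 188 GB pair), matching the spec sheet above.

```python
import math

UNIT_VRAM_GB = 188   # one H100 NVL unit (GPU pair), as listed above

# Rule-of-thumb memory costs (assumptions, not InferenceBench's exact model):
BYTES_FULL_FT = 16.0  # BF16 weights (2) + grads (2) + AdamW state (12)
BYTES_QLORA = 0.6     # 4-bit weights + LoRA adapter/optimizer overhead
HEADROOM = 1.2        # ~20% extra for activations, buffers, fragmentation

def units_needed(params_billion: float, bytes_per_param: float) -> int:
    need_gb = params_billion * bytes_per_param * HEADROOM
    return max(1, math.ceil(need_gb / UNIT_VRAM_GB))

for size in (7, 13, 70):
    full = units_needed(size, BYTES_FULL_FT)
    qlora = units_needed(size, BYTES_QLORA)
    print(f"{size}B model: full fine-tune ~{full} GPU(s), QLoRA ~{qlora} GPU(s)")
```

With these constants the sketch reproduces the table (1/2/8 units for full fine-tuning, 1 unit for QLoRA), but real requirements shift with sequence length, batch size, and sharding strategy.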

Energy Efficiency

Estimated tokens/second per Watt for popular models; the arithmetic behind the unit is unpacked below the list.

Mistral 7B: 1.35 t/s/W (FP8)
Qwen 2.5 7B: 1.30 t/s/W (FP8)
Llama 3.1 8B: 1.23 t/s/W (FP8)
Llama 3.1 70B: 0.14 t/s/W (FP8)
Qwen 2.5 72B: 0.14 t/s/W (FP8)
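
The unit is straightforward to unpack: tokens/second per Watt is aggregate decode throughput divided by board power, so each figure above implies a throughput at the rated 800W TDP. Sustained wall power is typically below TDP, so the implied throughputs below are approximate:

```python
TDP_W = 800  # rated board power for the H100 NVL pair

# The table values are throughput / power, so at the rated TDP they
# imply these aggregate decode throughputs:
table = {
    "Mistral 7B": 1.35,
    "Qwen 2.5 7B": 1.30,
    "Llama 3.1 8B": 1.23,
    "Llama 3.1 70B": 0.14,
    "Qwen 2.5 72B": 0.14,
}
for model, tsw in table.items():
    print(f"{model}: {tsw} t/s/W -> ~{tsw * TDP_W:.0f} tokens/s at {TDP_W} W")
```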

Similar GPUs

GPU | VRAM | BF16 TFLOPS | BW (GB/s) | From
H200 SXM | 141 GB | 990 | 4800 | $2.69/hr
H20 | 96 GB | 148 | 4000 | $0.99/hr
GH200 | 96 GB | 990 | 4000 | $2.99/hr
H100 NVL | 94 GB | 835 | 3938 | $3.09/hr
H100 SXM | 80 GB | 990 | 3350 | $1.89/hr

Methodology Note

Performance estimates for the H100 NVL 94GB (per GPU pair) are based on InferenceBench's roofline performance model with CUDA kernel-level optimization, including FlashAttention v2 and PagedAttention. Memory calculations account for model weights (188 GB HBM3 available), KV-cache allocation, and activation memory. Throughput predictions use the rated 7876 GB/s memory bandwidth and 1670 BF16 TFLOPS compute capacity as roofline ceilings, with empirical correction factors per GPU architecture (Hopper). See our full methodology.
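
As a minimal illustration of the roofline logic described above (not InferenceBench's full model, which adds kernel-level and per-architecture corrections): each decode step must at least stream the model weights once from HBM and perform roughly 2 FLOPs per parameter per generated token, so step time is bounded by the slower of the memory and compute ceilings. The 0.7 efficiency factor and the omission of KV-cache traffic are simplifying assumptions.

```python
BW_BYTES_S = 7876e9        # rated memory bandwidth
PEAK_BF16_FLOPS = 1670e12  # rated BF16 compute

def roofline_tokens_per_s(params: float, bytes_per_param: float = 2.0,
                          batch: int = 1, efficiency: float = 0.7) -> float:
    """Per decode step: stream all weights once (memory ceiling) and do
    ~2 FLOPs per parameter per generated token (compute ceiling)."""
    t_mem = params * bytes_per_param / BW_BYTES_S
    t_compute = batch * 2 * params / PEAK_BF16_FLOPS
    step_time = max(t_mem, t_compute) / efficiency
    return batch / step_time

print(f"8B BF16, batch 1:  ~{roofline_tokens_per_s(8e9):.0f} tok/s")
print(f"8B BF16, batch 64: ~{roofline_tokens_per_s(8e9, batch=64):,.0f} tok/s")
```

Batching raises throughput until the compute ceiling takes over, which is why serving throughput depends so strongly on batch size.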

Frequently Asked Questions

How many AI models can run on H100 NVL 94GB (per GPU pair)?

The H100 NVL 94GB (per GPU pair) can run 278 AI models from our database within a single node. Compatible models span a wide range of parameter sizes depending on the quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 4x H100 NVL 94GB (per GPU pair).
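
The underlying fit check is simple in principle: weights at the chosen precision, plus KV cache and a safety reserve, must fit in the 188 GB of available VRAM. A rough sketch, with the 10 GB KV-cache allowance and 10% reserve as illustrative assumptions rather than InferenceBench's actual numbers:

```python
def fits_single_unit(params_billion: float, bytes_per_param: float,
                     vram_gb: float = 188, kv_cache_gb: float = 10,
                     reserve_frac: float = 0.10) -> bool:
    """Weights at the chosen precision + KV cache + reserve must fit in VRAM."""
    weights_gb = params_billion * bytes_per_param
    usable_gb = vram_gb * (1 - reserve_frac)
    return weights_gb + kv_cache_gb <= usable_gb

print(fits_single_unit(70, 2.0))   # 70B in BF16: 140 + 10 <= 169.2 -> True
print(fits_single_unit(70, 0.5))   # 70B in INT4:  35 + 10 <= 169.2 -> True
print(fits_single_unit(180, 2.0))  # 180B in BF16: 360 GB -> False
```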

What is the H100 NVL 94GB (per GPU pair) inference throughput?

The H100 NVL 94GB (per GPU pair) delivers 1670 BF16 TFLOPS and 3341 FP8 TFLOPS with 7876 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does H100 NVL 94GB (per GPU pair) cost per hour?

The H100 NVL 94GB (per GPU pair) is available starting from $5.49/hour via coreweave. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.
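
To translate an hourly rate into a per-token cost, divide price by throughput. A small sketch; the 1000 tokens/s figure is a placeholder, not a measured throughput, so substitute an estimate from the calculator:

```python
def usd_per_million_tokens(usd_per_hour: float, tokens_per_s: float) -> float:
    return usd_per_hour / (tokens_per_s * 3600) * 1e6

# Reserved coreweave rate with an assumed sustained 1000 tokens/s
# (placeholder throughput, not a measured figure):
print(f"${usd_per_million_tokens(5.49, 1000):.2f} per million tokens")
# -> $1.53 per million tokens
```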