RunPod offers 10 GPU configurations, with on-demand prices starting at $0.58/hour. Against the $8.72/hour market average across all tracked cloud GPU providers, RunPod's entry-level pricing is 93% below average. With autoscaling support, it is well suited to variable inference workloads.
Provider Overview

| Attribute | Value |
|---|---|
| Type | Cloud |
| Billing | Per second |
| Egress | Free |
| SLA Uptime | 99.9% |
| Autoscaling | Yes |
| Cold Start | 200 ms |
| Storage | $0.10/GB/mo |
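Per-second billing means partial hours are prorated exactly, and egress adds nothing. A back-of-the-envelope cost from the rates above can be sketched as follows (the helper function is illustrative, not a RunPod API):

```python
# Cost sketch using the figures above: per-second compute billing,
# $0.10/GB/mo storage, free egress. Rates come from this page; the
# function itself is a hypothetical helper, not part of any RunPod SDK.

def estimate_cost(hourly_rate: float, seconds: int,
                  storage_gb: float = 0.0, storage_months: float = 0.0) -> float:
    """Total USD: prorated compute plus flat-rate storage."""
    compute = hourly_rate * seconds / 3600  # per-second proration
    storage = 0.10 * storage_gb * storage_months
    return compute + storage

# e.g. 90 minutes on an nvidia-a4000 at $0.58/hr plus 50 GB stored for a month:
total = estimate_cost(0.58, seconds=90 * 60, storage_gb=50, storage_months=1)
print(round(total, 2))  # 0.58 * 1.5 + 5.00 = 5.87
```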
GPU Offerings (10)

| GPU | $/hr | Tier | Availability | Regions |
|---|---|---|---|---|
| nvidia-a4000 | $0.58 | On-demand | Medium | us-east |
| nvidia-l4 | $0.69 | On-demand | High | us-east, us-west |
| nvidia-rtx-4090 | $1.10 | On-demand | High | us-east, us-west, eu-west |
| nvidia-a6000 | $1.22 | On-demand | High | us-east, us-west |
| nvidia-rtx-5090 | $1.58 | On-demand | Low | us-east |
| nvidia-l40s | $1.90 | On-demand | High | us-east, us-west |
| nvidia-a100-80gb-sxm | $2.72 | On-demand | High | us-east, us-west, eu-west |
| nvidia-h100-sxm | $4.18 | On-demand | High | us-east, us-west, eu-west |
| nvidia-h200 | $5.58 | On-demand | High | us-east, us-west, eu-west |
| nvidia-b200 | $8.64 | On-demand | Medium | us-east, eu-west |
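Since availability differs by region, a quick way to read the table is to filter offerings by region and take the cheapest. A minimal sketch using a subset of the rows above (tier and availability omitted for brevity):

```python
# A few (name, $/hr, regions) rows copied from the offerings table above.
offerings = [
    ("nvidia-a4000", 0.58, {"us-east"}),
    ("nvidia-l4", 0.69, {"us-east", "us-west"}),
    ("nvidia-rtx-4090", 1.10, {"us-east", "us-west", "eu-west"}),
    ("nvidia-a100-80gb-sxm", 2.72, {"us-east", "us-west", "eu-west"}),
    ("nvidia-h100-sxm", 4.18, {"us-east", "us-west", "eu-west"}),
]

def cheapest_in(region: str):
    """Cheapest (price, name) offered in the given region, or None."""
    candidates = [(price, name) for name, price, regions in offerings
                  if region in regions]
    return min(candidates, default=None)

print(cheapest_in("eu-west"))  # (1.1, 'nvidia-rtx-4090')
```

The cheapest cards (a4000, l4) are US-only here, so the eu-west floor is the RTX 4090 at $1.10/hr.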
Pricing History

| GPU (via RunPod) | Overall change | Period | Price at period end |
|---|---|---|---|
| nvidia-h100-sxm | -24.0% | 2024-01-01 to 2025-03-01 | $4.18/hr |
| nvidia-a100-80gb-sxm | -30.1% | 2024-01-01 to 2025-03-01 | $2.72/hr |
| nvidia-l40s | -32.0% | 2024-01-01 to 2025-03-01 | $1.49/hr |
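If the listed rate is the price at the end of the window, the implied start-of-window price follows directly from the overall percentage change. A small sketch (an assumption about how the history figures relate, not a stated InferenceBench formula):

```python
# price_end = price_start * (1 + change), so price_start = price_end / (1 + change).
def implied_start(price_end: float, overall_change_pct: float) -> float:
    """Back out the start-of-window price from the end price and % change."""
    return price_end / (1 + overall_change_pct / 100)

# Applied to the history rows above (assuming the $/hr shown is the end price):
print(round(implied_start(4.18, -24.0), 2))  # 5.5  -> H100 ~$5.50/hr in Jan 2024
print(round(implied_start(2.72, -30.1), 2))  # 3.89
print(round(implied_start(1.49, -32.0), 2))  # 2.19
```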
Reputation Details

| Metric | Score |
|---|---|
| Pricing | 50 |
| Reliability | 90 |
| Features | 75 |
Highlights
- 99.9%+ SLA
- Autoscaling supported
- Fast cold start
Compare with Others
| Provider | Overall | Pricing | Reliability | Features | GPUs |
|---|---|---|---|---|---|
| RunPod | 70 | 50 | 90 | 75 | 10 |
| Amazon Web Services | 67 | 50 | 90 | 65 | 13 |
| Google Cloud Platform | 67 | 50 | 90 | 65 | 10 |
| Microsoft Azure | 67 | 50 | 90 | 65 | 9 |
| Lambda Labs | 62 | 50 | 90 | 50 | 8 |
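Pricing and Reliability subscores are identical across the compared providers, so RunPod's overall lead comes entirely from Features. A quick check over the table rows (copied from above) makes that visible:

```python
# (provider, overall, pricing, reliability, features, gpu_count) rows
# from the comparison table above.
providers = [
    ("RunPod", 70, 50, 90, 75, 10),
    ("Amazon Web Services", 67, 50, 90, 65, 13),
    ("Google Cloud Platform", 67, 50, 90, 65, 10),
    ("Microsoft Azure", 67, 50, 90, 65, 9),
    ("Lambda Labs", 62, 50, 90, 50, 8),
]

# Sort by Features score, breaking ties by GPU count.
by_features = sorted(providers, key=lambda p: (p[4], p[5]), reverse=True)
print(by_features[0][0])  # RunPod
```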
Embed Badge
```html
<a href="https://inferencebench.io/providers/runpod/"><img src="data:image/svg+xml,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22221%22%20height%3D%2220%22%20role%3D%22img%22%20aria-label%3D%22InferenceBench%20Verified%3A%20RunPod%22%3E%0A%20%20%3Ctitle%3EInferenceBench%20Verified%3A%20RunPod%3C%2Ftitle%3E%0A%20%20%3ClinearGradient%20id%3D%22s%22%20x2%3D%220%22%20y2%3D%22100%25%22%3E%0A%20%20%20%20%3Cstop%20offset%3D%220%22%20stop-color%3D%22%23bbb%22%20stop-opacity%3D%22.1%22%2F%3E%0A%20%20%20%20%3Cstop%20offset%3D%221%22%20stop-opacity%3D%22.1%22%2F%3E%0A%20%20%3C%2FlinearGradient%3E%0A%20%20%3CclipPath%20id%3D%22r%22%3E%0A%20%20%20%20%3Crect%20width%3D%22221%22%20height%3D%2220%22%20rx%3D%223%22%20fill%3D%22%23fff%22%2F%3E%0A%20%20%3C%2FclipPath%3E%0A%20%20%3Cg%20clip-path%3D%22url(%23r)%22%3E%0A%20%20%20%20%3Crect%20width%3D%22166%22%20height%3D%2220%22%20fill%3D%22%23333%22%2F%3E%0A%20%20%20%20%3Crect%20x%3D%22166%22%20width%3D%2255%22%20height%3D%2220%22%20fill%3D%22%238b5cf6%22%2F%3E%0A%20%20%20%20%3Crect%20width%3D%22221%22%20height%3D%2220%22%20fill%3D%22url(%23s)%22%2F%3E%0A%20%20%3C%2Fg%3E%0A%20%20%3Cg%20fill%3D%22%23fff%22%20text-anchor%3D%22middle%22%20font-family%3D%22Verdana%2CGeneva%2CDejaVu%20Sans%2Csans-serif%22%20text-rendering%3D%22geometricPrecision%22%20font-size%3D%2211%22%3E%0A%20%20%20%20%3Ctext%20aria-hidden%3D%22true%22%20x%3D%2283%22%20y%3D%2214%22%20fill%3D%22%23010101%22%20fill-opacity%3D%22.3%22%3EInferenceBench%20Verified%3C%2Ftext%3E%0A%20%20%20%20%3Ctext%20x%3D%2283%22%20y%3D%2213%22%3EInferenceBench%20Verified%3C%2Ftext%3E%0A%20%20%20%20%3Ctext%20aria-hidden%3D%22true%22%20x%3D%22193.5%22%20y%3D%2214%22%20fill%3D%22%23010101%22%20fill-opacity%3D%22.3%22%3ERunPod%3C%2Ftext%3E%0A%20%20%20%20%3Ctext%20x%3D%22193.5%22%20y%3D%2213%22%3ERunPod%3C%2Ftext%3E%0A%20%20%3C%2Fg%3E%0A%3C%2Fsvg%3E" alt="InferenceBench Verified — RunPod" /></a>
```