
B300

NVIDIA · Blackwell · 288 GB HBM3e · 1200W TDP

VRAM: 288 GB
BF16 TFLOPS: 2800
Bandwidth: 12000 GB/s
From: $0.00/hr

Calculate ROI with this GPU →

Spec Sheet

VRAM: 288 GB HBM3e
Memory Bandwidth: 12000 GB/s
BF16 TFLOPS: 2800
FP16 TFLOPS: 2800
FP8 TFLOPS: 5600
INT8 TOPS: 5600
TDP: 1200W
Interconnect: NVLink
NVLink Bandwidth: 1800 GB/s
Max per Node: 8
PCIe: Gen6
CUDA Compute Capability: 10
Tensor Cores: Yes

Pricing by Provider

Provider | On-Demand | Reserved | Spot | Badge
nvidia | $0.00/hr | - | - | Cheapest

Compatible Models (280)

Training Capabilities

Estimated GPU count for full fine-tuning (AdamW, BF16) and for QLoRA; a rough sizing sketch follows the table.

Model Size | Full Fine-Tune | QLoRA
7B model | 1 GPU | 1 GPU
13B model | 1 GPU | 1 GPU
70B model | 5 GPUs | 1 GPU
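
As a rough check on these numbers, the sketch below reproduces the table from simple per-parameter memory rules of thumb. The 16 bytes/parameter for AdamW in BF16, the 0.7 bytes/parameter for QLoRA, and the overhead factors are illustrative assumptions, not InferenceBench's exact sizing model.

```python
import math

B300_VRAM_GB = 288

def gpus_needed(params_billion: float, mode: str = "full") -> int:
    if mode == "full":
        # BF16 weights (2 B) + BF16 grads (2 B) + FP32 master weights,
        # momentum and variance for AdamW (12 B) ~= 16 bytes per parameter.
        bytes_per_param, overhead = 16.0, 1.25  # overhead: activations, buffers
    else:  # "qlora"
        # 4-bit base weights plus a small LoRA adapter and its optimizer state.
        bytes_per_param, overhead = 0.7, 1.2
    total_gb = params_billion * bytes_per_param * overhead
    return math.ceil(total_gb / B300_VRAM_GB)

for size in (7, 13, 70):
    print(f"{size}B: full fine-tune {gpus_needed(size, 'full')} GPU(s), "
          f"QLoRA {gpus_needed(size, 'qlora')} GPU(s)")
```

With these assumptions the estimates match the table above (1/1, 1/1, and 5/1 GPUs).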

Energy Efficiency

Estimated tokens/second per Watt for popular models; a worked example follows the list.

Mistral 7B: 1.37 t/s/W (FP8)
Qwen 2.5 7B: 1.32 t/s/W (FP8)
Llama 3.1 8B: 1.25 t/s/W (FP8)
DeepSeek V3: 0.27 t/s/W (FP8)
Llama 3.1 70B: 0.14 t/s/W (FP8)
Qwen 2.5 72B: 0.14 t/s/W (FP8)

Similar GPUs

GPU | VRAM | BF16 TFLOPS | Bandwidth (GB/s) | From
B200 NVL (pair) | 360 GB | 4500 | 16000 | $10.50/hr
B100 SXM | 192 GB | 1750 | 8000 | $4.50/hr
GB200 NVL72 (per GPU) | 192 GB | 2250 | 8000 | $6.50/hr
GB300 NVL72 (per GPU) | 192 GB | 2500 | 8000 | $7.50/hr
B200 SXM | 180 GB | 2250 | 8000 | $4.49/hr

Methodology Note

Performance estimates for the B300 are based on InferenceBench's roofline performance model, assuming CUDA kernel-level optimizations such as FlashAttention v2 and PagedAttention. Memory calculations account for model weights (288 GB HBM3e available), KV-cache allocation, and activation memory. Throughput predictions use the B300's rated 12000 GB/s memory bandwidth and 2800 BF16 TFLOPS as roofline ceilings, with empirical correction factors per GPU architecture (Blackwell). See our full methodology.
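
For illustration, a minimal roofline sketch along these lines is shown below. The 0.7 correction factor stands in for the empirical per-architecture corrections mentioned above and is an assumption, not InferenceBench's published value.

```python
def roofline_tokens_per_sec(flops_per_token: float,
                            bytes_per_token: float,
                            peak_tflops: float = 2800.0,    # BF16 compute ceiling
                            peak_bw_gbps: float = 12000.0,  # HBM3e bandwidth ceiling
                            correction: float = 0.7) -> float:
    """Throughput is capped by whichever roof (compute or memory) is hit first."""
    compute_cap = peak_tflops * 1e12 / flops_per_token
    memory_cap = peak_bw_gbps * 1e9 / bytes_per_token
    return correction * min(compute_cap, memory_cap)

# Example: batch-1 decode of an 8B-parameter model in BF16 -> roughly
# 2 FLOPs per parameter per token, with all 16 GB of weights streamed per token.
print(f"{roofline_tokens_per_sec(2 * 8e9, 16e9):.0f} tok/s")
```

At batch size 1 the memory roof dominates; larger batches raise arithmetic intensity until the compute roof takes over.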

Frequently Asked Questions

How many AI models can run on B300?

The B300 can run 280 AI models from our database within a single node. Compatible models span a wide range of parameter sizes, depending on quantization precision (BF16, FP8, INT4). Smaller models fit on a single GPU, while larger models may require multi-GPU setups of up to 8x B300.
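
A hedged sketch of the weights-only fit check implied here, assuming 2/1/0.5 bytes per parameter for BF16/FP8/INT4 and approximate parameter counts; KV-cache and activation headroom (which the full methodology does account for) are ignored for simplicity.

```python
BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}
VRAM_GB, MAX_GPUS = 288, 8  # single B300 node

def fit(params_billion: float, precision: str) -> str:
    weight_gb = params_billion * BYTES_PER_PARAM[precision]
    for n in range(1, MAX_GPUS + 1):
        if weight_gb <= n * VRAM_GB:
            return f"fits on {n}x B300"
    return "exceeds a single 8x B300 node"

# Approximate parameter counts for three models from the list above.
for model, size in [("Llama 3.1 8B", 8), ("Llama 3.1 70B", 70), ("DeepSeek V3", 671)]:
    for precision in ("BF16", "FP8", "INT4"):
        print(f"{model} ({precision}): {fit(size, precision)}")
```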

What is the B300 inference throughput?

The B300 delivers 2800 BF16 TFLOPS and 5600 FP8 TFLOPS with 12000 GB/s memory bandwidth. Actual inference throughput (tokens/sec) depends on the model size, precision, and batch size. Use our calculator for model-specific throughput estimates.

How much does B300 cost per hour?

The B300 is available starting from $0.00/hour via nvidia. Prices vary by provider and pricing tier (on-demand, reserved, spot). Compare pricing across all providers in the table above.