Japanese StableLM 70B

Stability AI · dense · 70B parameters · 8,192 context

Quality
50.0

Parameters

70B

Context Window

8K tokens

Architecture

Dense

Best GPU

H200 SXM

Intelligence Brief

Japanese StableLM 70B is a 70B-parameter dense model from Stability AI, featuring Grouped Query Attention (GQA) across 80 layers with a hidden dimension of 8,192. With an 8,192-token context window, it supports multilingual workloads. For self-hosted inference, the H200 SXM delivers optimal throughput at $2553/month.

Architecture Details

Type: Dense
Total Parameters: 70B
Active Parameters: 70B
Layers: 80
Hidden Dimension: 8,192
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Vocab Size: 65,536
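
These shape figures are internally consistent; a quick check in Python, using only the numbers from the table above:

```python
# Consistency check on the Architecture Details table.
hidden_dim = 8192
attn_heads = 64
kv_heads = 8

head_dim = hidden_dim // attn_heads     # 8192 / 64 = 128, as listed
gqa_group = attn_heads // kv_heads      # 64 / 8 = 8 query heads share each KV head

print(f"head_dim={head_dim}, query heads per KV head={gqa_group}")
```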

Memory Requirements

BF16 Weights

140.0 GB

FP8 Weights

70.0 GB

INT4 Weights

35.0 GB

KV-Cache per Token: 655,360 bytes
Activation Estimate: 3.00 GB
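
These figures are straightforward arithmetic: weight memory is parameter count times bytes per parameter, and the per-token KV-cache follows from the GQA shape in Architecture Details. A minimal sketch (the 4-bytes-per-element cache assumption is inferred; it is what reproduces the 655,360-byte figure):

```python
# Sketch: reproduce the weight-memory figures above (140 / 70 / 35 GB).
PARAMS = 70e9
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

for precision, bpp in BYTES_PER_PARAM.items():
    print(f"{precision}: {PARAMS * bpp / 1e9:.1f} GB")

# KV-cache per token: layers * kv_heads * head_dim * 2 (K and V) * bytes/elem.
# 4 bytes/element is an inference; it is what matches the listed 655,360 bytes.
# A 2-byte (BF16) cache would be half that.
kv_per_token = 80 * 8 * 128 * 2 * 4
print(f"KV-cache per token: {kv_per_token} bytes")   # 655,360
```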

GPU Compatibility Matrix

Japanese StableLM 70B is compatible with 38% of GPU configurations across 41 GPUs at 3 precision levels.

Precision columns: BF16 (full) · FP8 (half) · INT4 (quarter)

Blackwell (7 GPUs)
B200 NVL (pair): 360 GB
B300: 288 GB
B100 SXM: 192 GB
GB200 NVL72 (per GPU): 192 GB

Hopper (7 GPUs)
H100 NVL (94 GB each, per pair): 188 GB
H200 SXM: 141 GB
H20: 96 GB
GH200: 96 GB

Ada Lovelace (11 GPUs)
L40S: 48 GB
L40: 48 GB
RTX 6000 Ada: 48 GB
L20: 48 GB

Ampere (16 GPUs)
A100 80GB SXM: 80 GB
A100 80GB PCIe: 80 GB
A16: 64 GB
RTX A6000: 48 GB

Legend: No fit · Very tight · Tight · Moderate · Good · Excellent
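
At its core, the matrix compares required weight memory (plus runtime headroom) against each GPU's VRAM at each precision. A minimal sketch of such a fit check, using a few GPUs from the matrix (the 10% headroom is an illustrative assumption, not the site's exact scoring rule):

```python
# Sketch: single-GPU fit check in the spirit of the matrix above.
# VRAM figures come from the matrix; the 10% headroom is illustrative.
WEIGHTS_GB = {"bf16": 140.0, "fp8": 70.0, "int4": 35.0}

GPUS_GB = {
    "H200 SXM": 141, "H20": 96, "GH200": 96,
    "L40S": 48, "A100 80GB SXM": 80, "B300": 288,
}

def fits(gpu_gb: float, precision: str, headroom: float = 0.10) -> bool:
    """True if weights plus headroom fit in a single GPU's VRAM."""
    return WEIGHTS_GB[precision] * (1 + headroom) <= gpu_gb

for gpu, vram in GPUS_GB.items():
    ok = [p for p in WEIGHTS_GB if fits(vram, p)]
    print(f"{gpu} ({vram} GB): fits at {ok or 'no precision'}")
# e.g. H200 SXM fits FP8 and INT4 on one GPU, matching the FP8 recommendation.
```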

GPU Recommendations

H200 SXM · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

560.0 tok/s

Latency (ITL)

1.8ms

Est. TTFT

0ms

Cost/Month

$2553

Cost/M Tokens

$1.73

Use this config →
H20 · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

478.0 tok/s

Latency (ITL)

2.1ms

Est. TTFT

0ms

Cost/Month

$940

Cost/M Tokens

$0.75

Use this config →
GH200 · optimal

FP8 · 1 GPU · tensorrt-llm

100/100

score

Throughput

478.0 tok/s

Latency (ITL)

2.1ms

Est. TTFT

0ms

Cost/Month

$2838

Cost/M Tokens

$2.26

Use this config →
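
The cost-per-token figures above follow from monthly cost divided by tokens produced per month at the quoted throughput. A minimal sketch that reproduces them (the 30.44-day average month is an assumption; it is what makes the arithmetic land on the listed values):

```python
# Sketch: derive $/M tokens from monthly GPU cost and sustained throughput.
# The 30.44-day month (365.25 / 12) is an assumption that reproduces the
# page's figures to the cent.
SECONDS_PER_MONTH = 365.25 / 12 * 86_400   # ~2.63M seconds

def cost_per_m_tokens(monthly_usd: float, tok_per_s: float) -> float:
    tokens_per_month = tok_per_s * SECONDS_PER_MONTH
    return monthly_usd / (tokens_per_month / 1e6)

for gpu, usd, tps in [("H200 SXM", 2553, 560.0),
                      ("H20", 940, 478.0),
                      ("GH200", 2838, 478.0)]:
    print(f"{gpu}: ${cost_per_m_tokens(usd, tps):.2f}/M tokens")
# -> $1.73, $0.75, $2.26, matching the cards above
```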

Deployment Options

API

API Deployment

No API pricing available

Self-Hosted

Single GPU

H200 SXM

$2553/mo

Min VRAM: 70 GB

Scale

Multi-GPU

H100 SXM x2

560.0 tok/s

TP · $3587/mo
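
For the multi-GPU option, tensor parallelism (TP) shards the weight matrices across GPUs, so per-GPU weight memory falls roughly linearly with the TP degree. A minimal sketch of that arithmetic, assuming FP8 to match the single-GPU recommendations and ignoring the small replication overhead real runtimes add:

```python
# Sketch: per-GPU weight memory under tensor parallelism (TP).
# Weights and KV-cache shard across GPUs; activations largely do not.
# FP8 is assumed here to match the single-GPU recommendations above.

def per_gpu_weights_gb(total_weights_gb: float, tp_degree: int) -> float:
    return total_weights_gb / tp_degree

fp8_weights = 70.0                     # from Memory Requirements above
for tp in (1, 2, 4):
    print(f"TP={tp}: {per_gpu_weights_gb(fp8_weights, tp):.1f} GB weights/GPU")
# TP=2 -> 35.0 GB/GPU, leaving ample room on an 80 GB H100 SXM
```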

API Pricing Comparison

No API pricing data available for this model.

Performance Estimates

Throughput by GPU

H200 SXM
560.0 tok/s
H20
478.0 tok/s
GH200
478.0 tok/s

VRAM Breakdown (H200 SXM, FP8)

Weights: 70.0 GB · KV-Cache: 2.7 GB · Activations: 24.0 GB · Overhead: 3.5 GB
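
Summing the breakdown gives roughly 100.2 GB against the H200 SXM's 141 GB. Notably, the 2.7 GB KV-cache entry matches one full 8,192-token context at 2 bytes per cache element, half the per-token rate quoted under Memory Requirements. A quick check:

```python
# Sketch: total H200 SXM footprint from the FP8 breakdown above.
components = {"weights": 70.0, "kv_cache": 2.7, "activations": 24.0, "overhead": 3.5}
total = sum(components.values())
print(f"total: {total:.1f} GB of 141 GB H200 SXM")       # 100.2 GB -> fits

# The 2.7 GB KV figure corresponds to one full 8,192-token context at
# 2 bytes per element: 80 layers * 8 KV heads * 128 dim * 2 (K+V) * 2 B.
kv_full_context = 80 * 8 * 128 * 2 * 2 * 8192 / 1e9
print(f"KV at 8,192 tokens: {kv_full_context:.2f} GB")   # ~2.68 GB
```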

Precision Impact

bf16

140.0 GB

weights/GPU

fp8

70.0 GB

weights/GPU

~560.0 tok/s

int4

35.0 GB

weights/GPU

Capabilities

Features

Tool Use · Vision · Code · Math · Reasoning · Multilingual · Structured Output

Supported Frameworks

vLLM · SGLang · TGI

Supported Precisions

BF16 (default) · FP8 · INT4


Similar Models

Code Llama 70B

70B params · dense

Quality: 60

from $0.90/M

Larger context, higher quality · Compare →

Llama 2 70B

70B params · dense

Quality: 62

from $0.90/M

Higher quality · Compare →

Claude Sonnet 4

70B params · dense

Quality: 86

from $15.00/M

Larger context, higher quality · Compare →

o1-mini

70B params · dense

Quality: 83

from $12.00/M

Larger context, higher quality · Compare →

Frequently Asked Questions

How much VRAM does Japanese StableLM 70B need for inference?

Japanese StableLM 70B requires approximately 140.0 GB of VRAM at BF16 precision, 70.0 GB at FP8, or 35.0 GB at INT4 quantization. Additional VRAM is needed for the KV-cache (655,360 bytes per token) and activations (~3.00 GB).

What is the best GPU for Japanese StableLM 70B?

The top recommended GPU for Japanese StableLM 70B is the H200 SXM using FP8 precision. It achieves approximately 560.0 tokens/sec at an estimated cost of $2553/month ($1.73/M tokens). Score: 100/100.

How much does Japanese StableLM 70B inference cost?

Japanese StableLM 70B inference costs vary by provider and GPU setup. Use our calculator for detailed cost estimates across all providers.