400G networking • NVLink/NVSwitch HGX • Iceland & US regions

GPU cloud hosting for people who actually ship models

H100, A100, L40S. Copy-paste configs. Launch in ~90 seconds.

Launch a GPU
400G networking
NVLink/NVSwitch (HGX) available
Iceland & US
MIG-ready

Pick your hardware

All configs include NVMe storage, 400G networking, and full root access

MOST POPULAR

H100 80GB

$3.49 /hr
80GB HBM3

~2000 tok/s (Llama-3-70B, BF16, 2x GPU)

A100 80GB

$1.89 /hr
80GB HBM2e

~1200 tok/s (Llama-3-70B, BF16, 2x GPU)

L40S 48GB

$0.89 /hr
48GB GDDR6

~25 it/s (SDXL training, FP16)

A6000 48GB

$0.59 /hr
48GB GDDR6

~600 tok/s (Llama-3-8B, BF16)

Copy-paste configs

Working snippets for PyTorch, vLLM, SDXL, and more

# PyTorch training on H100
docker run --gpus all --ipc=host --rm \
  -v $(pwd):/workspace \
  nvcr.io/nvidia/pytorch:24.01-py3 \
  python train.py --fp16 --batch-size 32

# What you paste here runs the same in production.
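Serving follows the same pattern. A minimal vLLM sketch (the image tag and model ID are illustrative, not tested configs; pin versions and confirm the model fits your GPU's VRAM before use):

```shell
# Serve Llama-3-8B through vLLM's OpenAI-compatible API on one GPU
# (image tag and model ID are illustrative -- pin exact versions in production)
docker run --gpus all --rm -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model meta-llama/Meta-Llama-3-8B-Instruct
```

Once it is up, any OpenAI-compatible client can point at http://localhost:8000/v1.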

"What you paste here runs the same in production."

Real benchmarks

Measured throughput and cost per million tokens

Model | Precision | GPU | Throughput | Cost
Llama-3-8B | BF16 | H100 | 2800 tok/s | $1.25 / 1M tokens
Llama-3-70B | BF16 | H100 x2 | 2000 tok/s | $3.49 / 1M tokens
Mixtral-8x7B | BF16 | A100 | 1500 tok/s | $1.26 / 1M tokens
SDXL (train) | FP16 | L40S | 25 it/s | $0.89 / hr
Llama-3-8B | INT4 | A6000 | 1200 tok/s | $0.49 / 1M tokens
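For a sanity check, a raw-compute floor on cost per million tokens follows from the hourly rate and sustained throughput. This is back-of-envelope math assuming 100% utilization; published per-token figures bake in their own assumptions about batching and utilization, so they will differ:

```shell
# $ per 1M tokens = hourly rate / (tokens per second * 3600 s) * 1e6
# Example inputs: H100 on-demand rate and the Llama-3-8B BF16 throughput above.
rate_per_hr=3.49
tok_per_s=2800
cost=$(awk -v r="$rate_per_hr" -v t="$tok_per_s" \
  'BEGIN { printf "%.2f", r / (t * 3600) * 1e6 }')
echo "\$${cost} per 1M tokens at full utilization"
```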

Transparent pricing

No hidden fees. Cancel anytime.

On-Demand

Pay as you go

Standard rates
  • 4-hour SLA response
  • 400G networking
  • Email support
  • Standard bandwidth
  • 9-5 support window

BEST VALUE

Reserved

1-3 month commitment

15% discount
  • 2-hour SLA response
  • 400G networking
  • Priority support
  • Enhanced bandwidth
  • Extended support window

Committed

12 month commitment

30% discount
  • 1-hour SLA response
  • 400G networking
  • Dedicated engineer
  • Unlimited bandwidth
  • 24/7 support

Global presence

Strategic locations with 400G networking and renewable energy

Iceland

Reykjavik

<15ms to Europe
  • 100% renewable energy
  • 400G backbone
  • Cold climate cooling
  • Low-latency to EU/UK

USA

Idaho

<20ms to West Coast
  • Hydroelectric power
  • 400G backbone
  • Carrier-grade connectivity
  • Low-latency to US

More regions coming soon. Need a specific location? Let us know

Why GPU Core

Built for engineers who ship production workloads

MIG slices that work

Multi-Instance GPU support with proper K8s primitives. No weird workarounds.
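As an illustration of those K8s primitives, a pod can request a MIG slice as an ordinary schedulable resource. This is a sketch assuming the NVIDIA device plugin with a mixed MIG strategy; the exact resource name (e.g. nvidia.com/mig-1g.10gb on an H100 80GB) depends on the profiles carved on the node:

```yaml
# Illustrative pod requesting one 1g.10gb MIG slice on an H100 80GB node.
# Resource names come from the NVIDIA k8s-device-plugin's MIG strategy.
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/pytorch:24.01-py3
      command: ["nvidia-smi", "-L"]
      resources:
        limits:
          nvidia.com/mig-1g.10gb: 1
```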

400G where it matters

High-bandwidth networking between nodes. Not marketing fluff.

Real I/O

NVMe + GPUDirect Storage ready. Move data at GPU speed.

Containers that boot

Docker, K8s, or bare metal. Your choice. Sub-60s cold starts.

Transparent pricing

What you see is what you pay. No surprise bandwidth charges.

Humans on call

Real engineers who understand your workload. Not chatbots.

How we compare

GPU Core vs typical cloud GPU providers

Feature | GPU Core | Others
Price clarity | ✓ | ✗
400G networking | ✓ | ✗
MIG with K8s UX | ✓ | ✗
No minimum term | ✓ | ✗
Real engineer support | ✓ | ✗

Migrate in an afternoon

We help you move from CoreWeave, RunPod, or other providers with zero downtime.

Frequently asked questions

Do you offer NVLink/NVSwitch?

Yes. H100 HGX configurations include NVLink and NVSwitch for high-speed GPU-to-GPU communication. Perfect for large model training and multi-GPU inference.
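You can confirm the interconnect from inside a running instance (requires the NVIDIA driver; output depends on the node's topology):

```shell
# Print the GPU-to-GPU connection matrix; NVLink paths show up as NV#
nvidia-smi topo -m

# Per-link NVLink status for GPU 0
nvidia-smi nvlink --status -i 0
```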