🇮🇳 India's Sovereign AI Neocloud — Built for Builders

The Neocloud
India Deserves.

NVIDIA H100 · H200 · B200 clusters. Bare-metal performance. Zero lock-in. Deployed from Dholera, Gujarat. Sub-millisecond latency. Data stays in India.

₹240/hr H100 On-Demand · 99.9% Power SLA · 400Gbps InfiniBand Fabric · <15min Critical Response · Gujarat GIFT City · 24/7 NOC Dedicated Support · NVIDIA DGX-Ready Infra · ISO 27001 Security Certified
// Neocloud Instances

On-Demand & Reserved
GPU Compute

Single GPU to 1,000+ GPU clusters. Spin up in minutes. Pay by the hour. Reserve for up to 40% savings. All NVIDIA hardware. All data stays in India.

NVIDIA Ada Lovelace
// Starter
L40S 48GB
Per GPU Instance · PCIe
₹80/hr
Reserved 1yr: ₹58/hr · Save 27%
GPU Memory 48 GB GDDR6
vCPUs 24 vCPUs
RAM 96 GB DDR5
Storage 2 TB NVMe
Network 10 Gbps
TFLOPS (FP16) 362 TFLOPS
Ubuntu 22.04 Ubuntu 24.04 Debian 12 Rocky 9

Deploy L40S →
NVIDIA Ampere
// Standard
A100 80GB
Per GPU Instance · SXM4
₹150/hr
Reserved 1yr: ₹108/hr · Save 28%
GPU Memory 80 GB HBM2e
vCPUs 48 vCPUs
RAM 192 GB DDR4
Storage 5 TB NVMe
Network 25 Gbps
TFLOPS (FP16) 312 TFLOPS
Ubuntu 22.04 Ubuntu 24.04 CentOS 9 Debian 12 Windows

Deploy A100 →
NVIDIA Hopper Ultra
// Ultra
H200 141GB
Per GPU Instance · SXM5
₹380/hr
Reserved 1yr: ₹275/hr · Save 28%
GPU Memory 141 GB HBM3e
vCPUs 96 vCPUs
RAM 512 GB DDR5
Storage 20 TB NVMe
Network 400 Gbps IB
Mem BW 4.8 TB/s
Ubuntu 22.04 Ubuntu 24.04 Rocky 9 Debian 12

Deploy H200 →
NVIDIA Blackwell
// Sovereign
B200 180GB
Per GPU Instance · SXM6
₹650/hr
Reserved 1yr: ₹470/hr · Save 28%
GPU Memory 180 GB HBM3e
vCPUs 128 vCPUs
RAM 1 TB DDR5
Storage 40 TB NVMe
Network 800 Gbps IB
TFLOPS (FP4) 9,000 TFLOPS
Ubuntu 24.04 Rocky 9

Deploy B200 →
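The reserved-vs-on-demand discounts quoted on the cards above can be sanity-checked in a few lines. A minimal sketch using the listed per-GPU hourly rates (all figures are from the cards; nothing else is assumed):

```python
# On-demand vs 1-year reserved rates (₹/hr per GPU) as listed above.
RATES = {
    "L40S": (80, 58),
    "A100": (150, 108),
    "H200": (380, 275),
    "B200": (650, 470),
}

def savings_pct(on_demand: float, reserved: float) -> float:
    """Percentage saved by reserving for 1 year vs paying on-demand."""
    return (1 - reserved / on_demand) * 100

for gpu, (od, res) in RATES.items():
    print(f"{gpu}: {savings_pct(od, res):.1f}% saved")
```

All four land at roughly 27–28%, matching the "Save 27/28%" labels on the cards.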
// HPC Clusters

Multi-GPU Cluster Pricing

Full DGX-class nodes with NVSwitch, InfiniBand fabric, and dedicated networking. For LLM training, fine-tuning, and distributed inference at scale.

| Cluster Config | Status | GPU | GPU Memory | vCPUs | RAM | TFLOPS (FP8)* | NVMe Storage | Network | On-Demand | Reserved 1yr | Workload |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 8× A100 Node | Available | 8× NVIDIA A100 SXM4 | 8× 80 GB HBM2e | 128 | 1 TB DDR4 | 2,496* | 30 TB | 200Gbps RoCE | ₹1,150/hr | ₹850/hr | Inference · Fine-tune |
| 8× H100 Node | Available | 8× NVIDIA H100 SXM5 | 8× 80 GB HBM3 | 192 | 2 TB DDR5 | 31,664 | 60 TB | 400Gbps InfiniBand | ₹1,800/hr | ₹1,300/hr | LLM Training · Fine-tune |
| 16× H100 Cluster | Available | 16× NVIDIA H100 SXM5 | 16× 80 GB HBM3 | 384 | 4 TB DDR5 | 63,328 | 120 TB | 800Gbps InfiniBand | ₹3,500/hr | ₹2,550/hr | Large-Scale Training |
| 32× H100 Cluster | Available | 32× NVIDIA H100 SXM5 | 32× 80 GB HBM3 | 768 | 8 TB DDR5 | 126,656 | 240 TB | 1.6Tbps InfiniBand | ₹6,800/hr | ₹4,900/hr | Foundation Models |
| 8× H200 Node | Available | 8× NVIDIA H200 SXM5 | 8× 141 GB HBM3e | 256 | 3 TB DDR5 | 31,664 | 80 TB | 800Gbps InfiniBand | ₹2,900/hr | ₹2,100/hr | LLM + Long Context |
| 8× B200 Node | Q3 2026 | 8× NVIDIA B200 SXM6 | 8× 180 GB HBM3e | 320 | 4 TB DDR5 | 72,000 | 160 TB | 1.6Tbps InfiniBand | ₹4,900/hr | ₹3,500/hr | Frontier AI · Blackwell |
| Custom ≥100 GPU | Enterprise | Mix: H100 / H200 / B200 | Custom | Custom | Custom | Custom | Up to 1 PB | Custom IB Fabric | Negotiated | Reserved Contract | Sovereign AI · Govt |

*A100 figure is FP16 Tensor TFLOPS (Ampere does not support FP8).
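To budget a training run, the hourly cluster rates above translate to monthly figures straightforwardly. A quick sketch, assuming an average 730-hour month (that averaging convention is ours, not a published billing term):

```python
HOURS_PER_MONTH = 730  # average month length; assumption, not a billing definition

def monthly_cost(rate_per_hr: float, hours: int = HOURS_PER_MONTH) -> int:
    """Approximate monthly spend in ₹ for a cluster billed hourly."""
    return round(rate_per_hr * hours)

# 8× H100 node at the listed rates:
print(monthly_cost(1300))  # reserved 1yr: ₹949,000 / month
print(monthly_cost(1800))  # on-demand:    ₹1,314,000 / month
```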
// Storage

High-Performance Storage

From blazing-fast NVMe scratch to durable object storage. Designed for AI data pipelines, checkpointing, and large dataset handling.

// Local NVMe Scratch
Ephemeral SSD
Included
High-speed local NVMe included with every GPU instance. 2–160 TB depending on instance type. Ideal for training scratch space and model checkpoints.
// Block Storage
Persistent SSD
₹6/GB/month
NVMe-backed persistent block volumes. Up to 100 TB per volume. <1ms latency. IOPS: up to 500,000. Perfect for datasets and model repos.
// Shared Filesystem
Parallel FS (Lustre)
₹4/GB/month
High-throughput parallel filesystem. 1M+ IOPS aggregate. Up to 1 PB. Shared across multi-node clusters for distributed training.
// Object Storage
Wollnut S3-Compatible
₹1.2/GB/month
S3-compatible object store. Unlimited capacity. 99.999999999% durability. Egress within India: ₹0. Ideal for datasets, model weights, outputs.
// Snapshot & Backup
GPU Snapshots
₹0.8/GB/month
Point-in-time snapshots of running GPU instances including OS, CUDA stack, and libraries. Instant restore. Save your entire training environment.
// AI Model Registry
Wollnut Model Store
₹1.5/GB/month
Versioned model registry with direct GPU-mount capability. Compatible with HuggingFace Hub API. Deploy models directly to inference endpoints.
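The storage tiers above bill per GB per month, so a monthly bill is a simple sum over tiers. A hedged sketch using the listed rates (the tier keys and example volumes are illustrative, not product identifiers):

```python
# ₹/GB/month rates as listed above; local NVMe scratch is included, so omitted.
STORAGE_RATES = {
    "block": 6.0,      # Persistent SSD block storage
    "lustre": 4.0,     # Parallel filesystem
    "object": 1.2,     # S3-compatible object store
    "snapshot": 0.8,   # GPU snapshots
    "registry": 1.5,   # Model store
}

def storage_bill(usage_gb: dict) -> float:
    """Monthly ₹ total: usage in GB per tier times the listed rate."""
    return sum(STORAGE_RATES[tier] * gb for tier, gb in usage_gb.items())

# e.g. 2 TB of block volumes plus 10 TB of object storage for datasets
print(storage_bill({"block": 2000, "object": 10000}))  # ≈ ₹24,000 / month
```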
// Networking

Fabric & Connectivity Pricing

From GPU-to-GPU InfiniBand to global CDN — we handle the full network stack so your models train faster and your APIs respond in milliseconds.

200G
InfiniBand HDR
Included · H100 Single Node
Per-GPU InfiniBand for single-node H100 training. NVSwitch all-to-all fabric. Full bisection bandwidth.
400G
InfiniBand NDR
Included · Multi-Node H100/H200
Scale-out fabric for multi-node distributed training. RoCE v2 fallback. RDMA-enabled. Linear scaling to 1,000+ GPUs.
800G
InfiniBand XDR
Included · B200 Clusters
Next-gen 800G Ethernet / XDR InfiniBand. Arista 7800R4 spine. Designed for Blackwell-scale training runs.
10G
Public Egress
₹3/GB egress · India free
Tier-1 uplinks via Tata, Airtel, and BSNL. India-to-India traffic is free; ingress is always free. DDoS protection included.
1G
Dedicated MPLS
From ₹80,000/month
Private MPLS links to enterprise premises. Mumbai/Bengaluru/Delhi PoPs. 99.99% availability. For regulated sectors.
100G
Cross-Connect
₹25,000/month
Direct cross-connect to Yotta, CtrlS, STT, or hyperscaler PoPs in Mumbai/Chennai. Sub-ms latency.
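The egress model above (₹3/GB outside India, ₹0 within India, ingress always free) can be expressed as a one-line rule. A minimal sketch; the function and its parameters are illustrative, not a billing API:

```python
def egress_cost(gb: float, destination_in_india: bool) -> float:
    """₹3/GB public egress outside India; India-to-India traffic is free."""
    RATE_PER_GB = 3.0
    return 0.0 if destination_in_india else gb * RATE_PER_GB

print(egress_cost(500, destination_in_india=True))   # → 0.0
print(egress_cost(500, destination_in_india=False))  # → 1500.0
```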
// Data Center Infrastructure

Built for AI. Built in Gujarat.

Our GIFT City facility is purpose-built for high-density GPU computing. Every MW designed for liquid-cooled, HPC-grade workloads from day one.

99.9%
Power Availability
99.7%
Compute SLA
99.9%
Network SLA
<15m
Critical Response
⚡
Power & Redundancy
Utility-grade power with N+1 UPS, 2N critical paths, and diesel backup. Zero single points of failure.
  • 3 MW Phase 1 IT Load (scalable to 30 MW)
  • PUE: 1.35 with liquid cooling
  • HT/EHT substation — direct utility feed
  • 2N UPS — Schneider Electric Galaxy VL
  • N+1 CAT diesel generators
  • Renewable energy target: 60% solar
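PUE (power usage effectiveness) relates total facility draw to IT load: total power = IT load × PUE. Applied to the figures above:

```python
def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility power draw implied by an IT load at a given PUE."""
    return it_load_mw * pue

# Phase 1: 3 MW IT load at the stated PUE of 1.35
print(round(facility_power_mw(3.0, 1.35), 2))   # → 4.05 MW total
# Full build-out: 30 MW IT load
print(round(facility_power_mw(30.0, 1.35), 2))  # → 40.5 MW total
```

So the liquid-cooled design spends only about 0.35 W of overhead (cooling, distribution losses) per watt of compute, versus roughly 0.5–0.8 W for conventional air-cooled facilities.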
❄️
Cooling Architecture
Rear-door heat exchangers + direct-to-chip liquid cooling for H100/H200/B200 racks up to 120kW density.
  • Air + Liquid hybrid cooling
  • Direct-to-chip: 120 kW/rack capable
  • CRAC precision air cooling baseline
  • Rear-door heat exchangers (Schneider)
  • Adiabatic cooling for water efficiency
  • WUE target: <0.3L per kWh
🌐
Networking Fabric
Arista 7800R4 spine, NVIDIA QM9700 InfiniBand switches. Full bisection bandwidth for any cluster size.
  • Arista 7800R4 — 3.2 Tbps spine
  • NVIDIA QM9700 InfiniBand switches
  • 800GbE fabric for Blackwell clusters
  • RDMA over Converged Ethernet (RoCE)
  • 100G multi-homed internet uplinks
  • BGP anycast, full DDoS mitigation
🔒
Security & Compliance
Physical + cyber. MeitY cloud empanelled. ISO 27001 target. Data sovereignty guaranteed — no data leaves India.
  • Biometric + mantrap physical access
  • ISO 27001 certification (in progress)
  • SOC 2 Type II audit path
  • End-to-end encryption: MACsec + IPsec
  • 24/7 SOC with SIEM integration
  • MeitY empanelment roadmap
🧠
GPU Fleet OEM Stack
Best-in-class OEM mix for GPU servers, networking, and power — built for reliability at hyperscale density.
  • GPU: NVIDIA H100/H200/B200 SXM
  • Servers: Supermicro / Netweb / HPE
  • Networking: Arista + NVIDIA QSFP800
  • Power: Schneider Electric UPS + PDU
  • Cooling: Schneider + Motivair DLC
  • Storage: Pure Storage / DDN Lustre
🛠️
Software Stack
Kubernetes-native platform with Run:AI orchestration, vLLM inference, and full observability. OpenAI-compatible API.
  • Orchestration: Kubernetes + Run:AI
  • Inference: vLLM + TensorRT-LLM
  • MLOps: Kubeflow + MLflow
  • Monitoring: Prometheus + Grafana
  • DCIM: Schneider EcoStruxure
  • API: OpenAI-compatible REST/gRPC
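Because the inference layer exposes an OpenAI-compatible REST API, standard OpenAI-style clients should work by pointing them at the cluster endpoint. A hedged sketch of building such a request with only the standard library — the base URL, API key, and model name are placeholders, not published values:

```python
import json
import urllib.request

# Placeholders -- substitute your actual endpoint, key, and deployed model.
BASE_URL = "https://api.example-neocloud.in/v1"
API_KEY = "YOUR_API_KEY"

def chat_request(prompt: str, model: str = "my-model") -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request (constructed, not sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("Hello")
print(req.full_url)  # https://api.example-neocloud.in/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (or any OpenAI SDK configured with a custom base URL) would return the familiar chat-completions response shape.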
// Availability Zones

India-First. India-Only.

All infrastructure physically located in India. Your data never leaves Indian jurisdiction. Sovereign AI compute from coast to coast.

WL-IN-GJ-DHL01 · Dholera SIR
Dholera Special Investment Region, Gujarat · Primary AZ
Q4 2026
GOING LIVE
H100 · H200 · B200 · Phase 1: 3 MW → 30 MW
GIFT City
Gujarat International Finance Tec-City, Gandhinagar · BFSI AZ
2027
PLANNING
BFSI · Fintech · Govt · Sovereign zone
WL-IN-MH-MUM01 · Mumbai Metro
Navi Mumbai / Airoli, Maharashtra · Enterprise AZ
2027
PLANNING
Hyperscaler PoP · Enterprise · Startup
WL-IN-KA-BLR01 · Bengaluru
Whitefield, Karnataka · Tech Hub AZ
2028
ROADMAP
Startup · Research · Inference edge
// Competitive Benchmark

Why Wollnut Labs

Compared to hyperscalers and Indian neocloud providers — on the dimensions that matter when you're building AI in India.

| Feature | Wollnut Labs | AWS / Azure | Neysa | Yotta | E2E Cloud | CoreWeave |
|---|---|---|---|---|---|---|
| H100 On-Demand (per GPU/hr) | ₹240 | ₹340–500 | Custom | ₹280–350 | ₹220–300 | ₹520+ |
| Data stays in India | ✓ Always | ⚠ Region-based | | | | ✗ USA |
| B200 / Blackwell Available | ✓ Q3 2026 | ⚠ Limited | | | | |
| Gujarat / GIFT City Presence | ✓ Primary DC | | | | | |
| Bare Metal Dedicated Clusters | ✓ | ✗ Shared | | | | |
| 24/7 Support, <15min Response | ✓ | ✗ Enterprise only | | | | |
| Reserved Contract (12/24/36 mo) | ✓ Up to 40% off | | | | | |
| MeitY Empanelled | ⚠ In progress | | | | | |
| OpenAI-Compatible Inference API | ✓ | ⚠ Proprietary | | | | |
| Liquid Cooling (DLC) Ready | ✓ From Day 1 | ⚠ Selective | | | | |
// Get Started

Ready to Deploy at Scale?

Talk to our solutions team for custom cluster pricing, enterprise SLAs, and reserved contracts. We respond within 2 business hours.

wollnut91@gmail.com → +91 98251 36755

Wollnut Labs Pvt. Ltd. GIFT City, Gujarat 382460 CIN: U72900GJ2026PTC000000 GSTIN: 24XXXXX1234Z1