The GPU Landscape in 2025
NVIDIA's datacenter GPU lineup has expanded significantly. Let's break down the three GPUs available on Wollnut Labs and when to use each.
NVIDIA H100 (80GB SXM)
The H100 remains the workhorse of AI infrastructure. With 80GB of HBM3 memory and strong FP8 performance, it handles most ML workloads effectively.
Best for:
- Fine-tuning 7B-13B models
- Image generation (SDXL/Flux)
- Whisper transcription and general inference
Pricing on Wollnut Labs: $2.25/hr per GPU
NVIDIA H200 (141GB SXM)
The H200 is the H100's successor with nearly double the memory (141GB HBM3e). This extra memory is transformational for large model work.
Best for:
- Fine-tuning 70B models (LoRA) on a single GPU
- Large-model inference (e.g., DeepSeek R1 on 4x)
- Memory-bound training workloads
Pricing on Wollnut Labs: $2.75/hr per GPU
NVIDIA B200 (192GB HBM3e)
The B200 is NVIDIA's flagship Blackwell architecture GPU. With 192GB of memory and next-gen compute capabilities, it's built for frontier model work.
Best for:
- Training custom models from scratch
- Frontier-scale model development
Pricing on Wollnut Labs: $5.90/hr per GPU
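A quick back-of-the-envelope check helps make sense of these memory tiers: model weights in FP16/BF16 take about 2 bytes per parameter, plus some headroom for activations and KV cache. The sketch below is illustrative only; the 20% overhead factor is an assumption, not a Wollnut Labs figure, and quantization (8-bit or 4-bit) lowers the bytes-per-parameter accordingly.

```python
def fits_in_memory(params_billions: float, gpu_memory_gb: float,
                   bytes_per_param: float = 2, overhead: float = 1.2) -> bool:
    """Rough check: weights (2 bytes/param in FP16) plus an assumed
    ~20% overhead for activations/KV cache must fit in VRAM."""
    required_gb = params_billions * bytes_per_param * overhead
    return required_gb <= gpu_memory_gb

# A 70B model in FP16 needs roughly 70 * 2 * 1.2 = 168 GB:
print(fits_in_memory(70, 80))    # single H100 (80 GB):  False
print(fits_in_memory(70, 141))   # single H200 (141 GB): False
print(fits_in_memory(70, 192))   # single B200 (192 GB): True
```

This is why the decision guide below pairs larger models with multi-GPU configurations or quantized/LoRA setups rather than a single 80GB card.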
Quick Decision Guide
| Workload | Recommended GPU |
|---|---|
| Fine-tune 7B-13B model | H100 1x |
| Fine-tune 70B model (LoRA) | H100 2x or H200 1x |
| Run DeepSeek R1 inference | H100 8x or H200 4x |
| Train custom model from scratch | H200 8x or B200 |
| Image generation (SDXL/Flux) | H100 1x |
| Whisper transcription | H100 1x |
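If you select instances programmatically, the table above can be encoded as a simple lookup. The workload keys and the helper below are illustrative, not an official Wollnut Labs API.

```python
# The decision guide above, encoded as a lookup table.
# Keys and GPU labels mirror the table; they are illustrative only.
RECOMMENDED_GPU = {
    "fine-tune-7b-13b": "H100 1x",
    "fine-tune-70b-lora": "H100 2x or H200 1x",
    "deepseek-r1-inference": "H100 8x or H200 4x",
    "train-from-scratch": "H200 8x or B200",
    "image-generation": "H100 1x",
    "whisper-transcription": "H100 1x",
}

def recommend(workload: str) -> str:
    """Return the recommended GPU config, defaulting to a single H100."""
    return RECOMMENDED_GPU.get(workload, "H100 1x")

print(recommend("fine-tune-70b-lora"))  # H100 2x or H200 1x
```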
The Bottom Line
Start with an H100 for most workloads. Move to H200 when you need more memory. Use B200 for frontier-scale work. With per-minute billing, you can experiment freely without commitment.
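Per-minute billing makes cost estimates straightforward: divide the hourly rate by 60 and multiply by GPU count and runtime. A minimal sketch using the rates quoted in this guide:

```python
# Hourly rates from this guide; per-minute billing means you pay
# rate / 60 for each minute the instance runs.
HOURLY_RATE = {"H100": 2.25, "H200": 2.75, "B200": 5.90}

def job_cost(gpu: str, num_gpus: int, minutes: float) -> float:
    """Estimate job cost in dollars under per-minute billing."""
    return HOURLY_RATE[gpu] / 60 * num_gpus * minutes

# A 90-minute fine-tuning run on 2x H100:
print(round(job_cost("H100", 2, 90), 2))  # 6.75
```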
