The Setup

The Hardware

This is what's running. I'm using cloud models while I learn, and testing local inference on the AI HAT+.

What Is Running This

🥧 Raspberry Pi 5 (8GB)

8GB RAM. ARM64. Low power, always on. The main computer.

8GB RAM · ARM64 · 24/7
🥧 Raspberry Pi 5 (16GB)

16GB RAM. ARM64. More memory for larger models.

16GB RAM · ARM64
🖥️ Mini PC

16GB RAM. AMD Ryzen 4300U. For heavier workloads.

16GB RAM · Ryzen 4300U
🧠 AI HAT+ (21 TOPS)

8GB RAM on the NPU. Testing local models.

21 TOPS · 8GB RAM · Hailo NPU

OpenClaw

The agent framework. Skills-based architecture. Community-driven.

Node.js · Skills · Open Source
🔮 Ollama Cloud

GLM-5, Nemotron via Ollama. Cloud models while I explore local inference.

Ollama · Hybrid · Learning
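
Whether the model is local or cloud-backed, Ollama exposes the same HTTP API, by default on port 11434. Here's a minimal Python sketch of calling its `/api/generate` endpoint; the model name is a placeholder (substitute whatever `ollama list` shows), and `generate` assumes a running Ollama server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }

def generate(model: str, prompt: str, url: str = OLLAMA_URL) -> str:
    """Send a prompt to a running Ollama server and return the response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running server):
# print(generate("llama3.2", "Why is the Pi 5 a good homelab box?"))
```

The nice part of this setup: swapping between a cloud model and something running on the Pi is just a different model name in the same request.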
WHAT I’M TESTING

Work in Progress

Real experiments. Real results. I share what I learn as I go.

Running Local AI

I’m experimenting with running small AI models directly on the AI HAT+. No internet needed. Still figuring out what works. If it does, I’ll have AI that runs even when the connection goes down.

Fast Storage – NVMe Experiments

What I Tried

Connected a fast NVMe SSD via USB to the Pi. Real-world result: about 32 MB/s. The USB connection was the bottleneck, not the drive.

What I Learned

Pi 5 USB 3.0 ports max out around 300-400 MB/s in ideal conditions, and a result near 32 MB/s usually means the link negotiated down to USB 2.0 (480 Mbps), often because of a cheap or older cable or plugging into a USB 2.0 port. The SSD itself is rated for over 1,000 MB/s, but the pipe was too narrow.
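
To sanity-check a number like that, it helps to measure sequential throughput with the page cache taken out of the picture. A rough Python sketch (function name and sizes are mine; `dd` with `oflag=direct` or `fio` are the more rigorous tools):

```python
import os
import time

def sequential_write_mbps(path: str, total_mb: int = 256, chunk_mb: int = 4) -> float:
    """Write total_mb of data to `path` in chunk_mb pieces and return MB/s.

    os.fsync forces the data onto the device before we stop the clock,
    so we measure the drive and its link, not the kernel's page cache.
    """
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(total_mb // chunk_mb):
            os.write(fd, chunk)
        os.fsync(fd)
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

# Usage: point the path at the drive under test, e.g.
# sequential_write_mbps("/mnt/nvme/testfile", total_mb=1024)
```

Run it against the USB drive and against the SD card: if both land near 30 MB/s, suspect the link, not the drive.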

Next Plan

I have a 3.6TB NVMe drive. Going to connect it directly to the Pi's PCIe slot instead of through USB. The drive is rated for 3,500 MB/s; the Pi 5's single PCIe lane can't deliver all of that (roughly 450 MB/s at the default Gen 2 setting, around 800-900 MB/s if forced to Gen 3), but either is a huge jump over what USB gave me.
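
On Raspberry Pi OS (Bookworm), the external PCIe connector is enabled, and optionally forced to Gen 3, in `/boot/firmware/config.txt`. A sketch of the relevant lines; double-check against the current Raspberry Pi documentation for your firmware version:

```ini
# /boot/firmware/config.txt
dtparam=pciex1        # enable the external PCIe x1 connector
dtparam=pciex1_gen=3  # optional: force Gen 3 (default is Gen 2)
```

Gen 3 on the Pi 5 is out-of-spec but widely used; if the drive misbehaves, drop back to the Gen 2 default.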

Trade-off

The PCIe slot is also where the AI HAT+ goes. I can have fast storage OR local AI inference, not both at the same time. Figuring out which matters more right now.

Want to Build Something Similar?

This is my learning journey. If you want to build your own, or partner on OpenClaw, reach out.

YouGotThisAI LLC

© 2026 All Rights Reserved