
Sovereign Homelab

Owner, operator, on-call · 2022–Present · 49 containers · bare metal
Docker · Linux · Caddy · NetBird · Vaultwarden · PostgreSQL · Redis · CouchDB

Why Run Your Own

There’s a specific satisfaction in running your own infrastructure — not as a proof of concept, but as a daily driver. Every service in this stack is something I use. When it goes down, I feel it. That pressure makes you actually care about reliability instead of just talking about it.

Architecture

The hardware is a single bare-metal box — no cloud provider, no managed service, no abstraction layer I can’t trace. Docker Compose orchestrates everything. The network is flat, the storage is local, and the backups go to an external drive.
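As a sketch of what that Compose layout looks like in practice — service names, images, and paths here are illustrative, not the actual config:

```yaml
# Hypothetical docker-compose.yml excerpt; real services and volumes differ.
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data        # keeps TLS certificates across restarts

  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    volumes:
      - ./vaultwarden:/data     # local storage, backed up to the external drive

volumes:
  caddy_data:
```

One `docker compose up -d` brings the whole stack back after a reboot, which is most of the orchestration this setup needs.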

The reverse proxy is Caddy. It handles TLS automatically, routes subdomains to the right containers, and requires almost no maintenance. Behind it sits the actual stack:

  • DNSGuard — local resolver, no upstream unless necessary
  • NetBird — mesh VPN for remote access
  • Vaultwarden — secrets, self-hosted Bitwarden-compatible
  • PostgreSQL, Redis, CouchDB — data layer depending on what needs it
  • Grafana, Loki, Prometheus, Alloy — observability
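The Caddy routing for a stack like this can be a handful of lines — hostnames and upstream ports below are placeholders, not the real ones:

```
# Hypothetical Caddyfile excerpt; Caddy provisions and renews TLS automatically.
vault.example.home {
    reverse_proxy vaultwarden:80
}

grafana.example.home {
    reverse_proxy grafana:3000
}
```

Each subdomain maps to a container name on the Docker network, so adding a service is one new block and a reload.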

Observability

This is the part that separates a toy from something you can actually rely on. Every container exports metrics. Alloy collects, Prometheus stores, Grafana visualizes. Loki handles logs. node_exporter, process_exporter, and smartctl_exporter keep an eye on the hardware itself.
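The collection pipeline — Alloy scraping exporters and forwarding to Prometheus — looks roughly like this (a sketch; target addresses and the remote-write URL are assumptions):

```
// Hypothetical Grafana Alloy config excerpt.
prometheus.scrape "node" {
  targets    = [{"__address__" = "node_exporter:9100"}]
  forward_to = [prometheus.remote_write.local.receiver]
}

prometheus.remote_write "local" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```

The same pattern repeats per exporter: Alloy scrapes, forwards to Prometheus, and Grafana queries Prometheus for dashboards and alerts.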

When a container restarts at 2 AM, I get a Grafana alert before I notice anything’s wrong. That’s the goal — the system tells you what’s broken before you have to find out.
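The alert that catches this can be very simple — a minimal sketch, assuming alerting rules are evaluated in Prometheus (rule name and timings are illustrative):

```yaml
# Hypothetical alerting rule: fire when any scrape target stops responding.
groups:
  - name: availability
    rules:
      - alert: ServiceDown
        expr: up == 0
        for: 2m                  # tolerate one missed scrape, not a real outage
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.job }} target {{ $labels.instance }} is down"
```

Since every container exports metrics, `up == 0` doubles as a liveness check for the whole stack.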

What I’d Change

Single box, single point of failure. The next step would be a second node with automatic failover, but at this scale it's a tradeoff: added complexity against added reliability. I'm comfortable here for now.
