While I agree that Kubernetes might not only be for Black Friday, I still believe it's way overkill for a homelab.
In this post, I will describe my own homelab setup and why I ditched Kubernetes last January.
I've been trying to find the right match for my homelab for quite some time, wandering between various distributions, OSes, and so on. I used Nix quite extensively for a while, but it was way too much maintenance: the all-declarative approach doesn't offset the cost of having to manage your own derivations.
I also tried Kubernetes for more than a year, and it proved way too heavy. It had some really good sides (like deploying with Helm and leveraging projects' charts directly), but the cost of managing my own cluster was huge.
The trigger to leave Kube behind came earlier this year at FOSDEM '25, when I saw Axel Stefanini's talk, "Running Containers Under Systemd: Exploring Podman Quadlet". That was the moment I thought: "well, my homelab's gonna change stack once again".
Homelab Reloaded: Quadlets
Quadlets combine Kubernetes' ease of declaring a service with the simplicity of a plain systemd service on the host.
Using Podman Quadlets, I've been able to deploy my homelab services with a single file per service, with their lifecycle managed by systemd, and without the Docker-related overhead (like docker-compose or running a daemon).
For instance, here is the file for deploying Vaultwarden (a lightweight reimplementation of Bitwarden):
[Unit]
Description=Vaultwarden Server
After=postgres.service
[Container]
Image=docker.io/vaultwarden/server:1.34.1-alpine
PublishPort=127.0.0.1:80:80
EnvironmentFile=/etc/configs/vaultwarden.env
Volume=vaultwarden-data.volume:/data
[Service]
Restart=always
[Install]
WantedBy=multi-user.target default.target
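Once the file is in place, systemd generates a regular service from it on the next daemon reload. A typical activation sequence looks like this (assuming the file above is saved as vaultwarden.container, so the generated unit is vaultwarden.service):

```shell
# Regenerate units so the quadlet generator picks up the new file
systemctl daemon-reload

# Start the generated service and check that the container came up
systemctl start vaultwarden.service
systemctl status vaultwarden.service

# Container logs go through journald like any other unit
journalctl -u vaultwarden.service -f
```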
That's it: I now have Vaultwarden running. Quadlets are also extremely convenient for managing dependencies between containers. This line:
After=postgres.service
specifies that my container should only start once postgres.service is up and running. That service is itself a quadlet:
[Unit]
Description=PostgreSQL Database Server
After=network.target
[Container]
Image=docker.io/library/postgres:17
NetworkAlias=postgres
Environment=POSTGRES_PASSWORD=not_my_passwd
Volume=postgres-data.volume:/var/lib/postgresql/data
PublishPort=5432:5432
[Service]
Restart=always
[Install]
WantedBy=multi-user.target default.target
You can also declare all kinds of other resources: volumes, networks, and more. All of this is documented in the podman-systemd.unit manpage.
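For instance, the vaultwarden-data.volume referenced earlier can itself be a quadlet file, and a shared network works the same way. A minimal sketch (the label and the subnet values here are my own, not from my actual setup):

```
# vaultwarden-data.volume
[Volume]
Label=app=vaultwarden

# homelab.network
[Network]
Subnet=10.89.0.0/24
Gateway=10.89.0.1
```

Containers can then reference them with Volume=vaultwarden-data.volume:/data and Network=homelab.network, and systemd will create the resources before starting the containers that need them.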
And what about the OS?
Another thing I didn't like about hosting my own Kubernetes cluster was that I also had to manage the underlying operating system. So, this time, I wanted an operating system that would:
- Update itself unattended
- Be immutable
- Be as small as possible
- Ideally, ship with SELinux
Bonus points if it wasn't related to Red Hat.
I found openSUSE MicroOS, which fits all these criteria.
During MicroOS setup you can choose the "container" mode, which comes with Podman already installed and configured. Place your *.container, *.volume, and *.network files in /etc/containers/systemd and you're good to go: simply start each service using systemctl start.
To copy those files and start the services, I created an Ansible playbook, since I template my files a bit so I can use variables for things like secrets across hosts. Nothing I wouldn't have had to do in a Helm chart anyway.
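The playbook boils down to templating the quadlet files onto each host and reloading systemd. A hypothetical sketch (task names, paths, and the templates/quadlets/ layout are my own, not the actual playbook):

```yaml
# deploy-quadlets.yml -- hypothetical sketch of the playbook described above
- hosts: homelab
  become: true
  tasks:
    - name: Template quadlet files onto the host
      ansible.builtin.template:
        src: "{{ item }}"
        dest: "/etc/containers/systemd/{{ item | basename | regex_replace('\\.j2$', '') }}"
        mode: "0644"
      with_fileglob:
        - templates/quadlets/*.j2
      notify: Reload systemd

  handlers:
    - name: Reload systemd
      ansible.builtin.systemd_service:
        daemon_reload: true
```

Secrets and per-host values live in the Jinja2 templates as variables, so the same quadlet definitions can be reused across hosts.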
Finally, for running MicroOS in the cloud, I've switched to Hetzner, which offers MicroOS as an available operating system for their VPSes.
Next steps
I know Podman Quadlet can consume Kubernetes deployment definitions directly, but I couldn't find a good way to use that with my currently deployed Helm charts, so I took the path of rewriting them in systemd format. I still need to take a closer look at the podman kube command in the future.
In conclusion, was this migration worth it?
Yes! It was a bit tedious to rewrite all of my deployments in systemd format (only because there were so many of them), but I haven't had to think about the operating system or the deployments for a while. I know the host will update and reboot automatically in an atomic fashion, and can thus be rolled back easily if anything goes south.