Lately I've been reading a lot of posts about migrating to NixOS and how much better it is at managing a system. While I agree it has its strengths, I ran into issues that led me to stop using it as the main operating system for my servers and desktop.
A bit of history
I discovered NixOS back in 2018, and I quickly fell into the rabbit hole. I was in love with the idea. Everything was coming together nicely, and to be honest, using it felt like magic.
I first switched my laptop to it, then I started migrating my desktop and my servers. Yet, in 2024, I decided to stop using it: it had become a pain to maintain. Now I'm using openSUSE MicroOS with Podman Quadlets, with a deployment I've detailed in my previous post, so my responses below use that system as the comparison.
This post is written as a discussion with my past self: each section is named after how I would have defended the use of NixOS, and contains the response I have to that argument today.
Where it fell short
"System and package updates are automated and can be reverted easily if needed."
NixOS is truly efficient at this task: modules are used to enable various services, which are then configured automatically. When the system is updated, it just fetches the newest nixpkgs build successfully produced by Hydra (the NixOS build farm) and updates. If a package changes its configuration format, the module can abstract the change away, and users would not even notice the breaking change.
On paper, that's all one could hope for, but I ran into various issues. For instance, services were sometimes broken on my NixOS setup, not because the module was faulty, but because the underlying software was broken, so I had to revert.
But you know what does not revert well? The program's data structures that get downgraded during the rollback. For instance, software "A" is updated and broken, and must be rolled back; but the update that brought the new version of "A" also included software "B", and while "A" can be rolled back, "B" cannot, because of a database migration. When this happens, I find myself adding two nixpkgs remotes pinned at different commits to keep both packages running while waiting for a fix to software "A".
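The two-remotes workaround can be sketched as a flake with two pinned nixpkgs inputs. This is a minimal illustration, not my actual configuration: the commit placeholder, the `software-a` package, and the `services.software-a.package` option are all hypothetical names.

```nix
{
  inputs = {
    # Main input, where software "B" has already migrated its database.
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    # Older pin that still carries the working version of software "A".
    nixpkgs-pinned.url = "github:NixOS/nixpkgs/<older-commit-hash>";
  };

  outputs = { self, nixpkgs, nixpkgs-pinned, ... }: {
    nixosConfigurations.server = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        {
          # Override just software "A" with the older, still-working build,
          # while the rest of the system follows the main input.
          services.software-a.package =
            nixpkgs-pinned.legacyPackages.x86_64-linux.software-a;
        }
      ];
    };
  };
}
```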
Finally, even when the required update of software "A" is available, it sometimes cannot be used because nixpkgs is stuck, with Hydra having stumbled upon another issue.
Now, with my current approach of using an immutable OS with Quadlets, I have almost the same advantages as a NixOS setup. Updates can be made atomically and are easily reversible if anything goes wrong, and packages are not updated automatically: I have a centralized deployment repository with all versions defined as variables I can update easily. I could also use SemVer Docker image tags (like software-container:1) with a forced-pull option in the Quadlet configuration to update at each automatic reboot, but for now I'd rather pin the exact version and update when I have time to check changelogs.
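The pinned-version approach can be sketched as a Quadlet unit; the image name and paths here are hypothetical, not my actual deployment:

```ini
# /etc/containers/systemd/software.container
[Unit]
Description=Example service deployed as a Quadlet

[Container]
# Exact pin: bumped by editing this tag after reading the changelog.
Image=ghcr.io/example/software-container:1.2.3
# Alternative: track the SemVer major tag and let podman-auto-update
# pull newer images automatically:
#   Image=ghcr.io/example/software-container:1
#   AutoUpdate=registry

[Install]
WantedBy=default.target
```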
"All deployments are using the same language, and there is no template needed."
Most packaged services provide a way to be configured easily, usually through a services.service_name.config map, but most deployments also need something like extraConfig, and it's quite a mess: if you're lucky, it's a map that gets translated to JSON and merged with the main configuration; if not, it's a plain string appended to the configuration file, which gets hacky when the service requires settings in configuration sections that are already defined by the module's abstractions.
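The contrast between the two cases can be sketched like this; both service names and their options are hypothetical, picked only to show the pattern:

```nix
{
  # Lucky case: a structured map the module serializes and merges
  # with its own defaults.
  services.nice-service.config = {
    port = 8080;
    log-level = "info";
  };

  # Unlucky case: a raw string appended to the generated file. If the
  # module already emits a [main] section, this one collides with it.
  services.messy-service.extraConfig = ''
    [main]
    port = 8080
    log_level = info
  '';
}
```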
And it’s not homogeneous: sometimes only a few options are exposed, and other times the entire configuration file is mapped, making the abstraction layer so thin that one can wonder whether it’s being nixified for nixification’s sake.
Finally, managing secrets is tricky, since everything ends up stored in the Nix store, which is world-readable. Security-wise, that's an issue: services run directly on the OS (not in containers), and a compromised service exposes all secrets.
My setup provides almost the same advantages: the containers, the networks, the volumes, etc., are all declared with the same format (systemd unit format). It still needs templating, but that can be achieved by Ansible. Secrets stay in files that are not world-readable and are not accessible to the service that runs on the machine itself, and they're also kept in an Ansible Vault in my private repository.
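One way to wire a file-based secret into a Quadlet is through Podman's secret mechanism. This is a sketch of that option rather than my exact mechanism, and the secret name and image are hypothetical:

```ini
# /etc/containers/systemd/db.container
[Container]
Image=docker.io/library/postgres:16
# Created beforehand with: podman secret create db_password <file>
# The secret is mounted at /run/secrets/db_password inside the
# container only; it never lands in a world-readable store.
Secret=db_password
Environment=POSTGRES_PASSWORD_FILE=/run/secrets/db_password

[Install]
WantedBy=default.target
```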
"I can use the Nix daemon to deploy all my servers from my desktop."
Nix derivations can be built locally and then pushed to a server, which is useful when you have slow servers. But sharing already-built software is more complicated with Nix than with containers: for custom software, it requires setting up a custom store cache (like Cachix), which makes it harder to use.
My current Quadlets setup gives the same advantages without some of those issues: it can build the container images locally or even use the already-built containers provided by the project from GHCR (GitHub Container Registry) or Docker Hub. Also, using Ansible, remote deployment isn't an issue.
"I can easily port a service to another server using my modularized flakes."
As long as NixOS runs on a machine, the declaration can be reused, but that's true of every portable way of deploying software, be it Podman, Docker, Nix, Flatpak, etc., as long as the target system runs the underlying tooling.
"NixOS has the freshest set of packages, and even if some are outdated, the derivation can be modified easily."
It’s not a fair claim, because it compares nixpkgs with other repositories like Arch/AUR or openSUSE Tumbleweed: for services that can be deployed, it would be fairer to compare it to Docker Hub, and there nixpkgs might not be the freshest. Software is often more up to date when published directly by its maintainers, who push to GHCR (GitHub Container Registry) or Docker Hub.
Also, software sometimes stays outdated for a while when Hydra is stuck, which requires either waiting or taking the updated derivation from a pinned version of nixpkgs and overriding everything.
I still use Nix as a package manager on other OSes
Most of my criticisms concern NixOS as a complete operating system, but Nix has a lot of strengths when it comes to development environments using flakes. Even if the Flake API is far from perfect, it’s easy to create a reproducible and portable dev environment that will natively run on macOS and Linux, and even Windows with WSL.
I managed a development environment this way at my previous jobs, and some of my colleagues relied exclusively on it, sometimes even porting it to other repositories related to the project, even when I wasn’t actively maintaining it. The handiness for developers was truly the main argument that helped drive adoption there.
Conclusion
I still understand why people like NixOS. It’s not a bad OS per se, it’s just not a fit for me, for all the reasons mentioned above: I do not want to bother with any of this. I want my OS and deployments to be silent, or at most background noise in my life: I want to enjoy the services I provide to myself without the pain of either relying on SaaS or wrestling with the difficult parts of self-hosting.
Even my current setup still requires a bit of work, and I do not want to recreate the conditions of a job in my personal life now that I've left IT. Maybe if I were still working in this field and actively managing infrastructure, the time NixOS demands would be something I’d be willing to put in: but that’s not where I’m at in my life, and I like being able to put all of this out of my mind unless I'm actively working on it.