202409221431
Status: #idea
Tags: #tech #unraid #nas #nixos
migrating from unraid to nixos
Purpose
My current application of unRAID is a little janky in that I'm not using the built-in docker container manager or the CA store. Instead, I'm using the compose-manager plugin to install Komodo and manage my stacks within that. It doesn't make sense to run so many layers that add no functionality for my use case. Ideally, I'd like a minimal backend that frees up resources for the systems doing the heavy lifting.
Additionally, I hardly use any of the benefits unRAID offers, particularly the VM manager or even parity, though I may consider parity down the road (but probably not, since it's a waste of raw storage).
Justification
The new NAS will be built on NixOS. My reasoning is that NixOS is, first and foremost, reproducible. The idea is that the entire configuration will live on GitHub, as I'm currently doing with my compose files. Combined with my automated appdata backups via Kopia, this means that if my server decides to implode, reconfiguration should only take a matter of minutes. A quick pros/cons comparison against unRAID follows:
Pros
- Reproducible
- Off-site git backups for critical configurations
- No USB dependence
- No drive limit
- Handrolled for a fully customized deployment
- Free (sanity aside)
- Nerdy as fuck
Cons
- Not hardware agnostic (easily mitigated)
- No central GUI (unused anyway...)
- No support (professional/community)
- Could implode at any moment
Implementation
I'm not entirely sure how I'm going to accomplish this. But, to start, I'll use a NixOS VM so I can build the system properly, over time, without disrupting current services. Thanks to NixOS's reproducibility, the configuration I build in the VM can serve as the base for the new NAS, and it should only need a line or two changed before deployment. Allegedly. This is going to go to shit super quick.
I have a few things in mind (this is what passes for a plan); rough config sketches for the bigger pieces follow below:
- Deploy a NixOS VM with virtualized parameters resembling the host
    - 8GB RAM should suffice for testing
    - 4x8GB virtual hard disks to simulate the array
    - 2x8GB virtual hard disks to simulate the cache
- Configure networking
    - Use 10.0.0.20 as a placeholder IPv4 address before moving to the production 10.0.0.10
    - Possibly implement IPv6 addressing, though it's not supported by all clients and could pose complications for emby and other forward-facing services
- Set up and configure docker
    - Might be able to just use flakes and nixpkgs instead of docker, which could be... interesting
- Configure SSH
    - Generate SSH keys for crate-laptop and crate-desktop
- Configure mergerFS to build the drive pool
- Consider SnapRAID or alternatives for bit parity
    - This might not be necessary, as the bulk of my data is easily replaced; the data that can't be replaced (pictures/videos/documents) can be backed up the same way appdata is, with Kopia
- Consider options to replicate the mover service
    - Might be possible to just use a cron job. I dunno. We'll see
- Set up the iGPU for transcoding
- Configure emby, sonarr, and sabnzbd to verify basic functionality
Planned share layout:
- data: /data
- appdata: /appdata
- cache:
    - cache_data: /cache/data
    - cache_appdata: /cache/appdata
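A rough sketch of the static addressing piece, assuming a plain NixOS module; the hostname, interface name, gateway, and DNS below are placeholders for whatever the VM/host actually uses:

```nix
{ config, lib, pkgs, ... }:

{
  networking = {
    hostName = "nix-nas";                   # placeholder hostname
    useDHCP = false;
    interfaces.enp1s0.ipv4.addresses = [{   # interface name is an assumption
      address = "10.0.0.20";                # swap to 10.0.0.10 for production
      prefixLength = 24;
    }];
    defaultGateway = "10.0.0.1";            # assumed gateway
    nameservers = [ "10.0.0.1" ];           # assumed DNS
    enableIPv6 = false;                     # flip on later if IPv6 ends up being worth it
  };
}
```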
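Docker and SSH are both mostly one-liners plus keys. A sketch with a made-up admin user and placeholder public keys (services.openssh.settings is the current option path; older releases used flat options):

```nix
{ pkgs, ... }:

{
  virtualisation.docker.enable = true;

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = false;   # keys only
  };

  users.users.crate = {                        # hypothetical admin user
    isNormalUser = true;
    extraGroups = [ "wheel" "docker" ];
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAA... crate-laptop"       # placeholder public keys
      "ssh-ed25519 AAAA... crate-desktop"
    ];
  };
}
```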
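If SnapRAID does end up being worth it, nixpkgs ships a services.snapraid module. A minimal sketch, assuming one parity disk and the same /mnt/disks/* layout the mergerFS pool uses; all paths are assumptions:

```nix
{
  services.snapraid = {
    enable = true;
    dataDisks = {
      d1 = "/mnt/disks/disk1";   # assumed mount points
      d2 = "/mnt/disks/disk2";
      d3 = "/mnt/disks/disk3";
    };
    parityFiles = [ "/mnt/parity1/snapraid.parity" ];
    contentFiles = [
      "/var/snapraid.content"
      "/mnt/disks/disk1/snapraid.content"
      "/mnt/disks/disk2/snapraid.content"
    ];
    exclude = [ "*.unrecoverable" "/tmp/" ];
  };
}
```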
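For the mover, a cron job is basically a systemd timer in NixOS clothing. A sketch of a nightly sweep from the cache to the pool; the rsync flags and schedule are just a starting point, and this ignores in-use files entirely:

```nix
{ pkgs, ... }:

{
  systemd.services.mover = {
    description = "Move completed files from the cache drive to the mergerFS pool";
    serviceConfig.Type = "oneshot";
    script = ''
      ${pkgs.rsync}/bin/rsync -a --remove-source-files /cache/data/ /data/
      ${pkgs.findutils}/bin/find /cache/data -mindepth 1 -type d -empty -delete
    '';
  };

  systemd.timers.mover = {
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnCalendar = "03:00";   # run nightly at 3am
      Persistent = true;      # catch up if the box was off at the time
    };
  };
}
```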
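For the iGPU, the host side is mostly just VA-API drivers (this sketch assumes an Intel iGPU; on newer NixOS releases hardware.opengl has been renamed to hardware.graphics), plus passing /dev/dri through to whatever ends up running emby:

```nix
{ pkgs, ... }:

{
  hardware.opengl = {          # hardware.graphics on newer releases
    enable = true;
    extraPackages = with pkgs; [
      intel-media-driver       # newer Intel iGPUs
      intel-vaapi-driver       # older Intel iGPUs
      libvdpau-va-gl
    ];
  };

  # If emby stays in docker, the render node still has to be handed to the container,
  # e.g. devices = [ "/dev/dri:/dev/dri" ]; in its oci-containers definition.
}
```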
Other things to consider
Gonna have to set up a VM environment so that I can migrate HAOS from the existing server. Could probably run Proxmox on top to save myself some headache. I'll have to consider some other solutions for resource limiting to replicate the function of CPU pinning; no idea if docker or VMs natively support core/thread restrictions, or if I can just put some of my services in jails (rough sketch below).
What's important to consider is the shit that I didn't think of, which is certainly a lot.
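On the CPU pinning question: docker does support it natively (--cpuset-cpus / --cpus), and the oci-containers module can pass those as raw flags; libvirt handles vCPU pinning for the HAOS VM. The container name, image, and core numbers here are made up:

```nix
{
  # docker-side core pinning for a hypothetical emby container
  virtualisation.oci-containers.containers.emby = {
    image = "emby/embyserver:latest";
    extraOptions = [
      "--cpuset-cpus=4-7"   # pin to cores 4-7
      "--memory=4g"         # cap RAM while we're at it
    ];
  };

  # VM side: libvirtd for running HAOS; the vCPU pinning itself lives in the domain XML
  virtualisation.libvirtd.enable = true;
}
```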
Execution
So I've done the basics so far.
- Deployed a VM with the following structure:
    - 50GB / partition
    - 4x8GB array mounted to /data
    - 2x8GB cache disks for data and appdata, mounted to /cache/data and /appdata, respectively
- Enabled mergerFS to create a drive pool. The configuration here is shockingly easy:
fileSystems."/data" = {
  fsType = "fuse.mergerfs";
  device = "/mnt/disks/*";
  options = [ "cache.files=partial" "dropcacheonclose=true" "category.create=mfs" ];
};
- Added a server block to nix-config to keep all nix implementations in the same configuration. I might spin this out into its own config later, as they'll be so different there might not be much value in keeping them combined. That being said, it's nice to have everything in the same place so nothing gets lost or left behind. I'll keep it like this for now and see if there's any actual downside.
- Converted the komodo compose.yaml to docker-compose.nix (a rough sketch of what one converted service might look like follows this list)
- Deployed the nix-nas repository built from unraid
- Reconfigured paths:
    - /mnt/user/appdata : /appdata
    - /mnt/user/data : /data
    - /mnt/cache_appdata : /cache/appdata
    - /mnt/cache_data : /cache/data
- I've only deployed a few containers because I don't want to run into any conflicts (e.g. cloudflare tunnel/traefik)
- Existing stacks are deployed statically and need to be deleted and rebuilt from the nix-nas repository
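I'm not going to dump the whole docker-compose.nix here, but if it maps onto the stock oci-containers module, a single converted service looks roughly like this (image, ports, and environment values are placeholders):

```nix
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers.sonarr = {
      image = "lscr.io/linuxserver/sonarr:latest";
      ports = [ "8989:8989" ];
      volumes = [
        "/appdata/sonarr:/config"
        "/data:/data"
      ];
      environment = {
        PUID = "1000";
        PGID = "1000";
        TZ = "America/Edmonton";   # placeholder timezone
      };
    };
  };
}
```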
Notes
Firewall: NixOS enables a firewall by default, and it has to be opened up to allow access to containers from other devices on the network. This can be done either by allowing particular ports (80/443 for traefik) or by disabling it entirely; both options are sketched below. I don't need a firewall at this level, but if I only have to expose a pair of ports then I may as well leave it on for peace of mind.
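The two options in NixOS terms:

```nix
{
  # Option 1: keep the firewall and only expose traefik
  networking.firewall.allowedTCPPorts = [ 80 443 ];

  # Option 2: turn it off entirely
  # networking.firewall.enable = false;
}
```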
Atomic Moves: There's a very real possibility that my existing file structure breaks atomic moves. Maybe. I'll have to look at how my unRAID shares were configured and see if I'm applying it the same way. I think, in effect, I'll want to change /cache/data to be /data/usenet. I don't know if I can mount a physical drive into a mergerFS pool... that seems irresponsible, so that's what I'm going to try and do.
Appdata: Once this is ready for production I'll have to remember to change all the paths in the appdata folder to match the new directory structure (which I'll have to figure out sooner rather than later).
Compose Files: Earlier I changed instances of crate.dev to whiskeyjack.org so that I could spin up my new NAS without interfering with existing services. This isn't really necessary, and I'll have to change everything back before production.
Disks: I don't think it matters which disk I put where since I'm not using any sort of parity. When I actually do the swap I really should just use two array drives to start and see if there are any file collisions for duplicate directories.
Cosmos-server: Not sure if I want an all-in-one manager, but it's definitely worth looking into to simplify deployment. I don't think it allows for building from github repo, so this might be a non-starter.
Results
I was hopeful that I could have a single master repository containing my nix-config for all of my devices as well as the configuration for the NixOS-based NAS (including all my stacks). This hasn't proven particularly viable, for a number of reasons.
I tried running nix pkgs directly instead of having everything in docker containers because I figured the fewer layers there were, the fewer headaches and performance impacts there might be. This has been a resounding failure.
- nix pkgs:
    - Lack of standardization between the packages means that the declarative configuration parameters are different for every single program. Something as simple as setting an explicit port for 3-4 packages took like an hour of research to find the right configuration format. Stupid. Making small changes to the configuration shouldn't be anywhere near this complicated, especially when the settings I have currently are easily backed up. This isn't a win for me because it's slow, tedious, and the performance benefit it might give me is probably negligible.
    - NixOS has no built-in way of running multiple instances of the same package with different configurations. This means that I can't have separate instances of Sonarr and Radarr for 1080p and 4K content. I could just run those exceptions in docker, but then my configuration is fractured and I didn't want that. I should have guessed this would be the case before I even started, but nix has shocked me with its flexibility so far and I thought it would have some crafty way to achieve this. Nope. Maybe. Apparently nix has native containers that I'll have to look into (there's a rough sketch after this list).
- docker: Another option is to run docker and a komodo container, and then host all of my stacks from komodo. This was the original plan, and right now, it seems like it might be the only plan. With this implementation I'm basically in the same position as I am with unRAID, in that komodo manages my stacks, which are all stored on GitHub. This is fine, but it eliminates what I hoped would be performance improvements and some declarative configuration. Oh well, can't win 'em all.
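On those native containers: NixOS has declarative systemd-nspawn containers, which look like they could carry a second Sonarr/Radarr instance without docker. A minimal sketch; the addresses and stateVersion are placeholders, and I haven't tested any of this:

```nix
{
  containers.sonarr-4k = {
    autoStart = true;
    privateNetwork = true;            # own IP so it doesn't collide with the host instance on 8989
    hostAddress = "192.168.100.1";    # placeholder addresses
    localAddress = "192.168.100.11";
    config = { config, pkgs, ... }: {
      services.sonarr = {
        enable = true;
        openFirewall = true;
      };
      system.stateVersion = "24.05";  # match whatever release the host runs
    };
  };
}
```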
Conclusion
I'm not even sure it's practical to move away from unRAID at this point. I would lose a lot of the management tools that, while not essential, are handy for getting an effortless heads-up on the system. I could build a dashboard to replicate that functionality, but at a certain point, why?
For the time being I'm going to treat this as a future ambition that I'd like to work towards, but leave it as a very low priority. I'd like to continue researching the options NixOS might offer, but it's not realistic to even consider implementing it until unRAID presents some issues that make it no longer viable.
UPDATE
I'm probably still gonna do it and abandon the duplicate services for now. I have to figure out directory permissions, as sab/sonarr aren't enjoying the /data path. I think once I have a separate filesystem mounted wide open it should be fine; a possible declarative fix is sketched below.
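One way to handle the permissions declaratively; the media group and modes are assumptions, and container PUID/PGID would have to line up with them:

```nix
{
  users.groups.media = { };   # hypothetical shared group

  systemd.tmpfiles.rules = [
    "d /data        0775 root media -"
    "d /data/usenet 0775 root media -"
    "d /cache/data  0775 root media -"
  ];
}
```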
References
| Repo Name | Address | Purpose |
|---|---|---|
| nix-config | https://github.com/cratedev/nix-config | nixOS config for laptop, desktop, and NAS |
| unraid | https://github.com/cratedev/unraid | docker-compose files for unRAID |
| nix-nas | https://github.com/cratedev/nix-nas | docker-compose files for nixOS NAS |
| cosmos-server | https://github.com/azukaar/Cosmos-Server | home-server manager |