I’m running three servers: one for home automation/NVR, one for NAS/media services, and one for network/firewall services.
Does this breakdown look doable based on the hardware? Should the services be distributed differently for better efficiency?
Server 1 and 3 are already up and running. I just received my NAS, and am trying to decide where to run each service to best take advantage of my hardware.
I’m also considering UnRaid instead of Proxmox for a NAS OS. I just chose Proxmox because I’m familiar with it, and I like the ability to snapshot. I also intend to run Proxmox Backup Server offsite at some point, and I like the PVE/PBS integration.
Any advice would be much appreciated!
Personally I would keep it simple: run a separate NAS and run all your services in containers across the devices best suited to them. The i3 is not going to cope with Jellyfin while also hosting those other services. I tried running it on an N100 and had to move it to a beefier machine (an i5). Immich, for example, will use a lot of resources when performing operations, just a warning.
If you mount NAS storage for hosting the container data, you can move services between machines with minimal issues. Just make sure each service is defined in a docker-compose file, and keep those files (and the data) on the NAS.
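As a rough sketch of that layout (the /mnt/nas mount point, paths, and the Jellyfin example are just placeholders):

```
# /mnt/nas/compose/jellyfin/docker-compose.yml — hypothetical layout.
# All bind-mount sources live on the NAS, so the stack can be brought
# up on any machine that mounts the same share.
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - /mnt/nas/appdata/jellyfin:/config   # container state kept on the NAS
      - /mnt/nas/media:/media:ro            # media library, read-only
```

Moving a service to another box is then just `docker compose down` on one host and `docker compose up -d` on the other.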
That completely negates the need for VMs and their overhead, and you can still snapshot the machine: if you run Debian as the OS there is Timeshift, and other distros have similar tools.
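For the Timeshift route, a manual snapshot before an upgrade is a one-liner (assuming Timeshift is already configured with a snapshot target):

```
# Take a named snapshot before upgrading; list or restore if it goes wrong.
sudo timeshift --create --comments "pre-upgrade"
sudo timeshift --list
sudo timeshift --restore
```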
Quick Sync should let the i3 handle Jellyfin just fine if you’re not going beyond 1080p for a couple of concurrent users, especially if you configure the nice values to prefer Jellyfin over Immich.
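For containers specifically, compose has no nice key; the closest equivalent is relative CPU weighting. A sketch, with hypothetical service definitions:

```
# cpu_shares only matters under contention: when both containers are
# busy, Jellyfin gets roughly four times the CPU time of Immich.
services:
  jellyfin:
    image: jellyfin/jellyfin
    cpu_shares: 2048
  immich-server:
    image: ghcr.io/immich-app/immich-server
    cpu_shares: 512
```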
I’m not familiar with the platform the N300 sits on, but it might be worth doing the initial setup on something that leaves room to upgrade the CPU later if it causes trouble.
If OP is going for multiple systems, I’d definitely agree on making one of them a pure NAS and let a more upgradable system run the chunky stuff.
Most of my content is 4K H.264. You may be right about 1080p, but I generally don’t have content at that resolution.
Worst case scenario, he can always keep the N300 for other stuff if it doesn’t work out.
Had issues with docker compose mounting NFS storage.
It seemed like the share got disconnected while in use by the container.
I did eventually resolve it by manually mounting it on the host.
Any idea why that happens?
Host OS: Debian 12
NFS server: Debian 12 in a Proxmox VM
Mount your NFS share in fstab and make sure Docker is set to wait until the mount is available. Here is a guide: https://davejansen.com/systemctl-delay-start-docker-service-until-mounts-available/
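The two pieces look roughly like this (server address and mount point are placeholders; the guide uses After=/Requires= on the generated mount unit, and RequiresMountsFor= is the equivalent shorthand):

```
# /etc/fstab — _netdev makes systemd wait for the network before mounting
192.168.1.10:/export/data  /mnt/nas  nfs  defaults,_netdev  0  0

# Drop-in via `sudo systemctl edit docker.service` — don't start Docker
# until /mnt/nas is actually mounted.
[Unit]
RequiresMountsFor=/mnt/nas
```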
I’ve only had to delay on my N100s.
So I have the mounts set in fstab and then just use those paths in my compose files. All my machines have the same paths.
Oh, I only had the problem when mounting with compose. I did resolve it by using fstab.
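For anyone comparing the two approaches: the compose-managed variant (the one that presumably misbehaved) declares the NFS export as a named volume, so Docker performs the mount itself at container start instead of relying on a host-level fstab mount. Address and export path here are placeholders:

```
# Compose-managed NFS volume — Docker mounts the export at startup.
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,rw"
      device: ":/export/media"
```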
Oh well… I was hoping you’d have a better solution. At least that approach works very well.
The advantages you gain from running a hypervisor on something like ZFS are immeasurable: snapshotting, replication, snapshot backups, and high availability. You don’t have to quiesce machines to back them up, and you can take instant COW snapshots before upgrades.
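On Proxmox with ZFS, that pre-upgrade snapshot is a single instant command (the dataset name follows PVE’s usual scheme but is hypothetical here):

```
# Instant COW snapshot of a VM disk before an upgrade...
zfs snapshot rpool/data/vm-100-disk-0@pre-upgrade
# ...and rollback if the upgrade goes sideways.
zfs rollback rpool/data/vm-100-disk-0@pre-upgrade
```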
KVM doesn’t really have overhead; it’s part of the kernel. Maybe a bit of RAM, but with LXCs it’s negligible.
I didn’t think OP was going the ZFS route so it wouldn’t matter on that point.
His Server 2 will be running on the red line imho, so any overhead would have an impact.