It’s proprietary, after all. I understand paid is fine, but even then, it should ideally still be open source.
So, why is Unraid an exception?
Thanks
If I’ve learned anything in the 30-odd years of homelabbing and running a SaaS application, it’s that you need to learn the basics of the command line. That will help you master running anything on a *nix server.
But most new homelabbers are only able to use a GUI, so Unraid is the best way to get into running stuff with the least effort.
I keep thinking a homelab 101 course would help those new to homelabbing get going without a GUI.
Oh hi, I picked up Linux for the CLI and shell; the UI has nothing to do with it for me.
There is no easy way to break into the scene and unraid is a one stop shop. So you want to set up a few little projects on your own? It’s learning containerization, learning networking and NAT, figuring out filesystems (and shares and share locations) and backup strategies, how to integrate with VPN, deployment strategies and templates (think Ansible, docker compose, make scripts, etc). There’s a shitload to know and not a “for dummies” place to learn it.
Consider the “easy” first project of the ARR suite + Jackett, integrated with Transmission and with Jellyfin or Plex: this is not a couple of hours of work if you’ve never done it before. With Unraid it’s probably one video tutorial and less than an hour? Idk, I haven’t done that one yet. But it’s a common request.
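For a sense of what the manual route involves, here is a rough docker-compose sketch of that stack. This is purely illustrative, not a tested config: the `lscr.io/linuxserver` images are real and the ports are their defaults, but the volume paths are made-up placeholders, and a working setup needs more (PUID/PGID, timezone, indexer wiring inside each app, etc.).

```yaml
# Hypothetical compose file for the media stack described above.
# Paths under ./config and /data are placeholders - adjust for your layout.
services:
  sonarr:                                   # one of the ARR apps
    image: lscr.io/linuxserver/sonarr
    ports: ["8989:8989"]
    volumes: ["./config/sonarr:/config", "/data:/data"]
  jackett:                                  # indexer proxy for the ARRs
    image: lscr.io/linuxserver/jackett
    ports: ["9117:9117"]
    volumes: ["./config/jackett:/config"]
  transmission:                             # download client
    image: lscr.io/linuxserver/transmission
    ports: ["9091:9091"]
    volumes: ["./config/transmission:/config", "/data/downloads:/downloads"]
  jellyfin:                                 # media server
    image: lscr.io/linuxserver/jellyfin
    ports: ["8096:8096"]
    volumes: ["./config/jellyfin:/config", "/data/media:/media"]
```

And that file is only the start; you still have to point each app at the others and at the right paths, which is the part that eats the evening.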
There are a lot of things that need to hang together for a good homelab to work, and unraid for me has made it so I don’t have to spend all my time doing plumbing and background work to try a project and see if I even want to use it.
I would absolutely do a 101 on self-hosting, but it seems everybody has different priorities on what to host and how, so it’s probably not cut and dried to implement.
The course would be more of a ‘how do you use the terminal and where to find help’
A lot of the requests I see are the result of not having good command-line experience; once you can use a terminal, the world is yours to command.
You’ve mistakenly conflated the Self Hosted community with the FOSS community. There is a lot of overlap in interests between the two, but the Venn diagram of those communities is not at all a circle. UnRAID isn’t an exception to self-hosting, it’s a textbook example of self-hosting.
It’s a similar thing with the SH community and homelabbing. All home labs are self-hosted, obviously, but home labs are sandboxes for learning, testing and prototyping. A Raspberry Pi that runs one service your home depends on, and that you don’t tinker with outside of updates, isn’t a home lab.
Decent UI. Affordable lifetime pricing. Actually just-works. No retrospective enshittification. Free trial is actually free, not ad supported.
You get what you pay for, and you’re not the product.
Free Tier? You mean the 30 day trial?
I genuinely thought it wasn’t limited, but yes, the trial.
*No retrospective enshittification yet
It’s not yours; they can choose to do so at any time.
But they haven’t so far, and the question was why it’s popular at this moment.
Possible future enshittification disqualifies all software, unless you prevent it from going online - which you can also do with Unraid.
No, enshittification is a proprietary-software disease. FOSS can be rug-pulled and future development can become proprietary, or enshittified features can be added (Ubuntu is an example of both, with Snap and with selling search data), but it is MUCH MUCH harder than when starting proprietary.
It being self hosted is one of the few real protections it has.
Unraid is easy to start with when you have no idea what you’re doing. Other stuff often requires more up front work to setup.
The paid licence is just the cost of the conveniences.
UnRAID is also great when you know exactly what you’re doing but you do this stuff for work every day and your home stuff you want to be easy and out of the box lol.
My only regret is that I have only one upvote to give this post.
Just because I have the skills to setup a cluster of mini-pcs doesn’t mean I want to spend my one-precious-fucking-spare-hour a day making the thing work.
See also: a builder’s house, a mechanic’s car etc
Same, I’m a Linux user since redhat 5 and more than capable of running all the unraid features on a regular Linux distro or proxmox, truenas, whatever … I just don’t want to, I want flexible disk sizes and a bunch of docker containers, that’s it. Unraid offers that in a great package.
Is it much easier than TrueNAS?
I went with TrueNAS because it’s open-source, and it’s been smooth sailing.
(I just use it as a NAS, nothing more)
I’ve tried both Unraid and TrueNAS. While I greatly prefer TrueNAS, Unraid is much easier to set up and get going for beginners. It’s been a while since I’ve set up TrueNAS from scratch, but last I tried, it wasn’t a very beginner friendly experience. If you weren’t already familiar with ZFS, you were in for a pretty difficult time.
Couldn’t say; for me it was way, way easier than ESXi, which was my first break into the space. And also more complete/straightforward than bare metal, which was what I had been doing before Unraid.
I paid for the lifetime license. No regrets.
Even in the open source community, the libre-ness of a product is just one of many factors. The fitness for a purpose, the initial difficulty of the setup, the continuous difficulty of operation and maintenance, the pace of development (if applicable), the professional or community support structure, the projected longevity of the product or service, and the general insanity of the people involved are all important factors that can, and often do outweigh the importance of open software.
They had the right product at the right time. No other free or paid alternative was that user-friendly in letting laymen mix and match multiple disks while having redundancy.
Doing that with the pure Linux command line at the time was inconceivable for 99% of users (at most, a RAID1 with mdadm over two drives could easily be attained). Windows Home Server was initially an alternative, but Microsoft was completely misguided, and the “improvements” in Windows Home Server 2 completely killed it.
Then they added docker support and it was even easier to self host everything.
But if they tried to launch today, with how mature the free alternatives are, they would never reach the critical mass of adoption needed to be sustainable.
For example, I don’t think that the paid fork of truenas that LTT has economically backed is going to be successful
As a user who paid for Windows Home Server, here’s why Windows Home Server 2(011) was a complete failure:
- Updating to WHS2 required a full wipe - unacceptable to everyone
- Updating to WHS2 required paying full price, not an upgrade price - lol
- The system drive wasn’t covered by redundancy, and you would lose all the settings if the drive died
- The data drives also couldn’t get any kind of redundancy, as they REMOVED the feature from the server and moved it to clients! What the fuck? It was the main selling point! Easy RAID for everyone. What’s the purpose of the “home server” if it couldn’t pool drives, while clients with Windows 8 Home could instead set up a massive, redundant pool of 10 drives???
- They removed the useful feature that automatically backed up all the Windows computers on the network
- They removed basic features like the media gallery and such; to get that you would need Windows Media Center… but 6 years later they killed Windows Media Center
For example, I don’t think that the paid fork of truenas that LTT has economically backed is going to be successful
Maybe not in the short term.
But he mentions them on every occasion where they’d use TrueNAS that doesn’t require advanced configuration. And it really is just a pretty frontend with some additional features.
So I don’t see why it can’t be successful (except for too-high prices).
Some don’t care much about the license. Like how many people run Xpenology (hacked synology dsm) or Plex or stuff like that.
For me, it was the parity system, the fact that I could mix different disk sizes, and the VM + graphics-card passthrough setup. Unraid helped me get started in this world.
Years later, after gaining experience with all of that and investing in a dedicated PCIe card and disks, I’ve moved my data and containers to TrueNAS.
Still using Unraid for the VM part, but I plan to migrate that to TrueNAS too at some point.
I picked unRAID to be able to mix disk sizes. It also requires little maintenance in my experience, so that’s also a plus.
Because it’s easy and does all the hard stuff out of the box? Also any sized drives!
For me, it was initially a jumping off point because I was more comfortable with GUIs. Now it’s a matter of convenience. I’m much better than I was with CLI, docker, etc, but I find unraid makes management easier. Proprietary doesn’t necessarily equal bad. Since it’s built on top of open source, you can pivot if they start pulling stupid shit.
They have (had?) a fairly generous free tier that works well for people starting out.
I ended up buying a license after evaluation because the UI provides everything I reasonably want to do, it’s fundamentally a Linux server so I can change things I need, and it requires virtually zero fucking around to get started and keep running.
I guess the short answer is: it ticks a lot of boxes.
Off-topic, but if you want a competent Unraid alternative, then try Proxmox.
Aren’t they different solutions that also offer overlapping features (e.g. VMs and containerization)?
I would rather compare Proxmox with Hyper-V than Unraid.
Which is still not nearly as user-friendly as Unraid.
With Unraid I can browse the community store, click install, and with just one additional click I usually have the service fully running. It notifies me if there is an update, and I can install updates with a single click.
With Proxmox, I have yet to figure out how to update the installed services without manually ssh-ing into every single container and running a specific update command.
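For what it’s worth, for LXC containers you can script this from the Proxmox host itself instead of ssh-ing into each one. A rough sketch, assuming Proxmox’s `pct` tool and Debian/Ubuntu-based containers (run on the host as root; not a drop-in solution, just the shape of it):

```shell
#!/bin/sh
# Loop over all running LXC containers on this Proxmox node and run a
# package upgrade inside each one via `pct exec`.
# Assumes Debian/Ubuntu-based containers (apt-get); adjust for others.
for ctid in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    echo "Updating container $ctid..."
    pct exec "$ctid" -- sh -c "apt-get update && apt-get -y upgrade"
done
```

This only upgrades OS packages, of course; services installed outside the package manager still need their own update path, which is exactly the convenience gap Unraid fills.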
Unraid is light years ahead in terms of user-friendly UX for novice users.
I use Rancher’s store for that reason. Update the chart and the whole service updates.
Just wish they had a “homesteader” (that’s what I would call it) kind of chart catalog with some good defaults for homelab use. You know: assume Longhorn CSI, 1-to-12-node clusters, a small user base (1 to 20), etc. Pack dependencies from other charts in the catalog (if you need one Postgres DB, reuse as much of that deployment as possible for the next app that needs it, etc.).
You CAN do all of that now, but each app isn’t really aware of each other, and you have to set the configs for your actual lab.
I am hesitant about running a VM in Proxmox to run my docker services there. It doesn’t feel right to me (maybe I am wrong, what do I know…).
I also do not understand yet how this would work in a cluster. I don’t want all the services bundled on one node (then the whole cluster thing would have been a pointless exercise haha)
VM nodes still let you do rolling OS updates for everything besides the hypervisor.
I do get you. It’s why I run bare-metal containers on the Harvester cluster. A whole VM just feels wasteful for some of this stuff. I also have like 12 nodes (some new, some junk, some Pis) though, so I keep bare-metal workloads off of hypervisor/management nodes.
The big thing is that you can very easily mix and match different sizes of disks. ZFS as of recently can sort of do that, but it’s not as efficient.
Can 100% do this. Not just kinda. Works fine.
Can it access a file without spinning up all disks in the array?
I haven’t used ZFS in like a decade, but would strongly consider going back to it if it can do that now.
It can’t. And you lose space efficiency if the disks you add aren’t the same size as the old disks.
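To make the efficiency point concrete, here’s back-of-the-envelope math with three made-up disk sizes (8, 4 and 3 TB). The Unraid figure assumes single parity; the ZFS figure assumes a plain raidz1 vdev, where every disk effectively counts as the smallest one:

```shell
#!/bin/sh
# Hypothetical disks: 8 TB, 4 TB, 3 TB.

# Unraid single parity: usable = sum of all disks minus the largest
# (the parity disk must be at least as big as the biggest data disk).
unraid_usable=$(( (8 + 4 + 3) - 8 ))
echo "Unraid usable: ${unraid_usable} TB"   # prints 7 TB

# raidz1 over the same mixed disks: each disk is treated as the
# smallest, so usable = (number_of_disks - 1) * smallest_disk.
zfs_usable=$(( (3 - 1) * 3 ))
echo "raidz1 usable: ${zfs_usable} TB"      # prints 6 TB
```

Same three disks, roughly a terabyte of difference, and the gap grows the more mismatched the sizes are, which is why mixed-size pools are Unraid’s signature feature.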