Awesome, go for it! ansible is (more or less) directed ssh: inventory, roles, playbooks + templates, etc. for learning, definitely go for it! if you were to roll your own automation framework, you'd end up reinventing ansible.
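a minimal playbook sketch just to show the shape (the host group, hostnames, and filenames here are made up):

```yaml
# inventory.ini
# [pis]
# pi1.local
# pi2.local

# update.yml -- keep every managed box's packages current
- hosts: pis
  become: true
  tasks:
    - name: make sure packages are current
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
```

run it with `ansible-playbook -i inventory.ini update.yml` and ansible ssh's to every host in the group for you.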
how many devices do you need to update?
ansible wants to have a home base and an inventory of devices to manage. for example, if you have a flock of Raspberry Pi's and a server stashed under a desk somewhere, yes, ansible is 100% going to simplify your life.
ansible mgmt from a device to that same device… it might be just as easy to make backups and track your file deltas. the temptation is to use ansible so you remember what changes you made, but it can be a PITA when you need to make a quick change and have to work thru the playbook (unless you have playbooks at the ready).
what you are attempting is called high availability; it might not be worth it. usually you'd need three different physical devices (in a homelab situation)…a load balancer to route traffic, and two nodes to handle said traffic. to perform your storage upgrade, you pull one node out of the load balancer, do your upgrade, and then add it back in. then you do the same for the other node. this gives you 100% service availability…but this is a lot of work for a one-person show!
do that for fun - you do you. however, if you can handle a few hours of downtime and don't want to burden yourself with the long-term care+feeding the above setup will require…
remember you can use USB boot, mount both your drives, and then if you are lucky, your distro (on USB) will have a disk management/cloning utility.
click click click, boom…you have a bit-perfect copy of the small M2 on the large M2.
Do not change your small M2! power down, swap 'em, and power on! if it doesn’t work, you still have your OG M2 to boot from.
there are backup/restore utilities and other ways, each taking more and more time…but M2 is pretty quick.
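if your live USB doesn't have a cloning GUI, plain `dd` does the same job. a minimal sketch (the demo below clones image files so it's safe to run; the real-device command is shown as a comment, and the device names in it are placeholders - triple-check yours before running it):

```shell
# for real drives you'd clone device-to-device, e.g. (DANGEROUS, verify names with lsblk first):
#   sudo dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync

# safe demo with files: make a stand-in "small M2", clone it, verify bit-for-bit
dd if=/dev/urandom of=small.img bs=1M count=4 2>/dev/null
dd if=small.img of=large.img bs=1M conv=fsync 2>/dev/null
cmp small.img large.img && echo "bit-perfect copy"
```

the clone copies partition tables and all, so afterwards you'd grow the last partition to use the extra space on the bigger drive.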
For sure.
My point was more … first time, ever, you boot a raw device, a display can be handy unless you know what you are doing. Once it survives a reboot…
After that, if you need a GUI — run an X server on your main rig; the apps on your remote box connect to it as X clients, so the server itself never needs a display.
Usually it's handy to have a display during initial setup and cfg. Also, with X11 forwarding over ssh…you access your server's GUI over the network like god intended :)
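the forwarding bit is one flag (hostname and app below are placeholders):

```shell
# -X enables X11 forwarding; remote GUI apps render on your local X server
ssh -X user@homeserver

# then, in that remote shell, any GUI app pops up on your local display:
#   gparted &

# -Y (trusted forwarding) is faster but less sandboxed; save it for a trusted LAN
```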
A NAS serves data to clients; I know this is tilting conventional wisdom on its head, but hear me out: go for the most inexpensive, lowest-power, storage-only NAS that you can tolerate, and instead…put your money into your data transport (network) and into your clients.
As much as possible, simplify your life - move processing out of middle tiers, into client tiers.
you could probably roll your own pretty easily, just prowl around /proc etc
https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html
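a tiny sketch of that roll-your-own idea, reading a few of the usual /proc files (assumes Linux; the output format is just made up for the demo):

```shell
# pull basic stats straight out of /proc - no agent, no daemon
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)   # total RAM in kB
load1=$(cut -d' ' -f1 /proc/loadavg)                    # 1-minute load average
uptime_s=$(cut -d' ' -f1 /proc/uptime)                  # seconds since boot
echo "mem=${mem_kb}kB load=${load1} uptime=${uptime_s}s"
```

cron that into a log file and you have a poor man's monitoring system.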
Actually…for a NAS, your network link is your limit.
You could have 4x PCIe5 M.2's in full RAID, saturating your bus with ~64GB/s of glory, but if you are on 1Gb/s wifi, that's what you'll actually get.
Still, would be fun to ssh in and dupe 1TB in seconds, just for the giggles. Do it for the fun!
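back-of-envelope math on why the link is the limit (assumes 100% link efficiency, so real numbers are worse):

```shell
# time to move 1 TB at various link speeds
bytes=$((1000*1000*1000*1000))        # 1 TB
for gbps in 1 10 40; do
  bps=$((gbps*1000*1000*1000/8))      # link speed in bytes/sec
  echo "${gbps} Gb/s -> ~$((bytes/bps)) s"
done
```

at 1Gb/s that's over two hours for a terabyte; the local NVMe array would have been done ages ago.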
Remember, it is almost always cheaper and fast enough to use a Thunderbolt or USB4 (40Gb/s) flash drive for a quick backup.
Is it something you can address with your ISP?
Changing ISP is just not an option for most people. Sometimes a different class of service will improve link reliability.
The other thing you could consider is some kind of mobile hotspot.
If you are hosting everything, why do you need your ISP? Is it for access to your home services from outside your home?
I like how you have a home smartcard. I don't imagine many do.
Why do you think cloud operators are lying?
The Azure breach is interesting in that it is vs MSFT SaaS. We're not talking raw produce here; these are the ready-to-eat meals in the deli section!
The encryption tech in many cloud providers is typically superior to what you run at home to the point I don’t believe it is a common attack vector.
Overall, hardened containers are more secure vs bare metal, as the attack vectors are radically different.
A container should refuse to execute processes that have nothing to do with container function. For example, there is no reason to have a super user in a container, and the underlying container host should never be accessible from the devices connecting to the containers it hosts.
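a hedged sketch of what that looks like with docker (the image and network names are placeholders):

```shell
docker run \
  --user 1000:1000 \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network dmz-net \
  my-hardened-app:latest

# --user:        no super user inside the container
# --read-only:   root filesystem is immutable at runtime
# --cap-drop:    strip all Linux capabilities
# --security-opt no-new-privileges: processes can't escalate via setuid etc.
# --network:     keep it on its own network, off the host/home LAN
```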
Bare metal is an emotional illusion of control esp with consumer devices between ISP gateway and bare metal.
It’s not that self hosted can’t run the same level of detect & reject cfg, it’s just that I would be surprised if it was. Securing self hosted internet facing home labs could almost be its own community and is definitely worth a discussion.
My point is that it is simpler imo to button up a virtual env and that includes a virtual network env (by defn, cloud hosting).
Well, with bare metal, yes, but when your architecture is virtual, configuration rises in importance as the first line of defense. So it's not just `yum update` and reboot to remediate a vulnerability; there is more to it, and the odds of a home-lab admin keeping up with that seem remote to me.
Encryption is interesting, there really is no practical difference between cloud vs self hosted encryption offerings other than an emotional response.
Regarding security issues, it will depend on the provider but one wonders if those are real or imagined issues?
Operating internet-facing services in the home, in my opinion, requires a layer-3 managed switch so that internet traffic is 100% separated from home traffic, w/attendant DMZ to bridge home<-> internet-facing services safely.
L3 managed is the simplest method to contain a penetration to just the internet-facing devices (which is still pretty bad). Cloud hosting is more manageable, but you must watch the spend.
The biggest issue is a DDoS attack on the home network, which could impact internet-facing services and home clients (streaming TV, gaming, email, etc.).
Certain cloud providers are as secure, if not more secure, than a home lab. Amazon, Google, Microsoft, et al. are responding to 0-day vulnerabilities on the reg. In a home lab, that is on you.
To me, self-hosted means you deploy, operate, and maintain your services.
Why? Varied…the most crucial reason is 1) it is fun because 2) they work.
Yeah, I agree, and ultimately shame on the TV manufacturer. However, a lot of software just won't connect, so it's not really a Plex issue. If they use a library that won't support it…
To be fair, old SSL isn't really SSL at all & is considered a vulnerability by a lot of libraries.
This is a great question. The photo ecosystem is one where I haven’t found a FOSS soln that hits all the marks of subscription services. I would focus on whatever helps you search.
I do feel like if files have accurate dates in the file system and in metadata, then folders based on event make sense.
However subscription photo services are very good at automatically sorting - these dates are holidays so these pictures are probably for that holiday. Your home location is here, these pictures are over there so this must be your trip to there. These pictures have these people or animals, so these pictures are about them.
With that comes seamless integration across devices - a picture taken at time now can be seen on a tv or laptop at time +x. Etc.
I have left the FOSS photo world but am definitely interested to see where it is. With digital photography finding pictures is the real trick. using folders like a tag hierarchy at least gets you in the ball park imo. But I have no practical knowledge any more.
In general, if you lose your iSCSI storage, you are hosed.
The way around this is replication, where you write every byte to two locations, plus pseudo load balancing where you have an active and an inactive link. When power on one storage fabric goes down, you flip to the other. iSCSI alone isn't really good for this use case.
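one common way to bolt that on under Linux is a software mirror across two iSCSI-backed block devices (a rough sketch; the target IQNs, portal IPs, and device names are all placeholders, and real setups need multipath/fencing thought too):

```shell
# log in to a LUN on each storage fabric
sudo iscsiadm -m node -T iqn.2024-01.lab:storage-a -p 192.168.10.10 --login
sudo iscsiadm -m node -T iqn.2024-01.lab:storage-b -p 192.168.10.11 --login

# assume the two LUNs show up as /dev/sdb and /dev/sdc, then mirror them
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# every write now lands on both fabrics; if one drops, md keeps running degraded
```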
from a diagramming pov, remember to document the link speed at each end as well as the ethernet cable category. if your cable modem supports 10Gb/s, I would really, really look at 10Gb/s network devices pretty closely, budget allowing. I would steer clear of managed; it's just a PIA for your setup.
You might want to experiment with modem <-> switch <-> wifi vs modem <-> wifi <-> switch. remember wifi is just ethernet as far as your LAN is concerned, so the order may or may not matter much (the vendor gets a vote). there does not appear to be a reason to march wired traffic thru the wifi router, but maybe there is???
def agree an 8 port switch might be better for you, use a 5 to split a single cable at a single location (say, tv + game console + speaker combo)
Remember, if you need a WiFi mesh (multiple access points) to connect your devices, link the mesh backhaul together via ethernet cable if possible, so that you don't chew up half your speed with wireless backhaul chatter.