Let’s tinker around and accidentally break something.
and debug it until you have to reinstall your entire stack from scarch
Are you implying it’s possible to debug without having to reinstall from scratch? Preposterous! 😂
Scarched arth
Guess this is a good time to test my infrastructure automation.
Have you tried introducing unnecessary complexity?
unnecessary complexity?
I can help with that. It’s a skill I have. LOL
If you know how your setup works, then that’s a great time for another project that breaks everything.
logging is probably down
You do, of course, have a dedicated rsyslogd server? An isolated system to which logs are sent, so that if someone compromises another one of your systems, they can’t wipe the traces of that compromise from the logs?
Oh. You don’t. Well, that’s okay. Not every lab can be complete. That Raspberry Pi over there in the corner isn’t actually doing anything, but it’s probably happy where it is. You know, being off, not doing anything.
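For anyone who wants to wake that Raspberry Pi up for the job: a minimal sketch of a dedicated log host, assuming rsyslog with plain TCP forwarding. Hostnames and paths here are placeholders, not anyone’s actual setup.

```
# On the log host (/etc/rsyslog.d/remote.conf): accept TCP syslog on 514
module(load="imtcp")
input(type="imtcp" port="514")

# File incoming logs per sending host
template(name="PerHost" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="PerHost")

# On every other box (/etc/rsyslog.d/forward.conf): @@ = TCP, @ = UDP
*.* @@loghost.lan:514
```

The point of the isolated box is that the forwarding happens in near-real time, so an attacker wiping local logs is too late — the evidence already left.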
Ah. The approach that squirrel@piefed.zip suggested. ;)
Thanks for the tutorial though.
You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
*furiously adds a new item to the TODO list*
Does a $12 Shelly plug count?
I built an 8 outlet version of those with relays and wall outlets for… a lot less.
Tal just got the chaotic evil tag today.
If you do have the smart PDU and power management server, you probably also went down the rabbit hole of scripting the power cycling, right? Maybe hardened that server against disk corruption from power loss, so it can run until UPS battery exhaustion.
What if there is a power outage and NUT shuts everything down? Would be nice to have everything brought back up in an orderly way when power returns. Without manual intervention. But keeping you informed via logging and push notifications.
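One building block for that orderly bring-up: once the power-management box is back, it can wake everything else in sequence. A minimal Wake-on-LAN sketch (MAC and broadcast addresses are placeholders):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

Chain it with sleeps or health checks between hosts so dependencies (NAS before the things that mount it) come up in order, and log/notify at each step.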
All of those systems in your homelab…they aren’t all pulling down their updates multiple times over your network link, right? You’re making use of a network-wide cache? For Debian-family systems, something like Apt-Cacher NG?
Oh. You’re not. Well, that’s probably okay. I mean, not everyone can have their environment optimized to minimize network traffic.
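For Debian-family clients, pointing at the cache is one small file; the hostname here is a placeholder for wherever Apt-Cacher NG runs:

```
# /etc/apt/apt.conf.d/01proxy on each client
Acquire::http::Proxy "http://apt-cache.lan:3142";
# Apt-Cacher NG doesn't cache HTTPS repos out of the box; let those go direct
Acquire::https::Proxy "DIRECT";
```

After that, every `apt update`/`apt upgrade` on every box pulls through the cache, and the second machine to update gets its packages at LAN speed.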
I set this up years ago, but then decided it was better to just install different distros on each of my computers. Problem solved?
Now try migrating all your docker containers to podman.
The rare moment when everything actually works. 😄
Quick! Break something!
Maybe try this…
Wreck it Ralph!!
Don’t worry, you’re one Docker pull away from having to look up how to manually migrate Postgres databases within running containers!
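If that pull does land you on a new Postgres major version, the usual escape hatch is dump-and-restore. A rough sketch only — service names, user, and volume handling are placeholders for whatever your compose file actually uses:

```
# 1. While the OLD image can still start, dump everything:
docker compose exec -T db pg_dumpall -U postgres > backup.sql

# 2. Stop the stack and move the old data volume aside
#    (keep it until you've verified the restore!)
docker compose down

# 3. Point the db service at the new image and an EMPTY data volume,
#    bring it up, then restore:
docker compose up -d db
docker compose exec -T db psql -U postgres < backup.sql
```

The failure mode everyone hits is skipping step 1: the new major version refuses to read the old data directory, and by then the old image that could have dumped it is gone from the compose file.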
(Looks at my PaperlessNGX container still down. Still irritated.)
I feel your pain. Had to fix my immich, NC and Joplin postgresdb. Turned out, DB via NFS is a risky life. ;D
https://github.com/pgautoupgrade/docker-pgautoupgrade
Or if you are on k8s, you can use cloudnativepg.
I’m just using Docker on Proxmox, buuuut… I’m gonna look into this project. It looks like a LIFESAVER. Thank you for sharing this. You’re awesome! :D
Actually, one thing I want to do is switch from services being on a subdomain to services being on a path.
immich.myserver.com -> myserver.com/immich
jellyfin.myserver.com -> myserver.com/jellyfin

I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again. If your DNS doesn’t let you set wildcard A records, then switch to a better DNS.
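For reference, the wildcard looks like this in a BIND-style zone file (192.0.2.10 stands in for the reverse proxy’s address):

```
; Any name under myserver.com without its own record resolves here
*.myserver.com.   IN  A   192.0.2.10
```

The reverse proxy then routes on the Host header, so adding a service is purely a proxy-config change — DNS never gets touched again.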
Because I’m an idiot. 🤦 Thanks!
In Nginx you can do rewrites so services think they are at the root.
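Something like this, assuming the service listens locally on a placeholder port; the rewrite strips the prefix before proxying, so the app believes it’s served from /:

```
location /immich/ {
    rewrite ^/immich/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:2283;
    proxy_set_header Host $host;
}
```

Caveat: apps that emit absolute links (`/assets/...`) can still escape the prefix, so check whether the service offers its own base-URL setting before leaning on rewrites.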
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. The pain is tolerable so far (I have maybe 30 subdomains?), so I haven’t done anything yet.
The comments in this thread have collectively created thousands of person-hours worth of work for us all…
You have an intrusion detection system set up, right? A server watching your network’s traffic, looking for signs that systems on your network have been compromised, and to warn you? Snort or something like that?
Oh. You don’t. Well, that’s probably okay. I mean, probably nothing on your network has been compromised. And probably nothing in the future will be.
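If you do get around to it, Snort rules are approachable. A hypothetical local rule (sid 1000000+ is the conventional range for local rules; $HOME_NET and $EXTERNAL_NET come from your Snort config):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Inbound SSH attempt"; flags:S; sid:1000001; rev:1;)
```

That fires on any SYN from outside to port 22 inside — noisy on a real network, but a good first rule to confirm the sensor is actually seeing traffic.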
You can always configure your vim further
or learn emacs
Then configure vim using emacs
Can’t believe nobody here has mentioned NixOS so far. How about moving all of your configs into a flake and managing all of your systems with it?
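The starting point is smaller than it sounds; a minimal flake sketch managing one machine (host name and paths are placeholders):

```
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./hosts/myhost/configuration.nix ];
    };
  };
}
```

Then `nixos-rebuild switch --flake .#myhost` builds and activates it; adding machines is just more entries under `nixosConfigurations`, all version-controlled in one repo.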
I made a git repo and started putting all of my dotfiles in Stow, and then I forgot why I was doing it in the first place.
So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.
git commit --message 'So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.'
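For the record, the whole workflow is about three commands. A sketch assuming one subdirectory per program under ~/dotfiles (Stow’s default target is the parent directory, i.e. ~):

```
cd ~/dotfiles
git init && git add . && git commit -m "initial dotfiles"
stow vim   # symlinks ~/dotfiles/vim/.vimrc -> ~/.vimrc
stow zsh
```

On a new machine it’s clone, then `stow` each package — the repo stays the single source of truth while the home directory just holds symlinks.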
I already have Ansible to manage my systems, and I like to have the same base between my PC and my server to build muscle memory.
If I was managing a pc fleet I would consider NixOS, but I don’t see the appeal right now.
Okay, but why not create more work for yourself by rebuilding everything from scratch?
Going into spring/summer that’s ideal; I wanna go places, do things. Mid-winter, I’m feature-creeping till something breaks.
Never run:
docker compose pull
docker compose down
docker compose up -d

Right before the end of your day. Ask me how I know 😂
compose up will automatically recreate containers with newer images if new ones were pulled, so there is no need for compose down, btw.

You’re right. I got in the habit of doing that because I’m endlessly tweaking my .env files and I don’t think those reload unless you shut down first.
Right before the end of your day
Oh, gosh, I did this last evening. I didn’t check what time it was, and initiated an update on some 70 containers. I have a cron that shuts down the server in the evening, and sure enough, right in the middle of the updates, it powered off. I didn’t even mess with it and went to bed. Re-initiated the update this morning, and everything is up and running. Whew!