Frogger, played a lot of that and had to stop myself before crossing roads for a while after.
In my mind I was trying to optimize by moving in the same direction as the nearer car and sneaking in before the oncoming car got there. I’m lucky to be alive.
If PostgreSQL is also shut down, and you don’t start the backup before it’s completely stopped, it should be OK. You might need to restore to the same version of PostgreSQL and make sure it’s set up the same way. Dumping the data is safer: you get a known-good state, and it can be restored into any newer database. Grabbing the files as you suggest should be OK at least 90 percent of the time. But why risk it?
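A logical dump along these lines is what I mean (a sketch - the database name and backup path are placeholders, and the server must be running for pg_dump to connect):

```shell
# Logical dump in custom format: a known-good snapshot that
# pg_restore can load into any same-or-newer PostgreSQL version.
pg_dump --format=custom --file=/backups/mydb.dump mydb

# Restore into a fresh cluster (creates the database first):
pg_restore --create --dbname=postgres /backups/mydb.dump
```

Unlike a file-level copy, this doesn’t care about matching the exact PostgreSQL version or on-disk layout.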
From personal experience: if you’re hosting GitLab and making it available to the internet, keep it updated, or within a year your server will be super slow from hosting a crypto miner.
I switched from KeePass to vaultwarden years ago, and for my usage I wouldn’t switch back.
I needed to be able to share some passwords with other people. I think the clients are much better. I like having a website available as a backup way to access a password. It’s all one package that works well, so I don’t need a separate mechanism to synchronize between different installations. I like the easy sharing of secrets through links, rather than sending them in cleartext over email or text.
And for selfhosting I like that you only need the server for syncing newly added secrets - if vaultwarden had to be online always I’d switch back.
Yeah, pgsql and redis are probably too much to work around, and the market too small. Those for whom it could be useful probably already have an installation on a server that can be used.
For my usage it’s perfectly fine running in Python; so far there are not many daily users and not many bugs - most days nothing is reported. If I had more users, or with performance telemetry enabled, I might want Rust. Better for the environment, and I could run it on a smaller instance. That said, I believe GlitchTip is already ahead of Sentry in resource usage - I didn’t install Sentry, but I saw all the systems it needs, and that was the main reason for going with GlitchTip. I’m mostly OK with their license.
I installed it with Ansible a few months ago and it’s been solid. It’s really nice to see bug reports with so much detail.
At the same time I also connected my dev environment to it, and it’s been helpful for webdev to get errors from both the front- and backend in the same interface when adding features.
For dev it’s less useful to have the history saved, so I think a standalone binary without setup that simply accepts anything and keeps it in memory would be useful for a small audience.
It should be easier to port forward SMTP to the mailcow installation for incoming mail and only use NPM for the web interface.
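Since NPM only reverse-proxies HTTP(S), the SMTP forward has to happen at the TCP level. A minimal sketch with iptables, assuming the mailcow host sits at the placeholder internal address 192.0.2.10:

```shell
# Forward incoming SMTP (port 25) straight to the mailcow host,
# bypassing the reverse proxy entirely - NPM never sees this traffic.
iptables -t nat -A PREROUTING  -p tcp --dport 25 -j DNAT --to-destination 192.0.2.10:25
# Rewrite the source so replies route back through this box.
iptables -t nat -A POSTROUTING -p tcp -d 192.0.2.10 --dport 25 -j MASQUERADE
```

You’d repeat the same pattern for submission (587) and IMAPS (993) if those also need to reach mailcow from outside.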
If NetBird has enough DNS support, you might be able to set up all the mailcow-recommended records there, so you get autodiscovery from mail clients on the NetBird VPN.
Incoming mail is pretty easy to get working anywhere, but outgoing is restricted if your IP address is in any way suspicious. Using SendGrid, AuthSMTP, or something similar is the easy way.
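With Postfix, pointing outgoing mail at such a relay is a handful of settings - a sketch, assuming SendGrid on the submission port and credentials already written to /etc/postfix/sasl_passwd:

```shell
# Route all outgoing mail through an authenticated relay.
postconf -e 'relayhost = [smtp.sendgrid.net]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_tls_security_level = encrypt'
# Compile the credentials file and pick up the new config.
postmap /etc/postfix/sasl_passwd
systemctl reload postfix
```

The brackets around the hostname tell Postfix to skip the MX lookup and connect to that host directly.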
For the hardcore, finding a VPS with a company that blocks outgoing SMTP by default but will unblock it if you convince them you’re responsible can be fun and/or frustrating. You’ll have a mail relay there for outgoing email at minimum, but you can also receive incoming email via that server. The smallest possible server should be enough.