For what it’s worth, you can convert the database to Postgres if you want. I tried it a few weeks ago and it went flawlessly.
https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/db_conversion.html
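If it helps, the conversion boils down to a single `occ` command. A sketch, with the target database name, user, and host as placeholders you’d swap for your own:

```shell
# Run as the web server user from the Nextcloud install directory.
# Converts the current database to PostgreSQL ("pgsql"), including
# tables from all installed apps (--all-apps). Nextcloud goes into
# maintenance mode while it runs, and it prompts for the new DB
# user's password unless you pass --password.
sudo -u www-data php occ db:convert-type --all-apps pgsql nextcloud 127.0.0.1 nextcloud_db
```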
Yeah, I’ve been using it for about a year and a half or so on my main devices and it’s been wonderful. I’ll likely go down the list of supported providers from the gluetun docs and decide from there. Throwing my torrents and all that behind a VPN was the catalyst for signing up, so I’ll continue to look for that support first; everything else is secondary.
I’m pretty sure it’s entirely disabled. Their announcement post says it’s being removed and doesn’t call out any exceptions.
I run my clients through a gluetun container with port forwarding set up, and ever since their announced end-of-support date (July, I think?) I’ve had 0B uploaded on all of my trackers.
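For reference, gluetun’s port forwarding is driven by environment variables. A minimal sketch (the provider name and key are placeholders, and exact variable names can differ between gluetun versions, so check the wiki for yours):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      VPN_SERVICE_PROVIDER: protonvpn   # placeholder; use your provider
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: yourkeyhere
      VPN_PORT_FORWARDING: "on"   # request a forwarded port from the provider
```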
E: realized you may be asking about proton, oops
Wow this is great. I’ve been having trouble getting exit nodes working properly with these two. Sad that mullvad dropped port forwarding though so I’m not sure if I’ll stay with them.
I thought about setting one up for my main server because every time the power went out I’d have to reconfigure the BIOS for boot order, virtualization, and a few other settings.
I’ve since added a UPS to the mix, but ultimately the fix was replacing the CMOS battery lol. Had I put one of these together it would be entirely unused these days.
It’s a neat concept and if you need remote bios access it’s great, but people usually overestimate how useful that really is.
Why do you think AdGuard is better than Pihole? I’m not upset with the job Pihole is doing but always looking for improvements.
I’m assuming you installed it directly to the container vs running docker in there?
I have been debating making the jump from docker in a VM to a container, but I’ve been maintaining Nextcloud in docker the entire time I’ve been using it and haven’t had any issues. The interface can be a little slow at times, but I’m usually not in there for long. I’m not sure it’s worth it to have to essentially rearchitect my setup for that.
All that aside, I also map an NFS share into my docker container that stores all my files on my NAS. That could be what causes the interface slowness I sometimes see, but last time I looked into it there wasn’t a non-hacky way to mount a share into an LXC container. Has that changed?
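For what it’s worth, the usual approach these days (assuming Proxmox) is to mount the NFS share on the host and bind-mount it into the container. The container ID and paths here are placeholders:

```shell
# On the Proxmox host: mount the NFS share first (via /etc/fstab or a
# storage entry), then bind-mount it into container 101 as /mnt/nas.
pct set 101 -mp0 /mnt/pve/nas,mp=/mnt/nas
```

One caveat: with unprivileged containers you may still need UID/GID mapping for write access, which is where the “hacky” reputation comes from.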
Yikes! I pay a couple bucks more for uncapped gigabit. I’m fortunate in that there’s two competing providers in my area that aren’t in cahoots (that I can tell.) I much prefer the more expensive one and was able to get them to match the other’s price.
My wife has been dropping hints she wants to move to another state though and I’m low key dreading dealing with a new ISP/losing my current plan.
I host forgejo internally and use that to sync changes. .env and data directories are in .gitignore (they get backed up via a separate process).
All the files are part of my docker group, so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart); any user that needs to manipulate containers would have the appropriate sudo access.
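A minimal sketch of one of those unit files, assuming a compose stack living at /opt/stacks/myapp (the path and name are placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp docker compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/stacks/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now myapp` brings it up at boot, and stop/start/restart work as described.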
It’s only me that does all this though, I set it up this way for funsies.
Agreed. I haven’t come across any instances I care to participate in that have that enabled though.
This is ultimately why I decided to roll my own instance. I’m keeping my backup here though in case I mess something up, but full control is nice to have.
@synae@lemmy.sdf.org is correct, you can pass the values through that part of the UI. I used to do it that way and had Portainer watching my main branch to auto pull/deploy updates but recently moved away from it because I don’t deploy everything to 1 server and linking Portainer instances together was hit or miss for me.
Edit: I just deployed it like this (I hit deploy after taking the screenshot) and confirmed both inside the container that it sees everything, as well as checking where Portainer drops the files on disk (it uses stack.env).
I don’t know why I did all that, but do with it what you will lol
This looks great. Gonna give it a whirl this weekend
You can already do this. You can specify an env file or use the default .env file.
The compose file would look like this:

```yaml
environment:
  PUBLIC_RADARR_API_KEY: ${PUBLIC_RADARR_API_KEY}
  PUBLIC_RADARR_BASE_URL: ${PUBLIC_RADARR_BASE_URL}
  PUBLIC_SONARR_API_KEY: ${PUBLIC_SONARR_API_KEY}
  PUBLIC_SONARR_BASE_URL: ${PUBLIC_SONARR_BASE_URL}
  PUBLIC_JELLYFIN_API_KEY: ${PUBLIC_JELLYFIN_API_KEY}
  PUBLIC_JELLYFIN_URL: ${PUBLIC_JELLYFIN_URL}
```
And your .env file would look like this:

```
PUBLIC_RADARR_API_KEY=yourapikeyhere
PUBLIC_RADARR_BASE_URL=http://127.0.0.1:7878
PUBLIC_SONARR_API_KEY=yourapikeyhere
PUBLIC_SONARR_BASE_URL=http://127.0.0.1:8989
PUBLIC_JELLYFIN_API_KEY=yourapikeyhere
PUBLIC_JELLYFIN_URL=http://127.0.0.1:8096
```
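If you want to sanity-check that the values actually get substituted, compose can render the resolved file without starting anything:

```shell
# Prints the compose file with ${VARS} replaced by values from .env
docker compose config
```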
This is how I do all of my compose files, and then I throw .env in .gitignore and throw it all into a local forgejo instance.
I don’t do it all in one compose file out of preference, but as others have said: Gluetun + your preferred torrent client, with all of the client’s networking going through Gluetun. I’ve been running this way with deluge for a while now and it’s been solid as a rock.
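A rough sketch of that wiring (image tags and ports are placeholders, and gluetun’s provider variables depend on your VPN):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      VPN_SERVICE_PROVIDER: yourproviderhere
    ports:
      - "8112:8112"   # deluge web UI is published on the gluetun container
  deluge:
    image: lscr.io/linuxserver/deluge
    network_mode: "service:gluetun"   # all deluge traffic rides the VPN
    depends_on:
      - gluetun
```

The gotcha people hit is that any port for a service behind gluetun has to be published on the gluetun container, not on the client’s own service.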
Are these people you trust? I would do Jellyfin and expose it to them via tailscale. Might be annoying for them to have to run tailscale but no chance I’m serving media directly from my house.
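If you go that route, newer Tailscale versions can also front Jellyfin over HTTPS on your tailnet so they just visit a URL. The port below is Jellyfin’s default; check `tailscale serve --help` on your version, since the syntax has changed over time:

```shell
# Serve local port 8096 (Jellyfin) to the tailnet over HTTPS, in the background
sudo tailscale serve --bg 8096
```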
I pretty much always leave stuff seeding once I get it these days. Ever since I bumped the disk space on my NAS it made it a lot easier to leave stuff instead of jockeying for space on disk.
My higher ratio items are all old shit like You Got Served lmao
A lot of people self host so they are in control. This is Plex taking away that control, plain and simple.
I don’t know how many people host completely legitimately acquired content in their libraries, but your reasoning is such a cop out. Are you gonna defend them if they start scanning libraries for potentially illegally obtained content and blocking that because it could “put them in legal hot water?”