Did you know most coyotes are illiterate?

Lemmy.ca flavor

  • 2 Posts
  • 37 Comments
Joined 8 months ago
Cake day: June 7th, 2025


  • ZFS doesn’t require more RAM (or at least not meaningfully more), it just uses it if you have it. The ARC can be capped in the configuration if you don’t want it to do that (see the sketch below). I think on Linux other filesystems just use the native page cache instead, so it’s basically the same idea as the ARC, just less visible. Also, doesn’t ZFS have RAIDZ expansion now? Actually, a lot of this article smells funny… probably because the author just happens to know BTRFS better. Doesn’t BTRFS still have the RAID5/6 write hole? I wonder what sort of setup they’re using if they’re running it on a NAS.
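
    For the ARC cap, here’s a minimal sketch assuming the standard OpenZFS module parameter path on Linux (writing needs root, and the 4 GiB figure is just an example):

    ```python
    # Minimal sketch: inspect and temporarily cap the ZFS ARC on Linux.
    # Assumes the standard OpenZFS module parameter path; writing requires root.
    ARC_MAX = "/sys/module/zfs/parameters/zfs_arc_max"

    def get_arc_max() -> int:
        """Current cap in bytes; 0 means 'use the built-in default'."""
        with open(ARC_MAX) as f:
            return int(f.read().strip())

    def set_arc_max(limit_bytes: int) -> None:
        """Apply a new cap immediately (lost on reboot).

        For a persistent cap, put 'options zfs zfs_arc_max=<bytes>' in
        /etc/modprobe.d/zfs.conf instead.
        """
        with open(ARC_MAX, "w") as f:
            f.write(str(limit_bytes))

    if __name__ == "__main__":
        print("current zfs_arc_max:", get_arc_max())
        # set_arc_max(4 * 1024**3)  # e.g. cap the ARC at 4 GiB
    ```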





  • Yeah, that sounds about right. It also depends on which indexers you’re using, as I imagine the more public indexers will have a higher chance of getting takedowns from trolls. It’s worth noting that I believe the running theory is that a lot of 2021-2023 articles were voluntarily deleted to save space, causing issues even for .nzbs that were never hit by takedowns. It’s also theorized (and sometimes outright stated) that providers silently delete data that is rarely or never accessed to save space, so that can be a random issue too.

    Personally, I lean more into torrent technology because usenet can be fickle for these reasons even if you’re in the secret indexers, whereas if you’re in at least some semi-good private torrent trackers you’ll never have completion issues (just potentially slower downloads). I also feel like usenet’s scalability, future, and pricing are sort of uncertain.





  • I don’t want to write up a whole paper at the moment, but I’ll note that you really shouldn’t be trusting any cloud provider with your data, because you should always be fully encrypting your data before they get their hands on it. Plasma Vaults (if you use KDE) are one way to do this, or you can use something like Cryptomator, gocryptfs, etc. (rough gocryptfs sketch below). Basically how it works is that you store files encrypted in one directory (/home/me/Encrypted), and that data is transparently decrypted at another mountpoint for your regular usage (/home/me/Unencrypted). Modifications in the Unencrypted directory automatically propagate to the Encrypted directory through the use of magic. The cloud provider only syncs the Encrypted directory, and without the key they know almost nothing about your data.

    Given this sort of workflow, you can store your data anywhere, as long as you have a nice (open-source) way of syncing to that provider that doesn’t introduce any further vulnerabilities.
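
    As a rough illustration of that flow with gocryptfs (the directory paths are the same made-up ones as above, and the Python wrapper is just for sketching, you’d normally run these commands directly):

    ```python
    # Rough sketch of the encrypted-directory workflow with gocryptfs.
    # Paths are placeholders; requires gocryptfs and FUSE to be installed.
    import os
    import subprocess

    CIPHER = "/home/me/Encrypted"    # what the cloud client actually syncs
    PLAIN = "/home/me/Unencrypted"   # transparent decrypted view for daily use

    os.makedirs(CIPHER, exist_ok=True)
    os.makedirs(PLAIN, exist_ok=True)

    # One-time setup: initialize the encrypted directory (prompts for a password).
    subprocess.run(["gocryptfs", "-init", CIPHER], check=True)

    # Each session: mount the decrypted view, then work in PLAIN as usual...
    subprocess.run(["gocryptfs", CIPHER, PLAIN], check=True)

    # ...and unmount when done. Only the ciphertext in CIPHER ever leaves the machine.
    subprocess.run(["fusermount", "-u", PLAIN], check=True)
    ```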


  • Absolutely not trusting this. Uninstalling until we know more, and ideally just getting a different solution entirely. A new account tried to impersonate Catfriend1 directly at first, then switched to researchxxl when someone called it out (both accounts are brand new). Meanwhile the original Catfriend1 has provided no information about this, and we only have the new person’s word as to what’s going on. There are way too many red flags here.


  • I just want to note that Jellyfin MPV Shim exists and can do most of this MPV stuff while still getting the benefits of Jellyfin. You’re putting a lot of emphasis on Plex-specific limitations (which Jellyfin obviously doesn’t have) and on transcoding (which is a FEATURE that stopgaps an improper media player setup, not a limitation of Jellyfin).

    Pretty much every single “Pro” applies just as much to Jellyfin MPV Shim as to pure MPV, which mainly leaves you with the cons. Also, as another commenter said, I set my Jellyfin up so that my friends and family can use it, and that’s its primary value to me. I feel like a lot of this post should be re-oriented towards MPV as a great media player, not against Jellyfin as a media platform.


  • I don’t think it will be a big deal to transcode MP3 to Opus, as long as you’re okay with the audio files being, at least in theory, slightly degraded. Every time a lossy encoder has a go at the files (especially a different encoder), it leaves little artifacting marks across the waveforms, typically visible as little “blocks”. Are they audible? Doubtful. If you want to keep a neat and high-quality library I’d recommend collecting FLAC next time around.

    Also, this won’t work on Win11, and I don’t think you can make it transcode MP3, but if anyone happens to have slightly different requirements I’ll plug https://gitlab.com/beep_street/mkopuslibrary, which I use to keep my FLAC library in sync with a parallel Opus library for mobile use. (If you just want a one-off DIY MP3-to-Opus pass, there’s a rough sketch below.)
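
    The quick-and-dirty DIY version is just a loop over the library calling ffmpeg with libopus; a rough sketch (paths and the 128k bitrate are placeholders):

    ```python
    # Rough sketch: batch-transcode an MP3 library to Opus with ffmpeg/libopus.
    # Source and destination paths are placeholders; requires ffmpeg built with libopus.
    import subprocess
    from pathlib import Path

    SRC = Path("/music/mp3")
    DST = Path("/music/opus")

    for mp3 in SRC.rglob("*.mp3"):
        out = DST / mp3.relative_to(SRC).with_suffix(".opus")
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run([
            "ffmpeg", "-n",          # -n: don't overwrite existing files
            "-i", str(mp3),
            "-map_metadata", "0",    # carry the tags over
            "-c:a", "libopus",
            "-b:a", "128k",          # plenty for an already-lossy source
            str(out),
        ], check=True)
    ```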



  • If you’re only at 10mbps upload, you’ll have to be very careful about selecting microsized 1080p (~4-9mbps) or quality 720p (~6-9mbps) encodes, and even then I really wouldn’t bother (quick headroom math below). If you’re not able to get any more upload speed from your plan, then you’ll either have to cancel the idea or host everything from a VPS.

    You can go with a VPS and maybe make people chip in for the storage space, but in that case I’d still lean towards either microsized 1080p encodes or 1080p WEB-DLs (which are inherently efficient for the size) if you want to have a big content base without breaking the bank. E.g., these prices look pretty doable if you’ve got people who can chip in: https://hostingby.design/app-hosting/. I’m not very familiar with what VPS options are available or reputable, so you’ll have to shop around. Anything with a big hard drive should pretty much work, though I’d probably recommend at least a few gigs of RAM just for Jellyfin (my long-running local instance is taking 1.3GB at the moment; no idea what the usual range might be). Also, you likely won’t be able to transcode video, so you’ll have to be a little careful about what everyone’s playback devices support.

    Edit: Also, if you’re not familiar with microsized encodes, look for groups like BHDStudio, NAN0, hallowed, TAoE, QxR, HONE, PxHD, and such. I know at least BHDStudio, NAN0, and hallowed are well-regarded, but intentionally microsizing for streaming is a relatively new concept, and it’s hard to sleuth out who’s doing a good job and who’s just crushing the hell out of the source and making a mess - especially because a lot of these groups don’t even post source<->encode comparisons (I can guess why). You can find a lot of them on TL, ATH, and HUNO, if those acronyms mean anything to you. Otherwise, a lot of these groups post completely publicly as well, since most private trackers do not allow microsizing.
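
    For the curious, the back-of-the-envelope math behind “I really wouldn’t bother” looks something like this (the bitrates are illustrative, and the 20% headroom figure is just my own rule of thumb):

    ```python
    # Back-of-the-envelope: how many streams fit in a given upload pipe?
    # Leave ~20% headroom for bitrate spikes, overhead, and everything else on the line.
    UPLOAD_MBPS = 10
    HEADROOM = 0.8   # fraction of the pipe you actually want to commit

    def max_streams(encode_mbps: float) -> int:
        return int(UPLOAD_MBPS * HEADROOM // encode_mbps)

    for label, mbps in [("microsized 1080p", 4), ("quality 720p", 6), ("typical 1080p WEB-DL", 8)]:
        print(f"{label} (~{mbps}mbps): {max_streams(mbps)} concurrent stream(s)")
    # Even the smallest encodes only leave room for ~2 viewers; anything bigger drops to 1.
    ```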


  • Screen-sharing is part of chat apps nowadays. You’re fully within your rights to stay on IRC and pretend that featureful chat is not the norm these days, but that doesn’t mean society is going to move to IRC with you. Like it or not, encrypted chat apps have to become even more usable for the average person for adoption to go up. This reminds me of how all the old Linux-heads insisted that gaming was for children and that Linux didn’t need gaming. Suddenly now that Linux has gaming, adoption is going way up - what a coincidence.

    Edit: Also for the record, I have a tech-savvy friend who refuses to move to Signal until there are custom emoji reactions, of all things. You can definitely direct your ire towards these people, but the reality is some people have a certain comfort target, and convincing them to settle for less is often harder than improving the app itself.


  • Yeah, h264 is the base codec (also known as AVC); x264 is the dominant encoder that produces that codec. So the base BDs are just plain h264, and remuxes take that h264 and put it into an mkv container. Colloquially, people tag WEB-DLs and BDs/remuxes as “h264” because the stream is raw/untampered-with, and anything that’s been re-encoded by a person as “x264”. Same thing for h265/HEVC and x265, and for h266/VVC and x266.


  • As an idea, I use an SSD as the “Default Download Directory” within qBittorrent itself, and then qB automatically moves the finished download to an HDD. I do this because I want the write into my ZFS pool to be sequential, since ZFS has no defragmentation capabilities.

    Hardlinks are only important if you want to continue seeding the media in its original form and also have a cleaned-up/renamed copy in your Jellyfin library (tiny hardlink demo below). If you’re going to continue to seed from the HDD, it doesn’t matter that the initial download happens on the SSD; the *arr stack will make the hardlink only after the download is finished.
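
    If hardlinks are new to anyone, the key property is that both names point at the same data on disk, so the renamed library copy costs no extra space and survives the torrent copy being deleted. A tiny demo (filenames are made up):

    ```python
    # Tiny hardlink demo: two directory entries, one set of data on disk.
    import os

    os.makedirs("downloads", exist_ok=True)
    os.makedirs("library", exist_ok=True)

    with open("downloads/Some.Show.S01E01.mkv", "w") as f:
        f.write("pretend this is video data")

    # What the *arr stack effectively does after the download completes:
    os.link("downloads/Some.Show.S01E01.mkv", "library/Some Show - S01E01.mkv")

    same = os.path.samefile("downloads/Some.Show.S01E01.mkv",
                            "library/Some Show - S01E01.mkv")
    print("same underlying file:", same)  # True: no extra disk space used

    # Deleting the torrent copy later does NOT delete the library copy;
    # the data is only freed once every link to it is gone. Note that hardlinks
    # only work within a single filesystem, so both paths must live on the same pool.
    ```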


  • Yep, fully agree. At least BluRays still exist for now; building a beefy NAS and collecting full BluRay discs lets us brute-force picture quality through sheer bitrate, at least. There are a number of other problems to think about before we even get to the encoder stage, such as many (most?) 4k movies/TV shows being mastered in 2k (aka 1080p) and then upscaled to 4k. Not to mention a lot of 2k BluRays are upscaled from 720p! It just goes on and on. As a whole, we’re barely using the capabilities of true 4k today. Most of this UHD/4k “quality” craze is being driven by HDR, which has its own share of design/cultural problems. The more you dig into all this stuff, the worse it gets. 4k is billed as “the last resolution we’ll ever need”, which IMO is probably true, but they don’t tell you that the 4k discs they’re selling you aren’t really 4k.


  • The nice thing is that Linux is always improving and Windows is always in retrograde. The more users Linux has, the faster it will improve. If the current state of Linux is acceptable enough for you as a user, then it should be possible to get your foot in the door and ride the wave upwards. If not, wait for the wave to reach your comfort level. People always say <CURRENT_YEAR> is the year of the Linux desktop but IMO the real year of the Linux desktop was like 4 or 5 years ago now, and hopefully that captured momentum will keep going until critical mass is achieved (optimistically, I think we’re basically already there).