That’s exactly why they’re changing the license. The problem with SwanStation is the developers. RetroArch in general has some pretty horrible people maintaining it, and this isn’t the first time they’ve harassed an emulator dev over nothing.
C has not aged well, despite its popularity in many applications. I’m grateful for the incredible body of work that kernel developers have assembled over the decades, but there are some very useful aspects of Rust that might help alleviate some of the hurdles aspiring contributors face. This was not a push by Rust evangelists, but an attempt to enable modernization efforts, at least for new driver development. If it doesn’t work out, fair enough, but I’m grateful for the willingness, especially on Linus’s part, to try something new.
If I may ask: how practical is monitoring / administering rootless quadlets? I’m running rootless podman containers via systemd for home use, but splitting the single rootless user into multiple has proven to be quite the pain.
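For anyone unfamiliar, a rootless quadlet is just a file under ~/.config/containers/systemd/, roughly like this (the name, image, and ports are placeholders, not my actual setup):

```ini
# ~/.config/containers/systemd/myapp.container
[Unit]
Description=Example rootless container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```

Quadlet generates a myapp.service from that on daemon-reload, so it’s managed with systemctl --user like any other user unit.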
That’s not entirely accurate. Google’s influence on the web has grown even beyond the web browser engine majority share (which is bad enough in itself). They offer one of the most popular web frameworks and run several of the most popular websites. There is almost no way to compete when the market leader is simultaneously the developer and the major user of new features. Of course everyone else is going to switch to using your browser engine. What else are they gonna do? There are even websites now that just check the user agent string and refuse service if you don’t use a Chromium-based browser. Shit’s fucked.
Going by your initial comment, the whole premise of this discussion was technological progress and growth. That means refining existing models and training new ones, which is going to cost a lot of energy. The way this industry is going, even privacy-conscious usage of open source models will contribute to the insane energy usage by creating demand and popularizing the technology.
Do we really need to grow our energy consumption as a society by such a disproportionate amount?
With Blu-ray rips, I don’t really see any way to avoid that unfortunately, unless someone else has already added the hashes for your release. Most people use it to scan their encoded releases, which will (in most cases) have already been added to AniDB by the release group. I’m a bit surprised, though, that none of your rips are recognized. Have you checked the AniDB pages for your series to see if anyone uploaded hashes for Blu-ray rips?
Grouping seasons into a series folder doesn’t work well in some cases, because that’s not the way they are released in Japan. A new season is (most of the time) effectively an entirely new show entry. Show seasons are mostly a North American thing. No matter which software you use, there are always going to be some minor issues if you group seasons into one entry.
Shoko compares a file’s ED2K hash against the AniDB database. The filename doesn’t matter for automatic detection. Have a look at the log to see if there are any issues. It’s entirely possible that AniDB just doesn’t have the hashes for the raw Blu-ray rip. In that case you can either manually link them in Shoko, connecting the AniDB episode ID to the file hash, or create new file entries on AniDB with your specific hashes.
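If you want to check what Shoko sees, the ED2K hash is easy to compute yourself. A rough Python sketch (note that MD4 may require enabling OpenSSL’s legacy provider on newer distros, and files that are an exact multiple of the chunk size are handled differently by the two ED2K variants; this uses the simpler one):

```python
import hashlib

CHUNK = 9728000  # ED2K chunk size in bytes

def ed2k_hash(path):
    """Rough ED2K implementation; AniDB identifies files by this hash."""
    md4s = []
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            # per-chunk MD4 digest
            md4s.append(hashlib.new("md4", block).digest())
    if len(md4s) == 1:
        # files up to one chunk hash to a single MD4 of their contents
        return md4s[0].hex()
    # larger files: MD4 over the concatenated per-chunk digests
    return hashlib.new("md4", b"".join(md4s)).hexdigest()

print(ed2k_hash("episode01.mkv"))  # placeholder filename
```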
Shoko also has rate limits. The problem is that AniDB does rate limiting in an extremely stupid way for a UDP API and doesn’t even have the decency to define clear time limits.
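In practice, all a client can do is pick a conservative interval and back off hard on errors. Something like this (the numbers and payloads here are guesses for illustration, not documented limits):

```python
import time

class ConservativePacer:
    """Spaces out requests to an API with vague rate limits."""

    def __init__(self, min_interval=4.0):
        self.min_interval = min_interval  # seconds between requests (a guess)
        self._last = 0.0

    def wait(self):
        # Sleep until at least min_interval has passed since the last request.
        delta = time.monotonic() - self._last
        if delta < self.min_interval:
            time.sleep(self.min_interval - delta)
        self._last = time.monotonic()

pacer = ConservativePacer()
for request in ["FILE size=...", "MYLISTADD ..."]:  # stand-in payloads
    pacer.wait()
    print("would send:", request)  # placeholder for the actual UDP send
```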
The only thing that’s slow is dnf’s repository check and some migration scripts in certain Fedora packages. If that’s the price I need to pay to get seamless updates and upgrades across major versions for nearly a decade, then I can live with that.
I tried using ConnMan to set up a WireGuard connection once. It was not a good experience and ultimately led nowhere due to missing feature support.
If anything, he gets most of his inspiration from macOS.
The joke in the OP stops at the beginning of the joke explanation. If you just share your honest opinion like that in a shitposting community, you can’t expect everyone to “play along” with your “joke”.
Pretty sure the namespace for official images is “library” (at least it used to be). So it should be “docker.io/library/debian”, though I can’t double-check at the moment.
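E.g. pulling by the fully qualified reference (the tag here is just an example):

```sh
docker pull docker.io/library/debian:bookworm
```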
I want to do that, but not because of Flatpak. That’s incredibly far down the list of things I find offensive in my professional life. At the very least it does fulfill some sort of purpose and also doesn’t cost any money to use.
You mean hiding their public IP? I guess that’s a feature.
That’s what a firewall and a DNS service are for, respectively, imho. As long as you get an IPv6 prefix from your ISP, you can expose as many devices or services to the public as you want by just allowing incoming traffic to a listening port (rough sketch below). That was sort of the whole point of having a large enough address space when moving away from v4.

Maybe it’s just me, but reading stuff about “private AI” on a website where its relation to the product is not immediately obvious makes me question their legitimacy.
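The sketch mentioned above, as nftables config (the exposed port is a placeholder for whatever you actually run):

```
table ip6 filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iifname "lo" accept
    # expose one service (here: HTTPS) to the public internet
    tcp dport 443 accept
  }
}
```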
The more I look at their site, the more it reads like a sales pitch for IPv6, which sounds kind of expensive at $6-10 a month.
What problem does this solve? Do ISPs not provide IPv6 prefixes anymore?
I wouldn’t recommend Docker for a production environment either, but there are plenty of container-based solutions that use OCI-compatible images just fine, and they are very widely used in production. Having said that, plenty of people run Docker images in a homelab setting and they work fine. I don’t like running rootful containers under a system daemon, but calling it a giant mess doesn’t seem fair in my experience.