• 1 Post
  • 32 Comments
Joined 1 year ago
Cake day: June 21st, 2023


  • “This hardware works fine and even has compatible software that it works great with. But I’m going to prefer the broken software for other reasons. And that means it’s the hardware’s fault.”

    Software that is built to be compatible with a wide variety of hardware should be compatible with a wide variety of hardware.

    If software can’t handle a 16.5:16 aspect ratio, then that’s bad software. I don’t care how weird of a niche thing that is… just make your software abstract enough to handle those cases.

    It’s 2024; any resolution/aspect ratio/DPI combo should be supportable. There’s enough variety of monitors out there that software should handle layouts on the fly rather than relying on a predefined list of supported configurations.
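
    To make that concrete, here is a rough sketch (my own toy numbers, not from any particular toolkit) of what “abstract enough” looks like: derive the layout from the actual width, height, and DPI you’re handed instead of matching against a whitelist of blessed aspect ratios.

        # Toy sketch: nothing here is tied to a "known" aspect ratio like 16:9.
        def layout_params(width_px: int, height_px: int, dpi: float) -> dict:
            aspect = width_px / height_px      # 16:9, 16.5:16, whatever -- just a number
            scale = dpi / 96.0                 # 96 DPI as a nominal baseline
            # Size panels proportionally, clamped via the scale factor, never hardcoded pixels.
            sidebar_px = min(int(width_px * 0.2), int(320 * scale))
            return {"aspect": aspect, "ui_scale": scale, "sidebar_px": sidebar_px}

        print(layout_params(1920, 1080, 96))   # a common desktop monitor
        print(layout_params(1650, 1600, 120))  # the odd 16.5:16 panel, handled the same way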



  • Should they? Yes. They should also be searching for previous bug reports, and I’m sure a lot of people do. But if you have enough users, even if only 1% of them don’t follow good reporting practices, you wind up with a lot of duplicate or bad reports.

    There are plenty of blog posts out there that basically boil down to how grueling open source work can be, because users are often aggressive in their demands.

    But this is a prime example of how Debian “stable” doesn’t mean “no crashes”; it means “unchanging,” which means any bugs and crashes will stick around for the whole release.



  • Because the dev gets a huge number of bug reports for bugs that were resolved 5 versions ago.

    They actually asked Debian to stop shipping the screensaver, because they were getting tired of saying “this is already fixed, Debian is just not going to ship the fix for another year”. Debian didn’t want to stop, so the dev added the nag screen, because it was the only way to stem the flood of bug reports for things that were already fixed.



  • You’re right. There are multiple definitions of the word stable, and “unchanging” is a valid one of them.

    It’s just that everywhere else I’ve seen it in computing, it refers to a build of something being not-crashy enough to actually ship. “Can’t be knocked over” sort of stability. And everyone I’ve ever talked to outside of Lemmy has assumed that was what “stable” meant to Debian. But it doesn’t. It just means “versions won’t change, so you won’t have version compatibility issues, but you’ll also be left with software that’s several months to a year old and wasn’t even up to date when the release came out. At least you don’t have to think about compatibility issues!”


  • “Debian aims for rock solid stability”

    To be clear, Debian “stability” refers to “unchanging packages”, not “doesn’t crash.” Debian would rather ship a known bug for a year than update the package; only explicit security bugs get fixes, and even then only for certain packages.

    So if you hit a crash in Debian, you will keep hitting that crash until the next version of Debian a year or so from now. That’s not what I’d consider “stable”, but rather “consistent”.


  • IMO it doesn’t matter. People don’t read news on updates. Should they? Yes. Do they? No. Should they have to? Also no.

    Linus’s point is to never blame the end user for something the kernel changed. If you want software to have widespread adoption, adding homework to simple updates isn’t how you do it. People don’t want a hobby or something to babysit, they want an operating system. Debian will go out of their way to make in-release updates go as smoothly as possible, but they’re willing to throw out entire parts of functioning packages between releases.

    But this isn’t even about breaking things for the end user. This will create excessive amounts of noise on the upstream repo. People will say “Hey! My keepassxc broke!” and report it to keepassxc, not to Debian. To which keepassxc just has to constantly reply “no, Debian changed this on you, this is not a bug.” If Debian had to deal with the fallout of their own decisions, I would say “yeah, I’m not sure I agree with the decision, but oh well”… but they are increasing the workload for other teams.

    It is already happening. The Debian dev’s stance is “This will be painful for a year.” But it will be painful for keepassxc, NOT Debian. The keepassxc devs asked them not to do this. Debian’s response might as well be “I’m inflicting this pain on you, even though you’ve asked me not to. But on the plus side, it won’t hurt me at all, and it will only last a year for you.” If they really have that much disdain for the project, they should just stop packaging it altogether.

    So yeah, Debian has the legal right to do whatever they want because keepassxc is open source. But “just because I can, and you can’t legally stop me, and it’s extra work for you, not me” is kind of a jerk move. This is what drives FOSS contributors to get burnt out and abandon otherwise good projects.


  • It’ll also break all your keepassxc plugins soon, because Debian version-to-version compatibility is not a priority. They also don’t care if their breaking something triggers a ton of upstream bug reports, because it will only “be painful for a year”.

    Linus has a strict “don’t break userspace” policy for the kernel; Debian has a “break things whenever you want, and just blame the user for not reading the NEWS file” policy.


  • Definitely make sure you think through all the physical security implications of having your house automatically unlock in any scenario.

    Having the house auto-unlock when you get home on a bicycle sounds convenient until, as you point out, the bike gets stolen and now the thief has a convenient way to unlock your house. So you would not want that.

    You would definitely not want the house to STAY unlocked when something like a tag is in range. If your kid is home alone, you want them to be able to re-lock the house (or in general, you want to be able to lock your house while the kid is home).

    Whatever solution you wind up with, you are going to be trading physical security for ease of use (and a complicated, fun project). Be safe. Make sure the tradeoffs are actually thought through and worth it.
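
    For what it’s worth, here is a hypothetical sketch (names and thresholds are mine, not from any real home-automation platform) of the distinction above: unlocking is a one-shot action triggered by an arrival event, never a state tied to “tag is in range,” so the house can always be re-locked while the tag (or the kid) is still home.

        import time

        class FrontDoor:
            def __init__(self) -> None:
                self.locked = True
                self._last_auto_unlock = 0.0

            def lock(self) -> None:
                # A manual lock always wins; nothing below ever fights it.
                self.locked = True

            def on_arrival(self, tag_id: str, trusted_tags: set[str]) -> None:
                # Fire once per arrival event; a tag merely *being* nearby does nothing.
                now = time.time()
                if tag_id in trusted_tags and now - self._last_auto_unlock > 300:
                    self.locked = False
                    self._last_auto_unlock = now

        door = FrontDoor()
        door.on_arrival("bike-tag", {"bike-tag"})  # arriving home: unlocks once
        door.lock()                                # kid re-locks; it stays locked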



  • I use wayland, but be warned that there are downsides.

    X11 is 40 years old. Which means that even though it has 40 years of bad decisions baked into it, it also has 40 years of features and tooling built around it.

    And in some cases, things are purposefully broken in the name of security, as mentioned above. Writing a keylogger on X11? Easy. Every app can watch the keyboard even when it isn’t in focus. So if I type my password into Firefox, Discord can listen. Hope you don’t have any malicious apps just patiently listening to all your keystrokes.
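
    As a rough illustration (this assumes the third-party python-xlib package, and the attribute names are from memory), any X11 client can poll global input state such as the pointer position and the modifier/button mask without having focus, and XQueryKeymap exposes the whole keyboard the same way. A Wayland compositor simply doesn’t hand this information to unprivileged clients.

        import time
        from Xlib import display  # third-party: python-xlib

        d = display.Display()     # connects to whatever $DISPLAY points at
        root = d.screen().root
        for _ in range(10):
            p = root.query_pointer()  # global pointer position + modifier/button mask
            print(p.root_x, p.root_y, bin(p.mask))
            time.sleep(0.5)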

    Getting rid of input listening sounds great! … Except for the concept of global keybinds. Have a push-to-talk button in Discord that needs to work while you’re playing a game? Sorry, the game is in focus, so Discord can’t see ANY of your input, including the push-to-talk button. Different Wayland compositors have different ways of handling this through their portals. Some don’t handle it at all, and the ones that do don’t always have great solutions.

    One major issue that has been in Wayland debate hell: how multi-window apps keep track of where their windows are. For example, GIMP. The editor window is separate from the toolbox, which is separate from the layers view. GIMP on X11 knows where all of its windows are because it can see everything; if you wanted GIMP to save all the window positions, it could. GIMP on Wayland has no idea where its windows are relative to each other. Each window knows its own size and shape, and that’s it. It doesn’t know where on the screen it is, which means it doesn’t know where its other sub-windows are relative to itself, which means GIMP on Wayland can’t really save the window positions for the next run. Wayland is working on a protocol for handling this, but it has been caught up in debate hell last I saw. This is a prime example of a thing X11 had and Wayland will someday have, but the 40-year head start and the disregard for security give X11 a huge advantage.

    Most of these problems have workarounds and solutions, but you might find yourself in a situation where you do in fact need to implement a workaround instead of having everything Just Work.

    “Better” means different things to different people. Architecture, security, and technology? Wayland is better. “Just works,” and it’s what your apps were probably built to run on, so fewer weird edge-case issues? X11 is still better, purely due to inertia. (And again, I use Wayland; I’m willing to deal with the workarounds, but you do you.)


  • The archlinux-keyring package will install a few GPG keys.

    The AUR also uses GPG keys to validate things.

    Just searching the AUR for one of the repos that Jaffa linked to in another comment…

    https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=librespot

    Here is the PKGBUILD. Note line 24:

    validpgpkeys=('EC57B7376EAFF1A0BB56BB0187F5FDE8A56219F4') ## Roderick van Domberg
    

    And I’m sure if you go through the AUR, there are plenty of packages that use this.

    Many AUR helpers (paru, yay, etc.) will either auto-download these keys for you or prompt you to import them. Even if you were to build this PKGBUILD by hand, unless you removed that line, you would have to import the key for makepkg to work. So “how does a fresh arch install wind up with GPG keys that I didn’t manually import?” … the answer is most likely AUR helpers (or you did it manually for a makepkg and just forgot).
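
    As a hypothetical sketch of what those helpers are doing under the hood (helper names and exact behavior vary, and the regex here is deliberately crude): read validpgpkeys= out of the PKGBUILD, check each fingerprint against the local keyring, and offer to import (gpg --recv-keys) whatever is missing before makepkg runs its signature check.

        import re
        import subprocess

        def missing_pgp_keys(pkgbuild_path: str) -> list[str]:
            with open(pkgbuild_path) as f:
                text = f.read()
            block = re.search(r"validpgpkeys=\(([^)]*)\)", text)
            fingerprints = re.findall(r"[0-9A-Fa-f]{40}", block.group(1)) if block else []
            missing = []
            for fpr in fingerprints:
                # gpg exits non-zero when the key isn't in the local keyring yet
                result = subprocess.run(["gpg", "--list-keys", fpr], capture_output=True)
                if result.returncode != 0:
                    missing.append(fpr)  # candidate for: gpg --recv-keys <fpr>
            return missing

        print(missing_pgp_keys("PKGBUILD"))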

    It’s also worth pointing out that GPG handles signing things, but also signature verification. These are all public keys in your system. Having public keys that have been used for signature verification is perfectly normal and kind of the point. If you had Roderick’s private key that would be weird.



  • bisby@lemmy.world to linuxmemes@lemmy.world · Old is stability

    Most people use stable to refer to something that doesn’t crash or cause issues. Something that you might call “rock solid”, which implies it’s not going to fall over. Something to put on your server because you’ll get great uptime without issues.

    Debian is one of the few places where stable might crash more than unstable, because fixes for known bugs don’t get backported into Debian unless they’re security fixes.

    I use Debian on my servers because “some testing” is nice, and the only thing I run on my servers is Docker. And ironically, I have to use a third-party repo for Docker.

    So for me, it’s a stable enough base OS, but it’s “too stable” for anything that actually runs on the servers.


  • bisby@lemmy.world to linuxmemes@lemmy.world · Old is stability

    Debian is not all about “stability” in the sense of “doesn’t crash”. Debian is all about consistency. The platform doesn’t change. That means if there is a bug that crashes the system for you… it’s going to consistently be there.

    For me, it was when stable was on kernel 3.16 and 3.18 was in testing, but the latest kernel was 3.19. And this was an era when AMD’s drivers were not fully OpenGL compliant yet, which meant games would crash. Knowing “this game will always crash until three years from now, when we finally get a newer kernel” was enough to chase me off.

    Debian’s neovim package is 0.7.2. Sid is 0.7.2. Experimental is 0.9.5… If there are any bugfixes between 0.7.2 and 0.9.5 that are critical for your workflow… too bad. If it’s not a “security” release, it’s not getting updated. You get to live with the known bug.

    “Never change anything, stick to known good versions” only works if you know 100% that the “known good version” is actually bug free. No code is bug free, so inevitably the locked-down versions in Debian will still have some flaws (and Debian doesn’t backport bugfixes, they only backport SECURITY fixes). For most use cases, the flaws will be minor enough not to matter. But inevitably, if a flaw exists, it affects SOMEONE.

    If you actually want to do any sort of complicated computing, Debian is not a great choice. If you want an unchanging base so you can run a web browser and a word processor, I’m sure it’s great.


  • bisby@lemmy.world to linuxmemes@lemmy.world · That’s LTT in the bottom

    “(although my ~15 years as a windows sysadmin probably bias my opinion)”

    So basically: it’s not any harder in Linux, but you have more than a decade of muscle memory in Windows, so it’s harder for you.

    That’s like saying “Japanese is a less efficient language than English. All of the words are different, and when I want to say a word, I have to learn it first, but in English I just know the words! English is so much better! (My 30 years speaking English probably bias my opinion.)”

    Things are certainly different, but it’s hard to compare which is “harder” for the advanced use cases.

    There’s no shame in having long term experience with one platform and having that shape your expectation about how a solution should look.


  • “apparently having all the logic inside firmware (like Nvidia does)”

    Based on this part of the quote, the Nvidia implementation keeps a lot of the functionality inside closed-source binary firmware blobs, and that includes the functionality the HDMI Forum wants kept secret. Since it’s in the closed-source firmware, this is OK: the open-source part only has to send instructions to the firmware, not contain the implementation.

    AMD has less functionality inside the firmware, which means the drivers are “more” open source. But it also means any proprietary stuff the HDMI Forum wants kept secret would have to live in the open.