• 3 Posts
  • 162 Comments
Joined 3 years ago
Cake day: January 23rd, 2022



  • It will cause a critical error during boot if the device fails to mount and wasn’t given the nofail mount option (which is not included in defaults). For more details, look in the fstab(5) man page, and for even more detail, the mount(8) man page.

    Found that out for myself when leaving my external hard drive enclosure turned off, with a formatted drive in it, caused the PC to boot into recovery mode (it was not the primary drive). I had just copy-pasted the options from my root partition, thinking I could take a shortcut instead of reading the documentation.

    There are probably other ways a borked fstab can cause a failure to boot, but that’s the one I know of from experience.
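
    For illustration, a hypothetical fstab entry for an external drive that might be absent at boot (the UUID and mount point here are placeholders; x-systemd.device-timeout just keeps the boot from stalling too long on a missing device):

    UUID=1234-ABCD  /mnt/external  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2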



  • To the feature creep: that’s kind of the point. Why have a million little configs when I could have one big one? Don’t answer that, it’s rhetorical. I get that there are use cases, but the average user doesn’t like having to tweak every component of the OS separately before getting to doom-scrolling.

    And that feature creep and large-scale adoption have inevitably led to a wider attack surface with more targets, so of course there will be more CVEs; CVE count is, by the way, a terrible metric of relative security.

    You know what has 0 CVEs? DVWA.

    You know what has more CVEs and a higher level of privilege than systemd? The Linux kernel.

    And don’t get me started on how bug hunters can abuse CVEs for a quick buck. Seriously: these people’s job is seeing how they can abuse systems to get unintended outcomes that benefit them; why would we expect CVEs to be special?

    TL;DR: that point is akin to Trump’s argument that COVID testing was bad because it led to more active cases (i.e., more cases being discovered).


  • I’m gonna laugh if it’s something as simple as a botched fstab config.

    In the past, it’s usually been the case that the more ignorant I am about a computer system, the stronger my opinions about it are.

    When I first started trying out Linux, I was pissed at it and would regularly rant to anyone who would listen, all because my laptop wouldn’t properly sleep: it would turn off, then come back on a few minutes later. It turned out the WiFi card had a power setting that was waking the computer from sleep.

    After a year of avoiding the laptop, a friend who was visiting from out of town (and uses Arch, btw) took one look at it, then diagnosed and fixed it in minutes. I felt like a jackass for blaming the Linux world for Intel’s non-free WiFi driver being shit. (In my defense, I had never needed to toggle this setting when the laptop was originally running Windows.)
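
    If anyone runs into something similar: one common culprit is wake-on-wireless-LAN, which (assuming your wireless device shows up as phy0) you can check and turn off with:

    $ iw phy phy0 wowlan show
    $ sudo iw phy phy0 wowlan disable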

    The worst part is that I’m a sysadmin; diagnosing and fixing computer problems should be my specialty. Instead I failed to put in the minimum amount of effort and just wrote the entire thing off as a lost cause. Easier than questioning my own infallibility, I suppose.









  • I have a tinkering laptop set up with Fedora; DNF is as simple as APT and friendlier, imo. I’ve switched to Nala (an APT wrapper that enables concurrent downloads) on my Debian PCs. YMMV.
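
    The day-to-day usage is near-identical anyway; installing some hypothetical package foo looks like:

    $ sudo apt install foo     # Debian/Ubuntu
    $ sudo nala install foo    # Debian, via the Nala wrapper
    $ sudo dnf install foo     # Fedora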

    Simply put: every distro needs its own package manager because the distros handle packages differently, from the way software is bundled and distributed, to where files reside in the filesystem.

    E.g. APT is so friendly because of how rigid Debian is about the structure and metadata bundled within the .deb archive, which Pacman users tend to consider unnecessarily restrictive bloat that slows downloads and installation. Meanwhile, yay (and other AUR helpers) compiles packages from source.
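
    You can see that rigidity for yourself: a .deb is just an ar archive with a fixed layout (compression suffixes vary by package), e.g. for some hypothetical package:

    $ ar t foo.deb
    debian-binary
    control.tar.xz
    data.tar.xz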

    Although there are some that work across distros, like Nix or Homebrew. Plus there’s always flatpak or AppImages or (shudder) Snaps.

    And of course, if you want people to think you’re basically a programmer, there’s always

    $ git clone <repository URL>
    $ cd <repository directory>
    $ make
    $ sudo make install
    

    (for software that is packaged with a Makefile)







  • iw dev <interface> station dump will show every metric about the connection, including the signal strength and average signal strength.

    It won’t show it as an ASCII graphic the way nmcli does, but it shouldn’t be hard to write a wrapper script that greps out that info and converts it to a simplified output, if you’re willing to put in the effort of understanding the dBm numbers (see the sketch below).

    E.g. -10 dBm is the maximum possible and -100 dBm the minimum (for the 802.11 spec), but the scale is logarithmic, so -90 dBm carries 10x the power of the absolute minimum needed for connectivity, and I can only get ~-20 dBm with my laptop touching the AP.

    Basically my point is that the good ol’ “bars” method of demonstrating connection strength was arbitrarily decided and isn’t closely tied to connection quality. This way you get to decide what numbers you want to equate to a 100% connection.
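
    A minimal sketch of such a wrapper, assuming a POSIX shell, a single associated AP, and the -100 to -10 dBm range described above (the interface name is just a default; pass your own as the first argument):

    #!/bin/sh
    # Map the signal field from `iw station dump` onto a 0-100% scale.
    IFACE="${1:-wlan0}"
    DBM=$(iw dev "$IFACE" station dump | awk '$1 == "signal:" { print $2; exit }')
    [ -n "$DBM" ] || { echo "no station found on $IFACE"; exit 1; }
    # Linearly map [-100, -10] dBm onto [0, 100]%.
    PCT=$(( (DBM + 100) * 100 / 90 ))
    [ "$PCT" -gt 100 ] && PCT=100
    [ "$PCT" -lt 0 ] && PCT=0
    echo "$IFACE: $DBM dBm (~$PCT%)"

    From there, bucketing the percentage into however many “bars” you like is a one-liner.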


  • I’m a big fan of the idea of efficient computing, and I think we’d see more power savings on the end-user side through hardware. I don’t need an Intel i9-nteen50 and a GeForce 4090 to mindlessly ingest videos or browse Lemmy. In fact, I could get away with that using less power than my phone uses; we really should move to the ARM model of low-power cores suitable for most tasks and performance cores that only turn on when necessary. Pair that with less bloatware and you’re getting maximum performance per instruction run.

    SoCs also have the benefit of power-efficient GPU and memory, while standardizing hardware so programmers can optimize for the platform again instead of getting lost in APIs and driver bloat.

    The only downside is the difficulty of upgrading hardware, but CPUs (and GPUs) are basically black boxes to the end user already, and no one complains about not being able to upgrade just the L1 cache (or VRAM).

    Imagine a future where most end-user mobos are essentially just a socket for a standardized socketed SoC, some M.2 ports, and of course the PCIe slots (with the usual hardwired ports for peripherals). Desktops and laptops would generate less waste heat, computers would use less electricity, graphical software development would be less of a fustercluck (imagine the man-hours saved), there’d be less e-waste (imagine not needing a new mobo for a new chipset when you want to upgrade your CPU after 5 years), and you’d even be able to upgrade laptop processors.

    Of course the actual implementation of such a standard would necessarily get fuckered by competing interests and people who only want to see the numbers go up (both profit-wise and performance-wise) and we’d be back where we are now… But a gal can dream.


  • From an outsider perspective (I haven’t used Nix at all), the downsides I see are that it’s extra software on top of the defaults for any given distro, it’s not optimized for the distro (meaning it might pull in dependencies that already exist or skip distro-specific APIs/libs), and it doesn’t adhere to the motivations of the distro (e.g. the DFSG for Debian).

    And of course, most of the packages are community-maintained, and there’s the immutability, which might be a hindrance to some use cases, but not for me.

    All in all, not really the worst if you’re not worried about space or squeezing out the absolute most performance, and aren’t an ideologue; but it’s enough to make me stick with APT. I chose Debian for its commitment to FOSS, not for stability or performance.