• 0 Posts
  • 129 Comments
Joined 3 years ago
Cake day: June 15th, 2023


  • Third, Musk deflects from accusations he’s a Nazi (“that’s a crazy thing to say”) but he never responds by saying “What Hitler did was horrible and I’m not a Nazi and detest their ideology” which is what someone would say if not a Nazi.

    This is the most important point, IMO. Fascists who want mainstream acceptance know not to have swastika tattoos and not to openly say they love Hitler. They will always try to have some plausible deniability. Don’t get dragged into their bullshit arguments. There’s no point in debating whether the nazi salute was some other motion that was misinterpreted. Even if it was, the first thing a non-nazi would do would be to clarify that they are not a nazi and don’t want nazis to think they’re their allies. Even if Musk had completely inadvertently stumbled upon the love and support of the nazis via a series of misunderstandings (lol), at this point in time he is deliberately choosing to be part of them.

    Here is Musk at 3:08:01 saying he’s not a nazi… and then going on to say you’re not a nazi unless you’re literally invading Poland and doing the holocaust. According to him, that is literally the only objectionable thing about the nazis, not their “fashion sense or mannerisms”. Yes, that was a direct quote. There is really only one type of person who would not mention as objectionable the nazi ideology, or all the acts of violence that are not at the same scale as the holocaust.



  • There’s no reason to be rude and insulting. It doesn’t make the other person look lazy; it just makes you look bad, especially when you end up being wrong because you didn’t do any research either. The article is garbage. It’s obviously written by someone who wants to talk about why they don’t like bcachefs, which would be fine, but they make it look like that’s why Linus wanted to remove bcachefs, which is a blatant lie.

    Despite this, it has become clear that BcacheFS is rather unstable, with frequent and extensive patches being submitted to the point where [Linus Torvalds] in August of last year pushed back against it, as well as expressing regret for merging BcacheFS into mainline Linux.

    But if we click on the article’s own source in the quote we see the message (emphasis mine):

    Yeah, no, enough is enough. The last pull was already big.

    This is too big, it touches non-bcachefs stuff, and it’s not even remotely some kind of regression.

    At some point “fix something” just turns into development, and this is that point.

    Nobody sane uses bcachefs and expects it to be stable, so every single user is an experimental site.

    The bcachefs patches have become these kinds of "lots of development during the release cycles rather than before it", to the point where I’m starting to regret merging bcachefs.

    If bcachefs can’t work sanely within the normal upstream kernel release schedule, maybe it shouldn’t be in the normal upstream kernel.

    This is getting beyond ridiculous.

    Stability has absolutely nothing to do with it. On the contrary, bcachefs is explicitly expected to be unstable. The entire issue is the developer, Kent Overstreet, refusing to follow the Linux development schedule and pushing features during a period when only bug fixes are allowed. This point is reiterated in the rest of the thread, if anyone doubts whether it is stated clearly enough in the above message alone.



  • patatahooligan@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    I see a few top-level comments agreeing with the sentiment that users are being entitled or abusive, but what are they actually referring to? The linked image certainly contains no evidence of such behavior. Someone who claims to be the developer filed a deletion request for the duckstation-git package on the AUR, saying:

    Every time, it turns into abuse towards me, as you can also see in the comments for the package.

    I read through a few pages of the comments here and they’re mostly people talking about fixing issues with the package, and what to do about the dev purposely breaking the build… I only found a single message that could be called abuse:

    @eugene, not really but i suspect it’s an uphill battle, check the commit message: https://github.com/stenzek/duckstation/commit/30df16cc767297c544e1311a3de4d10da30fe00c

    FWIW, I’m moving to pcsx-redux, I rather run a little bit less advanced PSX emulator than software by this upstream asshat. Regardless, much thanks for maintaining the AUR package so far.

    And even this is not a good example of what stenzek is describing. For one, it’s obviously a reaction to stenzek’s hostile changes, not the abusive support-seeking user that stenzek is talking about. The user is also explicitly moving to a different emulator and not expecting any change from duckstation.




  • <package>.install scripts which don’t have to be explicitly mentioned in the PKGBUILD if it shares the same name as the package.

    Can you show a reproducible example of this? I couldn’t get a <package>.install included in a test package I made without explicitly adding it as install=<package>.install.

    Most people claim they read the PKGBUILD (which I don’t believe tbh)

    If you don’t trust people to read PKGBUILDs, I’m curious which form of software installation (outside of official repositories) you find safe.
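    For reference, this is roughly the test I ran (package name and contents are made up). makepkg only picked up the install script once it was named explicitly:

```
# Minimal PKGBUILD for a hypothetical package "testpkg".
pkgname=testpkg
pkgver=1
pkgrel=1
pkgdesc="install-script test"
arch=('any')
license=('MIT')
# Comment this line out and, at least in my test, makepkg ignores
# testpkg.install even though the file name matches $pkgname:
install=testpkg.install

package() {
  mkdir -p "$pkgdir/usr/share/$pkgname"
}
```

    with testpkg.install next to the PKGBUILD, containing a trivial post_install() that just echoes a message.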







  • Of course they’re not “three laws safe”. They’re black boxes that spit out text. We don’t have enough understanding and control over how they work to force them to comply with the three laws of robotics, and the LLMs themselves do not have the reasoning capability or the consistency to enforce them even if we prompt them to.


  • Many times these keys are obtained illegitimately and they end up being refunded. In other cases the key is bought from another region so the devs do get some money, but far less than they would from a regular purchase.

    I’m not sure exactly how the illegitimate keys are obtained, though. Maybe in trying to not pay the publisher you end up rewarding someone who steals peoples’ credit cards or something.



    NVMe drives claim sequential write speeds of several GBps (capital B, as in bytes). The article talks about 10Gbps (lowercase b, as in bits), i.e. 1.25GBps. So even for raw storage writes, the NVMe might not be the bottleneck in this scenario.

    And then there’s the fact that disk writes are buffered in RAM. These motherboards are not available yet, so we’re talking about future PC builds. It is safe to say that many of them will be used in systems with 32GB of RAM. If you’re idling or doing light activity while waiting for a download to finish, most of your RAM will be free and you’d be able to buffer 25-30GB before storage speed becomes a factor.
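    The unit conversion is easy to sanity-check (figures from above; the ~28GB of free RAM is my assumption for a lightly loaded 32GB system):

```shell
# 10 Gbps (bits) -> GB/s (bytes), and how long ~28 GB of free RAM
# can buffer a download running at the full link rate.
awk 'BEGIN {
  link_gbps = 10             # link speed, gigabits per second
  link_GBps = link_gbps / 8  # same speed in gigabytes per second
  free_ram  = 28             # assumed free RAM in GB
  printf "%.2f GB/s; buffers ~%.0f s of full-rate download\n",
         link_GBps, free_ram / link_GBps
}'
# prints: 1.25 GB/s; buffers ~22 s of full-rate download
```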


    So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible or they aren’t. How are you getting at the hidden cells with blkdiscard?

    The idea is that blkdiscard will tell the SSD’s own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?

    I feel that, unless you know the SSD supports secure trim, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.

    After reading all of this I would just do both… Each method fails in different ways so their sum might be better than either in isolation.

    But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.
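    Concretely, “both” would look something like this. These commands are destructive, /dev/sdX is a placeholder for the target drive, and whether step 1 actually reaches the hidden cells depends entirely on the firmware:

```shell
# 1) Ask the drive's own controller to zero everything it manages; with
#    luck this covers cells the OS can't address (firmware-dependent).
blkdiscard -z /dev/sdX
# 2) Then overwrite the host-visible range anyway, as a second layer.
dd if=/dev/zero of=/dev/sdX bs=1M status=progress conv=fsync
```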


    I don’t see how attempting to overwrite would help. The additional blocks are not addressable on the OS side. dd will exit when it reaches the end of the visible device space, but blocks will remain untouched internally.

    The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.

    Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked “in most cases”, but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.