This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update demanding a low-level fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly for companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren’t revolutionary. They’re basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises while neglecting the due diligence needed to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture in which CEOs, vice presidents, and managers are more easily swayed by vendor kickbacks, gifts, and lavish trips than by innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it’s ultimately the customers and their data that suffer.

  • breakingcups@lemmy.world · 2 months ago

    Please, enlighten me how you’d remotely service a few thousand BitLocker-locked machines that won’t boot far enough to get an internet connection, with non-tech-savvy users behind them. Pray tell what common “basic hygiene” practices would’ve helped, especially with CrowdStrike reportedly ignoring and bypassing the rollout policies set by their customers.

    Not saying the rest of your post is wrong, but this stood out as easily glossed over.

    • Riskable@programming.dev · 2 months ago

      what common “basic hygiene” practices would’ve helped

      Not using a proprietary, unvetted, auto-updating, 3rd party kernel module in essential systems would be a good start.

      Back in the day, companies used to insist on access to the source code for such things, along with regular third-party code audits, but these days companies are cheap and lazy and don’t care as much. They’d rather just invest in “security incident insurance” and hope for the best 🤷

      Sometimes they don’t even go that far and instead just insist upon useless indemnification clauses in software licenses. …and yes, they’re useless:

      https://www.nolo.com/legal-encyclopedia/indemnification-provisions-contracts.html#:~:text=Courts have commonly held that,knowledge of the relevant circumstances).

      (The text-fragment link should jump to and highlight the part explaining why they’re useless.)

    • Howdy@lemmy.zip · 2 months ago

      I was a Windows sysadmin for a decade. We had thousands of machines under endpoint management with BitLocker encryption. (I have since moved on to cloud Kubernetes DevOps.) Nothing on a remote endpoint offers any basic “hygiene” solution that could fix this mess automatically. I guess Intel’s BIOS-level remote management (I forget the name) could in theory let at least some poor tech remote in, given there’s an internet connection and the company paid the exorbitant price.

      All that to say: anything involving end-user machines that won’t boot is a nightmare, and with BitLocker it’s even more complicated. (Hope your BitLocker key synced… lol.)
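
      On the “hope your key synced” point: a minimal sketch of how one might audit that before disaster strikes, assuming keys are escrowed to on-prem AD as msFVE-RecoveryInformation child objects of the computer account (the domain controller, service account, and DNs below are made up; orgs escrowing to Entra ID/Intune would check there instead):

      ```python
      # Hypothetical sketch: confirm a machine's BitLocker recovery key actually got
      # escrowed to AD. Server, account, and DNs are invented; read permissions on
      # the recovery objects are assumed.
      from getpass import getpass
      from ldap3 import Server, Connection, SUBTREE

      server = Server("ldaps://dc01.example.corp", use_ssl=True)
      conn = Connection(server, user="svc-audit@example.corp", password=getpass(), auto_bind=True)

      computer_dn = "CN=WS-1234,OU=Workstations,DC=example,DC=corp"
      conn.search(
          search_base=computer_dn,
          search_filter="(objectClass=msFVE-RecoveryInformation)",
          search_scope=SUBTREE,
          attributes=["whenCreated"],  # just auditing that a key object exists
      )

      if conn.entries:
          for entry in conn.entries:
              print("escrowed key object:", entry.entry_dn, "created", entry.whenCreated)
      else:
          print("no recovery key escrowed for", computer_dn, "- the nightmare case")
      ```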

      • LrdThndr@lemmy.world · 2 months ago

        Bro. PXE boot image servers. You can remotely image machines from hundreds of miles away with a few clicks and all it takes on the other end is a reboot.
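
        To make the “few clicks” part concrete, here’s a minimal sketch of one common companion trick (nothing vendor-specific; the MAC addresses are made up and the PXE/imaging server is assumed to already exist): if office machines are set to try network boot before the local disk, Wake-on-LAN magic packets can nudge a whole batch into pulling a fresh image.

        ```python
        # Hypothetical sketch: mass-trigger a PXE reimage by broadcasting standard
        # Wake-on-LAN magic packets (UDP port 9). Assumes targets attempt network
        # boot first and the imaging server is already configured; MACs are fake.
        import socket

        def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
            """Magic packet = 6 bytes of 0xFF followed by the target MAC repeated 16 times."""
            mac_hex = mac.replace(":", "").replace("-", "")
            payload = bytes.fromhex("FF" * 6 + mac_hex * 16)
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(payload, (broadcast, port))

        if __name__ == "__main__":
            # Inventory would really come from a CMDB/endpoint manager; these are fake.
            for mac in ("00:11:22:33:44:55", "66:77:88:99:AA:BB"):
                wake(mac)
        ```

        Machines already stuck in a boot loop don’t need waking, of course; the point is just that once the PXE infrastructure exists, kicking off the reimage is scriptable rather than a desk-by-desk visit.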

        • wizardbeard@lemmy.dbzer0.com · 2 months ago

          With a few clicks and being connected to the company network, leaving anyone who can’t reach an office location SOL.

          • LrdThndr@lemmy.world · 2 months ago

            Hey, it’s not perfect, but a fix that gets you 10% of the way there is still 10% you don’t have to do by hand. Don’t let perfect be the enemy of good, my man.

  • r00ty@kbin.life · 2 months ago

    I think it’s most likely a little of both. The fact that most systems failed at around the same time suggests this was the default automatic upgrade/deployment option.

    So, for sure the default option should have had upgrades staggered within an organisation. But at the same time organisations should have been ensuring they aren’t upgrading everything at once.

    As it is, the way the upgrade was deployed made the software a single point of failure that completely negated redundancies and in many cases hobbled disaster recovery plans.
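
    As a rough illustration of that staggering idea (not CrowdStrike’s actual mechanism; the hostnames and ring count below are invented), hashing each hostname into a fixed set of rings gives a stable canary group you can update and bake before touching the rest:

    ```python
    # Illustrative only: stable "deployment rings" so an update hits a small canary
    # slice before the whole fleet. Hostnames and ring count are made up; a real
    # inventory would come from AD / MDM / the EDR console.
    import hashlib

    RINGS = 4  # ring 0 = canary, ring 3 = the long tail

    def ring_for(hostname: str) -> int:
        """Same host always lands in the same ring, so rollouts stay predictable."""
        return hashlib.sha256(hostname.encode("utf-8")).digest()[0] % RINGS

    if __name__ == "__main__":
        hosts = ["web-01", "web-02", "pos-terminal-17", "hr-laptop-042", "dc-backup-03"]
        for ring in range(RINGS):
            batch = [h for h in hosts if ring_for(h) == ring]
            print(f"ring {ring}: {batch}  <- deploy, bake, verify, then move on")
    ```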

    • DesertCreosote@lemm.ee · 2 months ago

      Speaking as someone who manages CrowdStrike in my company, we do stagger updates and turn off all the automatic things we can.

      This channel file update wasn’t something we could turn off or control. It’s handled by CrowdStrike themselves, and we confirmed that in discussions with our TAM and account manager at CrowdStrike while we were working on remediation.

  • TechNerdWizard42@lemmy.world · 2 months ago

    The issue is definitely corporate greed outsourcing everything to a mega-monolith IT company.

    Most IT departments are idiots now. Even 15 years ago, those were the smartest nerds in most buildings. They had to know how to do it all. Now it’s just installing the corporate overlord software and the bullshit spyware. When something goes wrong, you call the vendor’s support line. That’s not IT; you’ve just outsourced all your brains to a monolith that can go at any time.

    None of my servers running windows went down. None of my infrastructure. None of the infrastructure I manage as side hustles.