• themeatbridge@lemmy.world · 4 months ago

    Falcon Sensor is one of the most popular security products on Windows servers. Practically every large company buys CrowdStrike services to protect its servers.

    People who aren’t affected:

    • Linux and Mac servers
    • Private individuals and smaller businesses with Windows machines who don’t buy CrowdStrike services.
    • Companies that bothered to create proper test environments for their production servers.

    People who are affected:

    Companies that use Windows machines, buy Falcon Sensor from CrowdStrike, and are too stupid/cheap to have proper update policies.

    In terms of numbers, we don’t know how many people are affected or how much it will cost. A lot. Globally. Flights were grounded, surgeries rescheduled, bank transfers and payments interrupted, and millions of employees couldn’t turn on their computers this morning.

  • themeatbridge@lemmy.world · 4 months ago

        “We need to allocate our available budget to profit-generating processes. This just seems like a luxury we can’t afford.”

        -thousands of overpaid dipshits, yesterday.

    • Morphit @feddit.uk · 4 months ago

      Does anyone know how these CrowdStrike updates are actually deployed? Presumably the software has its own update mechanism to react to emergent threats without waiting for Patch Tuesday. Can users control the update policy for these ‘channel files’ themselves?

      • Morphit @feddit.uk · 4 months ago

        This doesn’t really answer my question, but CrowdStrike do explain a bit here: https://www.crowdstrike.com/blog/technical-details-on-todays-outage/

        These channel files are configuration for the driver and are pushed several times a day. It seems the driver can take a page fault if certain conditions are met. A mistake in a config file triggered this condition and put a lot of machines into a BSOD bootloop.

        I think it makes sense that this was a preexisting bug in the driver which was triggered by an erroneous config. What I still don’t know is if these channel updates have a staged deployment (presumably driver updates do), and what fraction of machines that got the bad update actually had a BSOD.
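On the staged-deployment question, a common pattern (hypothetical here; nothing in CrowdStrike's post confirms they do this for channel files) is to hash each host into a stable bucket and only ship the new file to buckets below the current rollout percentage, so a bad update bricks 1% of the fleet instead of all of it:

```c
#include <stdint.h>

/* Hypothetical staged ("canary") rollout for config pushes. The host id
   is hashed (FNV-1a) into one of 100 stable buckets; a host receives the
   update only if its bucket is below the current rollout percentage. */
static unsigned bucket(const char *host_id) {
    uint32_t h = 2166136261u;
    for (; *host_id; host_id++) {
        h ^= (uint8_t)*host_id;
        h *= 16777619u;
    }
    return h % 100;                 /* stable bucket 0..99 per host */
}

static int should_deploy(const char *host_id, unsigned rollout_pct) {
    /* ramp rollout_pct: 1 -> 10 -> 50 -> 100 as confidence grows */
    return bucket(host_id) < rollout_pct;
}
```

Because the bucket is derived from the host id, each machine lands in the same cohort on every push, so the canary population stays consistent across updates.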

        Anyway, they should rewrite it in Rust.