An AI resume screener had been trained on CVs of employees already at the firm, giving people extra marks if they listed “baseball” or “basketball” – hobbies that were linked to more successful staff, often men. Those who mentioned “softball” – typically women – were downgraded.

Marginalised groups often “fall through the cracks, because they have different hobbies, they went to different schools”

  • Rentlar@lemmy.ca · 8 months ago

    Of course AI has bias, casual racism and sexism included. It’s been trained on a whole workforce that’s gone through the same.

    I’ve gotten calls for jobs I’m way underqualified for with some sneaky tricks, which I’ll hint involve providing a resume that looks normal to human eyes but, when reduced to plaintext, essentially regurgitates the job posting in full for a machine to read. Of course I don’t make it past the first interview or two in such cases, but it’s a tip for my fellow Lemmings going through the bullshit process.

    • spujb@lemmy.cafe · 8 months ago

      fucking bonkers that institutionalized racism can exist to such a degree that it shows up IN OUR COMPUTERS.

      we’re so racist we made the computers discriminatory too.

      • TheMurphy@lemmy.world · 8 months ago

        I don’t think you know how LLMs are trained, then. A model can become racist by mistake.

        An example: say a society has 100,000 white people and 50,000 black people, and the statistics show that twice as many white people were hired as black people. What does this tell you?

        Obvious! There are also twice as many white people to begin with, so black and white people are hired at the same rate! But what does the AI see?

        It sees white candidates being hired twice as often, and it can lean towards doing the same.

        You see how this was, and is, in no way racist, but it ends up that way as a consequence of something completely different.

        TL;DR: People are still racist, but that’s not always why the AI is.
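        The base-rate effect described above can be sketched in a few lines of Python. The group names and counts are hypothetical, just mirroring the comment’s example:

```python
# Hypothetical numbers from the example: group sizes and hires.
population = {"white": 100_000, "black": 50_000}
hired = {"white": 10_000, "black": 5_000}  # twice as many white hires in raw counts

# Looking only at raw counts, white candidates appear to be hired 2x as often...
count_ratio = hired["white"] / hired["black"]
print(count_ratio)  # 2.0

# ...but normalising by group size shows an identical 10% hiring rate per group.
rates = {group: hired[group] / population[group] for group in population}
print(rates)  # {'white': 0.1, 'black': 0.1}
```

        A model trained on the raw counts alone picks up the 2x disparity; only the normalised rates show the groups are treated equally.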

        • Nollij@sopuli.xyz · 8 months ago

          I suppose it depends on how you define by mistake. Your example is an odd bit of narrowing the dataset, which I would certainly describe as an unintended error in the design. But the original is more pertinent- it wasn’t intended to be sexist (etc). But since it was designed to mimic us, it also copied our bad decisions.