- cross-posted to:
- technology@lemmy.world
an AI resume screener had been trained on CVs of employees already at the firm, giving people extra marks if they listed “baseball” or “basketball” – hobbies that were linked to more successful staff, often men. Those who mentioned “softball” – typically women – were downgraded.
Marginalised groups often “fall through the cracks, because they have different hobbies, they went to different schools”
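Not the actual screener from the article, but a minimal sketch of the mechanism it describes: if you train a plain bag-of-words classifier on past hiring outcomes (scikit-learn here, on made-up CVs), hobby words that merely correlate with who got hired pick up real weight.

```python
# Hypothetical sketch (synthetic data, not the real screener): a
# bag-of-words model trained on past hires latches onto hobby words
# that act as gender proxies.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up "historical" CVs: past successful staff skewed male, so
# "baseball"/"basketball" co-occur with the positive label.
cvs = [
    "python sql baseball", "java baseball", "sql basketball",
    "python softball", "java sql softball", "python softball",
]
hired = [1, 1, 1, 0, 0, 0]  # past outcomes the model learns to imitate

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The hobby words carry the signal, even though they say nothing
# about job ability:
for word, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {coef:+.2f}")
# "baseball"/"basketball" end up with positive weights, "softball"
# with a negative one: the bias is learned, not programmed in.
```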
I don’t think you know how LLMs are trained, then. They can become racist by mistake.
An example: say there are 100,000 white people and 50,000 black people in a society, and the statistics show that twice as many white people as black people have been hired. What does that tell you?
Obviously, there are also twice as many white people to begin with, so black and white people are hired at the same rate! But what does the AI see?
It sees twice as many white people being hired, and it can lean towards doing the same.
You can see how this was/is in no way racist, but it ends up that way as a consequence of something completely different.
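Here’s the arithmetic as a tiny sketch (illustrative numbers only), showing why raw hire counts and hire rates tell different stories:

```python
# Minimal sketch of the base-rate trap described above
# (illustrative numbers, not from any real dataset).
white_pop, black_pop = 100_000, 50_000
hire_rate = 0.10  # same rate for everyone: no discrimination

white_hires = int(white_pop * hire_rate)  # 10,000
black_hires = int(black_pop * hire_rate)  # 5,000

# A model trained on raw counts sees twice as many white hires...
print(white_hires / black_hires)  # 2.0

# ...but normalising by group size shows identical rates:
print(white_hires / white_pop, black_hires / black_pop)  # 0.1 0.1

# If the model imitates raw counts instead of per-group rates, it
# reproduces the population skew as if it were a hiring preference.
```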
TL;DR: People are still racist, but that’s not always why the AI is.
I suppose it depends on how you define “by mistake”. Your example is an odd bit of narrowing the dataset, which I would certainly describe as an unintended error in the design. But the original is more pertinent: it wasn’t intended to be sexist (etc.), but since it was designed to mimic us, it also copied our bad decisions.