catculation@lemmy.zip to Technology@lemmy.world · English · 9 months ago
Google explains Gemini’s “embarrassing” AI pictures of diverse Nazis (www.theverge.com)
jacksilver@lemmy.world · English · 9 months ago

It’s done because the underlying training data is heavily biased to begin with. That’s been a known issue with AI/ML for a long time; for example, “racist cameras” have been an issue for decades: https://petapixel.com/2010/01/22/racist-camera-phenomenon-explained-almost/

So they do this to try to correct for biases in their training data. It’s a terrible idea, and it shows the rocky path forward for GenAI, but it’s easier than actually fixing the problem ¯\_(ツ)_/¯
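The “correction” being described is typically a silent prompt-rewriting pass that runs before the user’s text reaches the image model. A minimal sketch of that naive approach, with all names and word lists hypothetical (this is not Google’s actual implementation), shows why it backfires on historically specific prompts:

```python
import random

# Hypothetical diversity qualifiers injected into prompts that mention
# people. The injection has no awareness of historical or cultural context.
DIVERSITY_TERMS = ["diverse", "of various ethnicities", "of different genders"]
PEOPLE_WORDS = {"person", "people", "man", "woman", "soldier", "doctor"}

def rewrite_prompt(prompt: str) -> str:
    """Naively append a diversity qualifier if the prompt mentions people."""
    words = set(prompt.lower().split())
    if words & PEOPLE_WORDS:
        # Blindly appending a qualifier is exactly how a historically
        # specific prompt ends up producing ahistorical images.
        return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"
    return prompt

print(rewrite_prompt("a 1943 German soldier"))   # qualifier appended
print(rewrite_prompt("a mountain landscape"))    # left unchanged
```

The underlying distribution of the training data is untouched; only the prompt is patched, which is why this is cheaper than actually fixing the data but fails in cases the rewrite rule never anticipated.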