ModerateImprovement@sh.itjust.works to Technology@lemmy.world · English · 2 months ago
The rise of the ‘machine defendant’ – who’s to blame when an AI makes mistakes? (theconversation.com)
NeoNachtwaechter@lemmy.world · English · 2 months ago

> When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

The person who allowed the AI to make these decisions autonomously. We should do it the way Asimov showed us: create “robot laws” similar to slavery laws. In principle, the AI is a non-person, and therefore a person must take responsibility.
Nomecks@lemmy.ca · English · edited · 2 months ago

The whole point of Asimov’s three laws was to show that they could never work in reality, because it would be very easy to circumvent them.