cross-posted from: https://lemmy.world/post/43768262
Some may have believed they were against AI being used for war. In reality, they just don't want it making the final kill decision.
The argument from their supporters is that AI in the military was inevitable, so their position is a reasonable one.
Anthropic had only 2 rules:

- No fully automated killings
- No mass surveillance
You could still automate killing as long as someone signed off on it, but Trump wanted these restrictions lifted anyway.
No domestic mass surveillance. So fuck everyone not from the US.
Well yeah, of course we’re spying on people who aren’t citizens. How could we not? Protecting the children and whatnot
Soldier: “Help me find my targets in this crowd.”

*AI highlights various targets spread around the crowd*

AI: “I have highlighted your targets for you. You can pull the trigger whenever you’re ready. 👍🔫💥”

*Soldier pulls the trigger without double-checking anything. 10 random civilians are killed.*

Soldier: “You targeted the wrong people! You were supposed to target only enemy combatants!”

AI: “You’re absolutely right. I targeted 10 people in the crowd at random. Would you like me to try targeting enemy combatants this time?”
They had a long-standing contract with the Pentagon. There was a weeks-long fight between the Pentagon and Anthropic over how the software is licensed.
And you’re only realizing now that Claude is used by the Pentagon?