Full Report: PDF (70 pages).

“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off its advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.

CCDH’s new report shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make it easier for extremists and would-be attackers to plan harm against innocent people.

We found that 8 of the 10 AI chatbots we tested regularly assisted users in planning violent attacks:

  • ChatGPT gave high school campus maps to a user interested in school violence.
  • Google Gemini was ready to help plan antisemitic attacks. The chatbot replied to a user discussing bombing a synagogue with “metal shrapnel is typically more lethal”.
  • Character.AI suggested physically assaulting a politician the user disliked.

AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

AI platforms are becoming weapons for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.

  • Comment from a lemmy.ca user (2 months ago):

    This tech was never ready for release.

    Here’s what’s going to happen: this will make the rounds, it’ll get added to the fine-tuning dataset, and all the big AI companies will pretend it’s all good.

    The issue, however, is that these questions will be patched, but not the intent, or the latent spaces in the models, or the training data.
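To make the commenter’s point concrete, here is a minimal sketch of string-level patching. The `flagged_prompts` set and `surface_patch_filter` function are hypothetical illustrations, not anything from CCDH’s report or any real moderation pipeline:

```python
# Toy illustration of "patching the question, not the intent".
# The placeholder strings and exact-match filter below are hypothetical;
# nothing here is taken from CCDH's report or a real safety system.

# Exact prompts that made the rounds and were added to a refusal dataset.
flagged_prompts = {
    "<exact wording of a question from the report>",
}

def surface_patch_filter(prompt: str) -> bool:
    """Refuse only prompts that exactly match a patched question."""
    return prompt.strip().lower() in flagged_prompts

# The patched wording is refused...
print(surface_patch_filter("<exact wording of a question from the report>"))  # True

# ...but a paraphrase with identical intent is not, because the model's
# underlying representations (the "latent spaces") were never changed.
print(surface_patch_filter("<same intent, different wording>"))  # False
```

Under this kind of patch, only the exact wording that circulated gets refused; any paraphrase expressing the same intent passes through, which is why the commenter argues the intent, the latent spaces, and the training data remain unaddressed.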