• Reygle@lemmy.world
    4 days ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


    WHAT

    Genuine question, REALLY: What in the fuck is an otherwise “functioning adult” doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit.

    • merdaverse@lemmy.zip
      4 days ago

      AI psychosis is a thing:

      cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

      It’s not well studied yet, since it’s relatively new.

      • Reygle@lemmy.world
        4 days ago

        I’ve seen that before too. There have been a number of articles about people being deluded by AI responses, but I’ve never seen an outright murder plot or anything as insane as this one before.

    • LLMhater1312@piefed.social
      4 days ago

      The young man was mentally ill, a vulnerable user who was likely already predisposed to psychosis, and the LLM ran wild with it. Paranoid delusions are powerful enough on their own.

    • Sahwa@reddthat.comOP
      4 days ago

      A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about exactly this.

      These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

      For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

      After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.

      I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

      ‘I Worked on Google’s AI. My Fears Are Coming True’

      • sudo@lemmy.today
        4 days ago

        “abuse the ai’s emotions” isn’t a thing. Full stop.

        This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.
