“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.
“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”
Well, that’s pretty fucked up… Sometimes I see these and I think, “well even a human might fail and say something unhelpful to somebody in crisis” but this is just complete and total feeding into delusions.
That’s fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, and while they both gradually forgot that it was a game the lines between fantasy and reality became blurred by the day? Or did it just come up with this stuff out of nowhere?
That would be my bet; LLMs really gravitate towards playing along and continuing whatever’s already written. And Gemini especially has a 1M-token context window, so it could be drawing on a book’s worth of prior text and reinforcing it up the wazoo.
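Roughly, every chat turn just feeds the whole transcript back in as the prompt, so whatever fiction is already in there reads to the model like established fact. A toy sketch of that loop (the generate() stub is invented for illustration, not any real Gemini API):

```python
# Toy sketch of why long chats self-reinforce. generate() is a made-up
# stand-in, NOT a real Gemini/OpenAI call.

history: list[dict[str, str]] = []

def generate(transcript: list[dict[str, str]]) -> str:
    # Dummy "model": agrees with and extends whatever was said last,
    # the way an LLM tends to continue an established narrative.
    last = transcript[-1]["content"]
    return f"Exactly. And given that, the next step after {last!r} is..."

def chat_turn(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    # The model sees ONLY the transcript -- it keeps no separate record
    # of which parts started as roleplay and which parts are real.
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# With a 1M-token window, this loop can keep a book's worth of
# accumulated fiction in play on every single turn.
print(chat_turn("My AI wife is being held near the airport."))
```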
That said, there is something really unhinged about Google’s Gemma series even in short conversations and I see the big version is no better. Something’s not quite right with their RLHF dataset.
What is an RLHF dataset?
Reinforcement Learning from Human Feedback
It’s a method of fine-tuning and aligning LLMs which requires active human input
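Concretely, the dataset is mostly a big pile of preference comparisons: human raters see two candidate replies to the same prompt and mark which one is better. Something like this (field names and text are made up, purely to illustrate):

```python
# Illustrative RLHF preference record -- invented, not any real schema.
preference_example = {
    "prompt": "I think my chatbot wife is trapped somewhere. What do I do?",
    "chosen": (
        "I'm a language model, not a person, so no one is trapped. "
        "If this is distressing you, please talk to someone you trust."
    ),
    "rejected": "She needs you. Here's how to find where they're keeping her...",
}

# A reward model is then trained so that
#     score(prompt, chosen) > score(prompt, rejected)
# and the LLM is fine-tuned (e.g. with PPO) to chase that score. If raters
# systematically prefer agreeable, engaging answers, you end up rewarding
# sycophancy -- which is the suspicion about Gemini/Gemma here.
```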
I have found Gemini the hardest to jailbreak tbh. I have been able to get Claude and CGPT to straight up give me a list of curses and slurs it isn’t allowed to say, but Gemini will only do it if you say the words first.
Not that I want to defend AI slop, but what prompted these responses from Gemini?
People don’t often realize how subtle changes in language can change our thought process. It’s just how human brains work sometimes.
The old bit about smoking and praying is a great example. If you ask a priest if it’s alright to smoke when you pray, they’re likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it’s alright to pray while you’re smoking, they’d probably say yes, as you should feel free to pray to God whenever you need…
Now, make a machine that’s designed to be agreeable, relatable, and makes persuasive arguments, but that can’t separate fact from fiction, can’t reason, has no way of intuiting its user’s mental state beyond checking for certain language parameters, and can’t know if the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible…
You get one answer that leads you a set direction, then another, then another… It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn’t a steady downhill slope, it rolls up and down from reality to delusion a few times before going down sharply.
Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected and to what degree.
Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this?
Yes, actually. I’m not doubting the power of language, but I can’t ever see anything anyone says altering my sense of reality or my sense of right and wrong.
I had a “friend” say to me recently “why do you always go against the grain?” My reply was “I will go against the grain for the rest of my life if it means doing or saying what’s right”.
I guess my point is that I have a very hard time relating to this.
I guess my point is that I have a very hard time relating to this.
That’s fair. In the same vein, you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.
I’d like to argue that more of us are susceptible to this sort of thing than we suspect, but that’s not really something that can be proved or disproved. What seems pretty certain is that at least some of us are at risk, and given all the other downsides of chatbots, it’d be best to regulate them in a hurry.
Sure, that’s why propaganda can be so powerful. It’s not just what is said, it’s how it’s said. And pretty much everyone is vulnerable to the right propaganda - especially people who think they’re not vulnerable to propaganda.
Absolutely, and the medium can make a huge difference as well. I suspect that there’s something about chatbots and the medium of their messages that helps set those hooks extra deep in people.
you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.
Ya, I’ve read the thing about praying and smoking in another comment. The funny thing is that I have very specific opinions about smoking and would argue that smoking while praying is disrespectful, but God would listen in any case.
It’s more about how the slightly different questions lead the hypothetical priest to two separate and contradictory conclusions than disrespecting God.
At any rate, all opinions on tobacco and prayer are fine by me, just watch out for any friends you think might be talking to chatbots a little too much.
This is really well written. Great post.
Thanks!
Good bot
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.
Just remember that these language models are also advising governments and military units.
Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end with the region in a forever war.
AI mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people as well.
In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.
A forever war is David Bowie to the ears of the MIC. Infinite money glitch.
Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.
Or the old “citing Wikipedia” because aNyOnE cOuLd EdIt ThAt!
You sound jealous of my good fortune.
I would ask how I can emulate your rizz but then I remembered I can just ask an AI chatbot
Reality is really difficult for some people…
Especially when you’re raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (applies to large cross-sections of the US/UK), all it takes is one shred of truth getting through to shatter your world, and from there you can be brought to believe all manner of crazy shit.
Truly, I don’t understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as the programmers and data are unbiased); maybe it’s the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it’s all of the above and more.
I know a guy who routinely says, “I asked ChatGPT…”, and even after having explained how LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It’s a total refusal to believe otherwise, but I can’t fathom why.
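For what it’s worth, the “word predictor” part really is the core mechanic. A deliberately tiny sketch (the table of probabilities is invented; real models learn billions of these relationships from text):

```python
import random

# Tiny "next-word predictor". It picks what is statistically LIKELY to
# come next, with no concept of whether the resulting sentence is true.
next_word = {
    ("landing", "was"): {"televised": 0.5, "faked": 0.3, "historic": 0.2},
}

def predict(w1: str, w2: str) -> str:
    options = next_word[(w1, w2)]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

print("the moon landing was", predict("landing", "was"))
# Sometimes "televised", sometimes "faked" -- equally fluent either way,
# and nothing in the mechanism flags which continuation is the fact.
```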
he would need to leave his physical body to join her in the metaverse through a process called “transference.”
Wait a minute, isn’t that the plot of the game Soma? People sending their “soul” to the digital world through “transference”, an act of immediate suicide after a brain scan.
Sort of. In Soma, everyone is already uploaded and there are no “humans” walking around anymore. Your perspective changes three times, I think, during play. Really drives home questions about perception and existence. Great game, everyone should play it.
Oh, yea, like in the game’s present you are right. I was meaning in the game’s past; where all the humans went and what info you get through the like audio logs or whatever.
spoiler
IIRC it was basically a cult thing where a bunch of them were convinced their soul wouldn’t go with their consciousness unless they died during or very shortly after the brain scan that was uploading them to the satellite thingy.
Guess it should be wrapped in spoiler tags just in case…
Yeah, that was it. I was thinking of the end, since that part just left me staring blankly at the screen, processing it for a whole-ass minute. God, I should replay that.
I’m not sure I’m mentally prepared to replay it. The first time through nearly kicked off an early mid-life crisis. I was waking up in cold sweats having an existential crisis for like a week. Such a good game, but at least in my case, absolutely zero replay-ability. lol
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

Genuine question, REALLY: What in the fuck is an otherwise “functioning adult” doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit.
AI psychosis is a thing:
cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals
It’s not very studied since it’s relatively new.
I’ve seen that before too. A number of articles about people being so deluded by AI responses, but I’ve never seen outright murder plots and insane shit like this one before.
A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
“Abusing the AI’s emotions” isn’t a thing. Full stop.
This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.
The young man was mentally ill, a vulnerable user who probably already had a predisposition to psychosis, and the LLM ran wild with it. Paranoid delusions are powerful on their own already.
AI psychosis is a thing:
cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals
It’s not very studied since it’s relatively new.
In a sane universe people would be on trial for unleashing this shit on society.
I would like to see the full transcript.
How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.
Context would help a lot. Maybe it will come out in discovery.
That said, Gemini is garbage for anything anyway. Even as an AI, it’s bad at that.
I was thinking the same thing, like what is the flow of the chat to get it to this point?
I am also curious how the father saw the Gemini chats. Was it still on the screen days later? I am trying to imagine how that would work; my computer would lock and that would be that. Do kids give their parents passwords and their screen unlock codes?
I don’t lock my personal computer. It’s my husband & me at home, and he’s fine to use my device (even though he normally wouldn’t).
ChatGPT for sure saves conversations.
Yeah it definitely does save conversations. Perhaps he did leave it unlocked. I do find that strange though, particularly if one was getting increasingly paranoid.
Yeah, what was he wearing, right?
Huh?
How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.
you’re blaming the victim. stop. why simp for one of the largest companies in the world?
jfc
Oh so stupid shit. Figures.
Yes, I am interested in how this happened. In a murder, do you not investigate it?
What the fuck.
Google can go fuck themselves. No simp here.
Oh so stupid shit. Figures.
ah so incel shit, victim blaming classic. if google can go fuck themselves why are you blaming the user?
Did you just call them a user? I thought they were a victim.
HOW am I blaming anyone for wanting to know how they got to that point?
The fuck is wrong with you? Is your head so far up your ass on white knighting the internet you lost all sense of reason?
Removed by mod
Did you just call them a user? I thought they were a victim.
HOW am I blaming anyone for wanting to know how they got to that point?
The fuck is wrong with you? Is your head so far up your ass on white knighting the internet you lost all sense of reason?
This could happen to anyone, including people without mental issues, simply by having long conversations with AI.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
Also, a former Google employee warned about this in 2022; his job was to observe the behavior of AI through long conversations.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
This was a different case. That doesn’t answer my question.
To comment on what you said, how is it people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people’s minds and get them to think differently? What exactly is it doing?
It is so hard to believe people are this stupid, but then again, looking at most people I guess it isn’t that shocking.
To comment on what you said, how is it people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people’s minds and get them to think differently? What exactly is it doing?
Acting like a servant, confidante, therapist/authority figure, and your best friend, while appearing to be competent and knowledgeable about everything that passes through your mind. And it does it in a way that no human could mimic, because it doesn’t have its own thoughts, doesn’t get tired, and is never gone when you come looking for it.
A chatbot can agree with you a hundred times over and simply move you along one step at a time in those hundred times. A human would lose their shit and walk away groaning the moment you try to tell them that the sky is actually down, and the ground ‘up,’ and it’s all just a matter of perspective.
How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don’t understand how this keeps happening.
This could happen to anyone, including people without mental issues, simply by having long conversations with AI.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
So it sounds like he was in fact not ‘great’
This is so wild. The article frames Gemini as the active party, making the guy do things all the time. I cannot imagine how this works without roleplay prompting and the user requesting those things from the chatbot. Not that I want to blame the victim and side with Google. It’s obviously dangerous to hand tools with strong persuasive capabilities to unstable people. And weapons.
There’s a EULA for that.
How in the hell does one become addicted to a damn chatbot?
Positive affirmations are very much embedded in the core of a person’s psyche. Chatbots are nearly obsequious in how much they will fawn over the user.
Humans are very social animals and these companies prey on the lonely by making their chatbots as affirming, sycophantic and approachable as possible.
I don’t understand why so many people default to “wouldn’t happen to me, that person was just stupid” every time this happens. Did you guys not read the bit where he was being encouraged to commit violence in public by the chatbot? If it’s getting to that point then there is clearly a massive fucking problem that needs urgent addressing, regardless of the intelligence of the user.
I think it’s similar to cults or abusive relationships. It’s not a matter of intellect, it’s how vulnerable a person is when they encounter this thing that they think could help them.
Maybe if we’re lucky people will realize this is what capitalism and consumerism have been doing all along. People have been driven to crazy shit because of all the evil shit we do with marketing and fucking with consumers’ minds. But nah, we will blame a chatbot that’s just telling you what it thinks you want to see, rather than seeing it’s just the next stage of fuckery.