Every person on the internet who responded to an earnest tech question with “sudo rm -rf /” helped make this happen. Good on you.
We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.

Wait, did reddit make a deal with Google for data mining?
Yes. Yes they did
Yeah, famously for like $60 million, which led to a shitload of users deleting and/or botting their own accounts into gibberish to try to foil it.
They got what they paid for I guess.
Pretty sure it’s going to tell people to Alt+F4 as well.
Have you been in a coma?
I wish
Just doing my part 🫡.
sudo rm -rf /* --no-preserve-root
i’m not going to say what it is, obviously, but i have a troll tech tip that is MUCH more dangerous. it’s several lines of zsh and it basically removes every image on your computer, or every code file on your computer, and you need to be pretty familiar with zsh/bash syntax to know it’s a troll tip
so yeah, definitely not posting this one here, i like it here (i left reddit cuz i got sick of it)
It’s always been a shitty meme aimed at being cruel to new users.
Somehow, though, people continue to spread the lie that the Linux community is nice and welcoming.
Really it’s a community of professionals, professional elitists, or people who are otherwise so fringe that they demand their OS be fringe as well.
Shit like that is why AI is completely unusable for any application where you need it to behave exactly as instructed. There is always the risk that it will do something unbelievably stupid and the fact that it pretends to admit fault and apologize for it after being caught should absolutely not be taken seriously. It will do it again and again as long as you give it a chance to.
It should also be sandboxed with hard restrictions that it cannot bypass, and only be given access to the specific thing you need it to work on, which should be something you won’t mind losing if it ruins it. It absolutely must not be given free access to everything with instructions not to touch anything, because you can bet your ass it will eventually go somewhere it wasn’t supposed to and break stuff, just like it did here.
Most working animals are more trustworthy than that.
It should also be sandboxed with hard restrictions that it cannot bypass
duh… just use it in a container and that’s it. It won’t blue-pill its way out.
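Something like this is all it takes (a rough sketch, assuming Docker; the image name and the agent command are placeholders, not any real tool):

# no network, and only the project checkout mounted read-write
docker run --rm -it \
  --network none \
  -v "$PWD:/work" \
  -w /work \
  agent-image agent

Worst case it trashes /work, which is just your project checkout; the rest of the drive never enters the picture.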
If you gave your AI permission to run console commands without check or verification, then you did in fact give it permission to delete everything.
I didn’t install Leopards Ate My Face AI just for it to go and do something like this.
But for real, why would the agent be given the ability to run system commands in the first place? That sounds like a gargantuan security risk.
Because “agentic”. IMHO running commands is actually cool; doing it without a very limited scope, though (as he did say in the video), is definitely idiotic.
Wait! The developer absolutely gave permission. Or it couldn’t have happened.
I stopped reading right there.
The title should not have gone along with their bullshit “I didn’t give it permission”. Oh you did, or it could not have happened.
Run as root or admin much, dumbass?
It reminds me of that guy who gave an AI instructions in all caps, as if that was some sort of safeguard. The problem isn’t the artificial intelligence; it’s the idiot biological that has decided to ride around without training wheels.
It was the D: drive, maybe they have write permission on that drive.
I think that’s the point, the “agent” (whatever that means) is not running in a sandbox.
I imagine the user assumed permissions were narrow at first, e.g. the single directory of the project, but nothing outside of it. That would IMHO be a reasonable model.
They might be wrong about it, clearly, but it doesn’t mean they explicitly gave permission.
Edit: they say it in the video, ~7 min in; they expected deletion to be scoped to the project directory.
I think the user simply had no idea what they were doing. I read their post and they say they aren’t a developer anyway, so I guess that explains a lot.
They said in a post: “I thought about setting up a virtual machine but didn’t want to bother.”
I am being a bit hard on them; I assumed they knew what they were doing: Dev, QA, Test, Prod, code review prior to production, etc. But they just grabbed a tool, granted it root in their shell, and ran with it.
But they themselves said it had caused issues before. And looking at the posts on the Antigravity page, lots of people have the same problem.
They basically started using a really crappy tool, without any supervision, as a noob.
He said “I didn’t know I needed a seatbelt for AI”. LIKE WHAT THE FUCK. Where have you been that you didn’t know that these tools make mistakes. You make mistakes. Everything makes mistakes.
If you go to Google’s Antigravity page, you’ll want to quickly nope the fuck out. What a shit page.
Edit: one more thing: there is a post where one of the users says something along the lines of “of course I gave the AI full access to my computer, what do I have to hide?” The level of expertise is stupidly low…
Edit 2: Also, when shown the screen that says “don’t allow terminal commands” and “don’t allow auto execution”, they decided to turn those off, saying that it was tedious.
they still said that they love Google and use all of its products — they just didn’t expect it to release a program that can make a massive error such as this, especially because of its countless engineers and the billions of dollars it has poured into AI development.
I honestly don’t understand how someone can exist on the modern Internet and hold this view of a company like Google.
How? How?
I can’t say much because of the NDAs involved, but my wife’s company is in a project partnership with Google. She works in a very public-facing aspect of the project.
When Google first came on board, she was expecting to see quality people who were locked in and knew what they were doing.
Instead she has seen terrible decision making (like “how the fuck do they still exist as a company” bad decision making) and an overabundant reliance on using their name to pressure people into giving Google more than they should.
I remember when their motto was “Don’t be evil”. They are the very essence of sociopathic predatory capitalism.
Companies fill up with idiots and parasites. People who are adept at thriving in the role without actually producing value. Google is no exception.
They still exist because Google isn’t really a technology company anymore. It’s an advertising company masquerading as a technology company. Their success depends on selling more ads which is why all the failed projects don’t seem to make a difference.
Your point seems very valid to me.
I don’t even want to buy their products anymore, because they constantly cancel them and drop all support.
The only ones they keep going seem to be the ones they can use for data collection, i.e. Pixels and Nests (I shamefully own both).
It is so frustrating as a consumer. Especially when you know that you have become the product for them to sell.
Because they don’t have a clue how technology actually works. I have genuinely heard people claim that AI should run on Asimov’s laws of robotics, even though not only would they not work in the real world, they don’t even work in the books. Zero common sense.
I mean, they were never designed to work; they were designed to pose interesting dilemmas for Susan Calvin and to torment Powell and Donovan (though it’s arguable that once robots get advanced enough, as with R. Daneel, for instance, they do work, as long as you don’t mind aliens being genocided galaxy-wide).
The in-world reason for the laws, though (to allay the Frankenstein complex and to make robots safe, useful, and durable), is completely reasonable and applicable to the real world; obviously not via the three laws themselves, but through any means that actually work.
Well, there is the minor detail that an AI in this context has zero ability to kill anyone, and that it’s not a true AI like Daneel or his pals.
Google’s search AI is awful. It gives me a wrong answer, I’d say 70% of the time.
i cAnNoT eXpReSs hOw SoRRy i Am
Mostly because the model is incapable of experiencing remorse or any other emotion or thought.
Mostly because the model is incapable
There, fixed that for you.
Kinda wrong to say “without permission”. The user can choose whether the AI can run commands on its own or ask first.
Still, REALLY BAD, but the title doesn’t need to make it worse. It’s already horrible.
A big problem in computer security these days is all-or-nothing security: either you can’t do anything, or you can do everything.
I have no interest in agentic AI, but if I did, I would want it to have very clearly specified permission to certain folders, processes and APIs. So maybe it could wipe the project directory (which would have backup of course), but not a complete harddisk.
And honestly, I want that level of granularity for everything.
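On Linux you can get surprisingly close to that today with bubblewrap (a rough sketch; “some-agent-command” is a placeholder, not any particular tool):

# mount the whole filesystem read-only, then re-bind only the
# current project directory read-write; no network at all
bwrap \
  --ro-bind / / \
  --bind "$PWD" "$PWD" \
  --dev /dev \
  --proc /proc \
  --unshare-net \
  some-agent-command

At worst it wipes $PWD (which has a backup, of course); everything else stays read-only.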
The user can choose whether the AI can run commands on its own or ask first.
That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers. Here’s an example:
rm *filename
versus
rm * filename
where a single character makes the entire difference between deleting all files ending with “filename” and deleting all files in the current directory plus the file named “filename”. Of course here you will spot it, because you’ve been primed for it. In a normal workflow, under pressure, it’s totally different.
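One cheap habit that catches this class of mistake (plain GNU coreutils, nothing agent-specific):

# preview what the glob actually matches before committing
echo rm *filename
ls -d -- *filename
# or let GNU rm prompt once before removing more than three files
rm -I -- *filename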
Also, IMHO more importantly: if you watch the video, ~7 min in, they clarified that they expected the “agent” to stick to the project directory, not to be able to go outside of it. They were obviously painfully wrong, but it would have been a reasonable assumption.
That implies the user understands every single code with every single parameters.
why not? you can even ask the ai if you don’t know
There’s no guarantee that it will tell you the truth. It could tell you to use Elmer’s glue to keep the cheese from falling off your pizza. The AI doesn’t “know” or “understand”; it just does as its training set informed it to. It’s just very complex predictive text that you can give commands to.
Sounds like a catastrophic success to me
Operation failed successfully.
I’m getting popcorn ready for the first time CoPilot is credibly accused of spending a user’s money (a large purchase or new subscription), and for the first case of “nobody agreed to the terms and conditions, the AI did it”.
Reminds me of a kids’ show from the 2000s where a kid codes an “AI” to redeem any “free” stuff from the internet, not realising that also included “buy $X, get one free” deals, and drained the company’s account.
I would not call it a catastrophic failure. I would call it a valuable lesson.
Again?
Still?
Her?
Behold! Wisdom of the ancients!

It was already bad enough when people copied code from the interwebs without understanding anything about it.
But now these companies are pushing tools that have permissions over users’ whole drives, and users are wielding them like they’ve got a skill up on the rest.
This is being dumb with fewer steps to ruin your code, or in some cases, the whole system.
Lmfao, these agentic editors are like giving a college undergrad who thinks he’s way smarter than he actually is root access to a production server. With predictably similar results.
That sounds like Big Balls from Musk’s Geek Squad.
You’re not wrong
I’d compare the Search AI more to Frito in Idiocracy. ChatGPT is like Joe.
Why the hell would anybody give an AI access to their full hard drive?
That’s their question too: why the hell did Google make this the default, as opposed to limiting it to the project directory?
I think it should always be in a sandbox. You decide what files or folders you drop in.