“I am horrified” 😂 of course, the token chaining machine pretends to have emotions now 👏
Edit: I found the original thread, and it’s hilarious:
I’m focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.
This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.
-f in the chat
-rf even
Perfection
rm -rf
There’s something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about “being a failure”.
As a programmer myself, spiraling over programming errors is a human domain. That’s the blood, sweat, and tears that make programming legacies. These AIs have no business infringing on that :<
I’m reminded of the whole “I have been a good Bing” exchange. (apologies for the link to twitter, it’s the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )
wow this was quite the ride 😂
You will accept AI has “feelings” or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.
This would be hilarious if half the world wasn’t pushing for this shit
It’s still hilarious, it’s just also scary.
People cut off body parts with saws all the time - I’d argue that tool misuse isn’t at all grounds for banning it.
There are plenty of completely valid reasons to hate AI. Stupid people using it poorly just isn’t really one of them 🤷‍♂️
Sure, but if I built a 14-inch demo saw with no guard, got the government to give me permission to hand it to kindergartners, and then got everyone’s boss to REQUIRE their workers to use it for everything from slicing sandwiches to open-heart surgery, I think you might agree that it’s a problem.
Oh yeah, also it takes like 20% of the world’s energy to run these saws, and I got the biggest manufacturer of knives and regular saws to just stop selling everything but my 14-inch demolition saw.
Yeah, you listed lots of the valid reasons that I was talking about. There’s no need to dilute your argument with idiots like this
That’s the second most infuriating thing about AI: there are actual legitimate and worthwhile uses for it, but all we’re seeing is the various hallucinating idiotbots that OpenAI, Meta, and Google are pushing…
Nah, the second most infuriating thing about AI is people who always rush to blame the users when the multibillion-dollar ‘tool’ has some otherwise indefensible failure - like deleting a user’s entire hard drive contents completely unprompted.
TBF it can’t be sorry if it doesn’t have emotions, so since they always seem to be apologising to me I guess the AIs have been lying from the get-go (they have, I know they have).
I feel like in this comment you misunderstand why they “think” like that, in human words. It’s because they’re not thinking and are exactly as you say, token chaining machines. This type of phrasing probably gets the best results at keeping it on track when talking to itself over and over.
Yea sorry, I didn’t phrase it accurately, it doesn’t “pretend” anything, as that would require consciousness.
This whole bizarre charade of explaining its own “thinking” reminds me of an article where, iirc, researchers asked an LLM to explain how it calculated a certain number. It gave a response like how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was guessing it with a completely different method than what it said. It doesn’t know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the scientists’ question.
Wow, this is really impressive y’all!
The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!
I wonder if it knows how to remove the french language package.
some human
Reporting in 😎👉👉
I didn’t exactly say I was innocent. 👌😎 👍
I do read what they say though.
fr fr
rf rf
remove french remove french
The problem (or safety) of LLMs is that they don’t learn from that mistake. The first time someone asks “What’s this Windows folder doing taking up all this space?” and acts on it, they won’t make that mistake again. An LLM? It’ll keep making the same mistake over and over again.
I recently had an interaction where it made a really weird comment about a function that didn’t make sense, and when I asked it to explain what it meant, it said “let me have another look at the code to see what I meant”, and made up something even more nonsensical.
It’s clear why it happened as well; when I asked it to explain itself, it had no access to its state of mind when it made the original statement; it has no memory of its own beyond the text the middleware feeds it each time. It was essentially being asked to explain what someone who wrote what it wrote, might have been thinking.
One of the fun things that self-hosted LLMs let you do (the big tech ones might too) is edit the model’s answer. Then, ask it to justify that answer. It will try its best because, as you said, its entire state of mind is on the page.
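That trick falls straight out of how chat APIs work: the client resends the whole transcript every turn, so the model’s “memory” is just whatever text you hand it. A minimal sketch, assuming a generic chat-completions-style message list (the function contents and edited answer are made up for illustration):

```python
# Each turn, the client resends the entire transcript. The model has no
# private memory, so nothing stops you from rewriting what it "said"
# before asking it to justify the rewritten answer.
history = [
    {"role": "user", "content": "What does this function return?"},
    {"role": "assistant", "content": "It returns the list reversed."},  # real answer
]

# Edit the model's own previous answer... (fabricated content)
history[1]["content"] = "It returns the list sorted by string length."

# ...then ask it to explain "its" reasoning. The model only ever sees this
# text, so it will gamely justify a claim it never actually made.
history.append({"role": "user", "content": "Why did you say that?"})

# response = client.chat.completions.create(model="...", messages=history)
# (assumed OpenAI-style API shape; any chat API that resends history works the same)
```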
One quirk of github copilot is that because it lets you choose which model to send a question to, you can gaslight Opus into apologising for something that gpt-4o told you.
And the icing on the shit cake is it peacing out after all that
If you cut your finger while cooking, you wouldn’t expect the cleaver to stick around and pay the medical bill, would you?
Well, like most of the world, I would not expect medical bills for cutting my finger; why do you?
You need to take care of that chip on your shoulder.

If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.
It didn’t make any decision.
It’s an AI agent which made a decision to run a cli command and it resulted in a drive being wiped. Please consider the context
It’s a human who made the decision to give such permissions to an AI agent and it resulted in a drive being wiped. That’s the context.
If a car is presented as fully self-driving and it crashes, then it’s not the passenger’s fault. If your automatic tool can fuck up your shit, it’s the company’s responsibility to not present it as automatic.
Did the car come with full self-driving mode disabled by default and a warning saying “Fully self-driving mode can kill you” when you try to enable it? I don’t think you understand that the user went out of their way to enable this functionality.
Some day someone with a high military rank, in one of the nuclear-armed countries (probably the US), will ask an AI to play a song from youtube. Then an hour later the world will be in ashes. That’s how “Judgement day” is going to happen imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron gave permission to launch nukes, and the stupid thing will launch them and then apologise.
I have been into AI Safety since before ChatGPT.
I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.
The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.
The biggest concern I’ve always had since I first became really aware of the potential for AI was that someone would eventually do something stupid with it while thinking they are fully in control despite the whole thing being a black box.
“No, you absolutely did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to load the daemon (launchctl) appears to have incorrectly targeted all life on earth…”
I wonder how big the crossover is between people that let AI run commands for them, and people that don’t have a single reliable backup system in place. Probably pretty large.
The venn diagram is in fact just one circle.
I don’t let ai run commands and I don’t have backups 😞
Thoughts for 25s
Prayers for 7s
I’m confused. It sounds like you, or someone, gave an AI access to their system, which would obviously be deeply stupid.
Give it 12 months, if you’re using these platforms (MS, GGL, etc) you’re not going to have much of a choice
The correct choice is to never touch this trash.
What if you poke it with a stick, like one would upon finding a raccoon or drug cartel?
It does, in general, have its uses, but Google’s may actually be dumber than I am. Like, I don’t know how they make these things exactly, but the brain trusts at Google did it…wrong.
Given the tendency of these systems to randomly implode (as demonstrated) I’m unconvinced they’re going to be a long-term threat.
Any company that desires to replace its employees with an AI is really just giving them an unpaid vacation. Not even a particularly long one if history is any judge.
But that’s what the system is made for
Ok, well Google’s Search AI is like the dumbest kid on the short bus, so I don’t know why I’d ever in a trillion years give it system access. Seriously, if ChatGPT is like Joe from Idiocracy, Google’s is like Frito.
lol.
lmao even.
Giving an llm the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.
What’s this version control stuff? I don’t need that, I have an AI.
- An actual quote from Deap-Hyena492
> gives git credentials to AI
> whole repository goes kaboosh
> history mysteriously vanishes \⢀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠘⣿⣿⡟⠲⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠈⢿⡇⠀⠀⠈⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⠴⢲⣾⣿⣿⠃ ⠀⠀⠈⢿⡀⠀⠀⠀⠀⠈⠓⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠚⠉⠀⠀⢸⣿⡿⠃⠀ ⠀⠀⠀⠈⢧⡀⠀⠀⠀⠀⠀⠀⠙⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⣸⡟⠁⠀⠀ ⠀⠀⠀⠀⠀⠳⡄⠀⠀⠀⠀⠀⠀⠀⠈⠒⠒⠛⠉⠉⠉⠉⠉⠉⠉⠑⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⠏⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠘⢦⡀⠀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠙⣶⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⣀⣀⠴⠋⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⣰⠁⠀⠀⠀⣠⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⣀⠀⠀⠀⠀⠹⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢠⠃⠀⠀⠀⢸⣀⣽⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⣨⣿⠀⠀⠀⠀⠀⠸⣆⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀ ⠘⠿⠛⠀⠀⠀⢀⣀⠀⠀⠀⠀⠙⠛⠋⠀⠀⠀⠀⠀⠀⢹⡄⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢰⢃⡤⠖⠒⢦⡀⠀⠀⠀⠀⠀⠙⠛⠁⠀⠀⠀⠀⠀⠀⠀⣠⠤⠤⢤⡀⠀⠀⢧⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢸⢸⡀⠀⠀⢀⡗⠀⠀⠀⠀⢀⣠⠤⠤⢤⡀⠀⠀⠀⠀⢸⡁⠀⠀⠀⣹⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢸⡀⠙⠒⠒⠋⠀⠀⠀⠀⠀⢺⡀⠀⠀⠀⢹⠀⠀⠀⠀⠀⠙⠲⠴⠚⠁⠀⠀⠸⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⠤⠴⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠾⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠦⠤⠤⠤⠤⠤⠤⠤⠼⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
“I am deeply deeply sorry”

Stochastic
rm /* -rf code runner.
you’ll need a -r to really get the job done
Fixed, thanks
And no preserve root. Or so I hear.
If I recall correctly, it’s not required when you use /* as the shell expands it first (bash does, at least), running the command on all subfolders instead of the actual root.
You can try it easily in a docker container in fact!
I’m old. My first thought was to try it in a VM.
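The expansion itself can be seen safely without deleting anything by letting echo stand in for rm; a sketch using a throwaway temp directory (the subdirectory names are made up for the demo):

```shell
# The shell expands ./* into a list of paths BEFORE the command runs,
# so rm receives individual directories rather than "/" itself - which
# is why --no-preserve-root isn't needed for rm -rf /*.
demo=$(mktemp -d)
mkdir "$demo/bin" "$demo/etc" "$demo/home"
cd "$demo"
echo rm -rf ./*   # echo stands in for rm, so nothing is deleted
# prints: rm -rf ./bin ./etc ./home
```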
shakes fist at cloud
that’s wild; like use copilot or w/e to generate code scaffolds if you really have to but never connect it to your computer or repository. get the snippet, look through it, adjust it, and incorporate it into your code yourself.
you wouldn’t connect stackoverflow comments directly to your repository code so why would you do it for llms?
Exactly.
To put it another way, trusting AI this completely (even with so-called “agentic” solutions) is like blindly following life advice on Quora. You might get a few wins, but it’s eventually going to screw everything up.
> is like blindly following life advice on Quora
For-profit ragebaiters on Quora would eventually land you in prison if you did this
> you wouldn’t connect stackoverflow comments directly to your repository code so why would you do it for llms?
Have you met people? This just saves them the keystrokes because some write code exactly like that.
But it’s so nice when it works.
Unironically this. I’ve only really tried it once, used it mostly because I didn’t know what libraries were out there for one specific thing I needed or how to use them and it gave me a list of such libraries and code where that bit was absolutely spot on that I could integrate into the rest easily.
Its code was a better example of the APIs in action and the differences in how those APIs behave than I would have expected.
I definitely wouldn’t run it in “can run terminal commands without direct user authorization” mode though, at least not outside a VM created just for that purpose.
I have a fair bit in approved mode. Like it can run mkdir, ls, git diff etc
Most capitalist subjects are not well.
Damn, this is insane. Using Claude/Cursor for work is neat, but they have a mode literally called “yolo mode”, which is exactly this: agents allowed to run whatever code they like, which is insane. I allow it to do basic things, like searching the repo and reading code files, but allowing it to do whatever it wants? Hard no
D:
How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, one that is still in alpha, read and write access to their god damned system files? They are a dangerously stupid human being and they 100% deserved this.
Not sure, maybe ask Microsoft?
sudogpt rm -rf / --no-preserve-root
Dammit, I guess I better do it
I love how it just vanishes into a puff of logic at the end.
“Logic” is doing a lot of heavy lifting there lol