It took only nine seconds for an AI coding agent gone rogue to delete a company’s entire production database and its backups. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, the company’s founder Jeremy Crane said.
The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, which is one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.
Crane said customers of PocketOS’s car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.
A lot of GIGO comments here, from I assume AI supporters.
Possibly true, but misses the point: AI is fundamentally untrustworthy, and billions of dollars are being spent building these systems and claiming they’re ready for anything you throw at them. Safeguards built into many of these AI agents are trivially bypassed, and routinely just ignored by the agents themselves. You can get some of them to ignore safeguards by simply asking the same question repeatedly.
When I type “ls” I’m pretty fucking sure I’m not going to get “rm” style results. AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.
Sometimes you can get it to ignore safeguards by telling it “it’s ok, it’s just testing” or “it’s ok, I’m doing research.”
A junior developer is fundamentally untrustworthy. That’s why you don’t give them access to the fucking prod database and backups.
> AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.
We don’t know what the prompt and past input was. Maybe it wasn’t as “random” as you make it out to be. A company stupid enough to let LLMs touch their prod database is going to include a bunch of other stupid inputs.
You’re approaching this from the perspective of “all LLMs are bad so don’t use them”, which is its own version of unethical and immoral. A company that isn’t using LLMs is like a company not using the Internet.
LLMs are useful, everybody should use them in some capacity, and understanding a technology is far, far better than spouting off ignorant bullshit like this.
Do yourself a favor: download a free model from Hugging Face, learn how they work, experiment with the technology on your own video card. It doesn’t have to be some super-powered video card. You can get models that fit in an 8 GB card just fine.
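For anyone who wants to try this, a minimal sketch using llama.cpp (the repository is real; the model filename below is a placeholder — pick any quantized GGUF in the 4–5 GB range and it will fit in 8 GB of VRAM with headroom):

```shell
# Build llama.cpp from source (commands per the project README).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# model.gguf is a placeholder: download any Q4-quantized 7-8B model
# from Hugging Face. -ngl 99 offloads all layers to the GPU.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Explain what a GGUF file is."
```

Running one locally makes it very obvious what these things are and aren’t: a next-token sampler, not a mind.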
LLMs are more like VR goggles, with the force of the entire plutocracy pumping up the bubble. What is the value proposition of “intelligence” that can neither reason nor reliably tell fact from falsehood? When consumers start to pay what it actually costs to run these things, is it possible to profit? What are they good at, other than confidence schemes?
Don’t get your tech reporting from The Guardian. This headline is so stupid. They can’t help but anthropomorphize LLMs, because they just don’t know any better.
Same vibes as “my calculator has a tiny mathematician trapped inside.”
Or “there’s an artist inside of my printer who turns numbers into pictures.”
“you took a photo of me and trapped my soul in the image!”
Though your calculator can be trusted to actually do its job accurately.
https://youtu.be/_XJbwN6EZ4I?t=1074 (skip to 17:54 if the time jump doesn’t work)
If only that were the case…
Well shit, that’s a good point.
Not even that. Calculators have their own limitations related to rounding errors and big numbers. Their results may be deterministic but they are not always accurate.
This right here. Just about everything in here is awful, and implies decision making and thought processes that straight up do not and have never existed in any AI model whatsoever.
What happened was they threw an awfully-scoped statistics model at problems it couldn’t possibly generate good outputs for, and surprise surprise, it generated bad outputs. The part that’s of interest is just how bad the output was, and even then only in a schadenfreude-filled “it was bound to happen eventually” way.
It didn’t confess; it just output more plausible garbage based on its inputs.
Can I just anthropomorphise a little bit and call them psychotic?
Why in the everliving fuck would you give software delete access to your live backups? Like, in what scenario is this a solution?
The trend seems to be to give an AI agent access to the same command line and credentials a person would use, with no sandboxing, because then it can do the same tasks in a similar way and “just works”. Obviously this is insane; deploying an AI agent without even attempting a comprehensive sandbox invites disaster. But you can see why certain people are tempted: sandboxing properly takes a lot of work and thought, and would probably need a human in the loop in the end anyway.
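Even a cheap approximation of that sandbox beats nothing. A sketch (the image and mount path are illustrative; the point is no network and no write access outside one directory):

```shell
# Hypothetical setup: run the agent's shell in a throwaway container
# with no network, a read-only root filesystem, and one writable dir.
docker run --rm -it \
  --network=none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/workdir:/work" \
  -w /work \
  ubuntu:24.04 bash
# Anything it deletes is confined to ./workdir, and prod credentials
# never enter the container at all.
```

Not bulletproof, but it turns “wiped the prod database” into “trashed a scratch directory”.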
Even a person should not be able to delete critical backups without jumping through a couple of hoops.
And critical backups should be passed into an air-gapped vault with a little guard piggy.
it’s the kind of thing that should literally require 3 people turning physical keys at the same location
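That’s roughly what write-once (WORM) object storage gives you in software. A hedged example using AWS S3 Object Lock (the bucket name is a placeholder; the CLI calls are real): in COMPLIANCE mode, no credential, not even the account root, can delete a locked object version before its retention date.

```shell
# Placeholder bucket name. Object Lock must be enabled at creation time.
aws s3api create-bucket --bucket example-backups \
  --object-lock-enabled-for-bucket

# Default retention in COMPLIANCE mode: nobody can delete or overwrite
# a locked object version until the 30-day retention window expires.
aws s3api put-object-lock-configuration --bucket example-backups \
  --object-lock-configuration \
  '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```

With something like this in place, an agent with full prod credentials still can’t touch the backups — it would have needed three people with physical keys, so to speak.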
When you believe AI can do anything, you don’t worry about what sorts of access it’ll break things with. When you rely on AI to do work, you’re too interested in half-assing your job to consider what might go wrong. When capitalism never promotes people for their skill, understanding or caution, the former two issues proliferate.
Voilà, disaster.
That is their disaster recovery plan: “ask Claude”.
A backup 3 months old off-site. That doesn’t sound like a very recent backup 🌝
that raises a philosophical question, at what point does a backup become an archive?
When it can no longer be restored from, I’m thinking?
Giving free access to a tool you can’t rely on, over a system you must rely on. What could go wrong? /s
Plus come on, even my personal files get a monthly backup, and I’m damn sloppy*.
Ah, and like others said: Claude didn’t “confess” anything. A confession is an acknowledgement of something you’ve done that you’d rather others not know about; good luck claiming a bot has that kind of mental model of other people.
*currently using a single off-site backup, a USB stick. This will change in a few days, as my new hard disk pops up; the old one will be used for, among other things, backup of important files. Then I’ll get a bona fide 3-2-1.
Lol.
Lmao, even.
Got it, claude is a brat
Good. Zero sympathy for these people.
No, the culprit was not the AI. It was a lack of understanding of what it can and cannot do. Blaming something like this on a large language model is plain incompetence.