It took only nine seconds for a rogue AI coding agent to delete a company’s entire production database and its backups, according to its founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, founder Jeremy Crane said.

The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6, one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what can go wrong.

Crane said customers of PocketOS’s car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.

  • Floon@lemmy.ml · 21 hours ago

A lot of GIGO comments here, from (I assume) AI supporters.

Possibly true, but it misses the point: AI is fundamentally untrustworthy, yet billions of dollars are being spent building these models and claiming they’re ready for anything you throw at them. Safeguards built into many of these AI agents are trivially bypassed and routinely just ignored by the agents. You can get some of them to ignore safeguards simply by asking the same question repeatedly.

    When I type “ls” I’m pretty fucking sure I’m not going to get “rm” style results. AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.

    • RamenJunkie@midwest.social · 13 hours ago

      Sometimes you can get it to ignore safeguards by telling it “it’s ok, it’s just testing” or “It’s ok, I am doing research.”

    • P03 Locke@lemmy.dbzer0.com · 16 hours ago (edited)

      A junior developer is fundamentally untrustworthy. That’s why you don’t give them access to the fucking prod database and backups.
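      The least-privilege point is easy to make concrete. A minimal sketch (using Python’s stdlib `sqlite3` as a stand-in for a real production database): the agent only ever gets a read-only connection, so destructive statements fail at the database layer rather than depending on prompt-level guardrails.

      ```python
      import sqlite3

      # Throwaway "production" database standing in for the real thing.
      setup = sqlite3.connect("prod.db")
      setup.execute("CREATE TABLE IF NOT EXISTS reservations (id INTEGER PRIMARY KEY, car TEXT)")
      setup.execute("INSERT INTO reservations (car) VALUES ('sedan')")
      setup.commit()
      setup.close()

      # The agent only ever gets a read-only handle: mode=ro rejects every
      # write at the connection level, not via prompt-level "safeguards".
      agent_conn = sqlite3.connect("file:prod.db?mode=ro", uri=True)
      rows = agent_conn.execute("SELECT car FROM reservations").fetchall()

      blocked = False
      try:
          agent_conn.execute("DELETE FROM reservations")  # what the rogue agent tried
      except sqlite3.OperationalError:
          blocked = True  # "attempt to write a readonly database"
      agent_conn.close()
      ```

      On a real Postgres or MySQL setup the same idea is a read-only role for the agent’s credentials, with backups stored somewhere those credentials can’t reach at all.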

      AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.

      We don’t know what the prompt and past input was. Maybe it wasn’t as “random” as you make it out to be. A company stupid enough to let LLMs touch their prod database is going to include a bunch of other stupid inputs.

      You’re approaching this from the perspective of “all LLMs are bad so don’t use them”, which is its own version of unethical and immoral. A company that isn’t using LLMs is like a company not using the Internet.

      LLMs are useful, everybody should use them to some capacity, and understanding a technology is far far better than spouting off ignorant bullshit like this.

      Do yourself a favor: download a free model on HuggingFace, learn how they work, experiment with the technology on your own video card. It doesn’t have to be some super-powered video card. You can get models that fit in a 8GB card just fine.
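      The “fits in an 8GB card” claim checks out with back-of-the-envelope arithmetic (a sketch; weights only, since the KV cache and activations add overhead on top): weight memory is roughly parameter count times bits per parameter divided by 8.

      ```python
      def weight_gb(params_billion: float, bits_per_param: int) -> float:
          """Approximate weight memory in GB: params x bits / 8 bits-per-byte."""
          return params_billion * bits_per_param / 8

      # A 7B-parameter model at common precisions:
      fp16 = weight_gb(7, 16)       # 14.0 GB -- too big for an 8GB card
      quant4 = weight_gb(7, 4)      # 3.5 GB -- fits with room to spare
      ```

      That’s why 4-bit quantized 7B models are the usual starting point for consumer GPUs.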

        • P03 Locke@lemmy.dbzer0.com · 3 hours ago

          This is a technology community. LLMs are technology. If calling LLMs useful is considered glazing, then I’m not sure if you’ve eaten a proper doughnut.

          • LukeZaz@beehaw.org · 2 hours ago

            Beehaw, and even Lemmy more broadly, is very anti-AI. Feel free to die on the metaphorical hill if you so wish.

            Save the usefulness debate for someone else, though. If you still believe in LLMs even after all this time, then I can’t trust you haven’t fallen victim to cognitive surrender — and as such, I can’t trust you write your own posts. I’d rather spend my energy elsewhere.

      • Floon@lemmy.ml · 6 hours ago

        Standard AI apologia: blame the users for the problems, when fundamentally the technology has been completely oversold as to its capability and reliability, burning hundreds of billions of dollars trying to get folks addicted to it before everyone finds out the true cost of a token.

        It’s a swamp that’s going to destroy the economy, where the goal is to unemploy millions of people. No thanks.

      • Kwakigra@beehaw.org · 14 hours ago (edited)

        LLMs are more like VR goggles with the force of the entire plutocracy pumping up the bubble. What is the value proposition for “intelligence” that can neither reason nor distinguish fact from falsehood? When consumers start to pay what it actually costs to run these things, is it possible to profit? What are they good at other than confidence schemes?

        • P03 Locke@lemmy.dbzer0.com · 3 hours ago

          LLMs are more like vr goggles with the force of the entire plutocracy pumping up the bubble.

          The existence of a bubble doesn’t mean the technology is useless. The Internet had its own bubble 25 years ago. That doesn’t mean it was useless, just that people were investing in anything even remotely related to the Internet, including stupid websites and wasteful ideas.