The US dictionary Merriam-Webster’s word of the year for 2025 was “slop”, which it defines as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence”. The choice underlined the fact that while AI is being widely embraced, not least by corporate bosses keen to cut payroll costs, its downsides are also becoming obvious. In 2026, AI’s reckoning with reality represents a growing economic risk.

Ed Zitron, the foul-mouthed figurehead of AI scepticism, argues pretty convincingly that, as things stand, the “unit economics” of the entire industry – the cost of servicing the requests of a single customer against the price companies are able to charge them – just don’t add up. In typically colourful language, he calls them “dogshit”.
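To see what those unit economics mean in practice, the sketch below computes a per-customer margin from an assumed inference cost and subscription price. Every number is hypothetical, chosen only to show the shape of the calculation, not to describe any real provider.

```python
# Hypothetical unit-economics sketch: per-customer margin for an AI service.
# All figures are invented for illustration; real costs and prices vary widely.

COST_PER_REQUEST = 0.04      # assumed compute cost ($) to serve one request
REQUESTS_PER_MONTH = 800     # assumed usage by one heavy subscriber
SUBSCRIPTION_PRICE = 20.00   # assumed flat monthly price ($)

serving_cost = COST_PER_REQUEST * REQUESTS_PER_MONTH   # $32.00/month
margin = SUBSCRIPTION_PRICE - serving_cost             # -$12.00/month

print(f"cost to serve: ${serving_cost:.2f}, margin: ${margin:.2f}")
# Under these assumptions the provider loses $12 per heavy user per month --
# the "dogshit" unit economics the sceptics describe.
```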

Revenues from AI are rising rapidly as more paying clients sign up, but so far not by enough to cover the wild levels of investment under way: $400bn (£297bn) in 2025, with much more forecast in the next 12 months.

Another vehement sceptic, Cory Doctorow, argues: “These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire.”

  • CeeBee_Eh@lemmy.world

    You have no idea what the long-term impact of such a tool is on a codebase. The more it generates, the less you understand, regardless of how much you “check” the output.

    I work as a senior dev, and I’ve tested just about all the foundational models (and many local ones through Ollama) for both professional and personal projects. In 90% of the cases I’ve tested, the conclusion has been: “if I had just done the work myself from the beginning, I would have had a working result that’s cleaner and functions better, in less time.”
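    For anyone who hasn’t tried it, “testing local models through Ollama” can look as simple as the sketch below: one prompt sent to Ollama’s local REST API. It assumes Ollama is running on its default port with the named model already pulled; the model name and prompt are placeholders, not recommendations.

    ```python
    # Minimal sketch: asking a locally served model (via Ollama) to generate code.
    # Assumes Ollama is running at its default address with "codellama" pulled.
    import json
    import urllib.request

    payload = {
        "model": "codellama",   # placeholder; any locally pulled model works
        "prompt": "Write a Python function that parses an ISO 8601 date string.",
        "stream": False,        # ask for one complete JSON response
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    print(body["response"])   # the generated code -- which you still have to review
    ```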

    Generated code can work for a few lines, for some boilerplate, or for some refactoring, but anything beyond that is just asking for trouble.

    • Aceticon@lemmy.dbzer0.com

      In my experience, you need to be a senior developer, with at least some experience of your own code going through a full project lifecycle (most importantly, including the Support, Maintenance and even Expansion stages), to really feel in your bones, not just know intellectually, how much the right practices reduce lifetime maintenance costs. Those are exactly the practices that LLMs don’t reproduce, even with code reviews and fixes: even when cloning only “good” code, they can’t deliver things like consistency, especially at the design level.
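      To make the consistency point concrete, here is a small, invented illustration of the kind of design-level drift meant above: two generated snippets that each “work” in isolation but disagree on naming and error handling, which is exactly what integration and maintenance later pay for.

      ```python
      # Invented illustration: two generated snippets for the same concern that
      # each pass review alone but disagree at the design level.

      class Db:
          """Stand-in data store so the example runs on its own."""
          def __init__(self, rows):
              self.rows = rows
          def find(self, key):
              return self.rows.get(key)

      db = Db({1: {"name": "Ada"}})

      # Snippet A (one prompt): snake_case, signals failure by returning None.
      def load_user(user_id):
          return db.find(user_id)

      # Snippet B (another prompt): camelCase, same concept renamed, raises instead.
      def fetchAccount(accountId):
          record = db.find(accountId)
          if record is None:
              raise LookupError(f"no account {accountId}")
          return record

      # Together, callers must handle two error conventions and two vocabularies
      # for the same domain object -- the inconsistency no single review catches.
      print(load_user(2))         # -> None
      try:
          fetchAccount(2)
      except LookupError as err:
          print(err)              # -> no account 2
      ```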

      • Inexperienced devs just count the time cost of LLM generation and think AI really speeds up coding.
      • Somewhat experienced devs count that plus code review costs and think it can sometimes make coding a bit faster.
      • Very experienced devs look at the inconsistent, multiple-style, disconnected mess (even after code review) that appears once all those generated snippets get integrated, add the costs of maintaining and expanding that codebase to the rest, and conclude that “even in the best case, in six months this shit will already have cost me more time overall, even if I refactor it, than doing it properly myself in the first place would have” (a back-of-envelope version of that accounting is sketched below).
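      That accounting, with loudly hypothetical numbers, might look like the sketch below; the only point it makes is that the maintenance term dominates once it is counted at all.

      ```python
      # Hypothetical time accounting for one feature, in hours; all numbers invented.

      GENERATION = 0.5        # prompting and waiting for the LLM
      REVIEW = 2.0            # reading and fixing the generated code
      MAINTENANCE_LLM = 12.0  # assumed 6-month cost of an inconsistent codebase
      HAND_WRITTEN = 6.0      # writing it yourself
      MAINTENANCE_HAND = 4.0  # assumed 6-month cost of a consistent codebase

      junior_view = GENERATION                              # 0.5h: "AI is fast!"
      mid_view = GENERATION + REVIEW                        # 2.5h: "a bit faster"
      senior_view = GENERATION + REVIEW + MAINTENANCE_LLM   # 14.5h
      doing_it_yourself = HAND_WRITTEN + MAINTENANCE_HAND   # 10.0h

      print(f"LLM route: {senior_view}h vs hand-written: {doing_it_yourself}h")
      # Under these assumptions the generated code is cheaper on day one and more
      # expensive over the lifecycle -- the senior dev's conclusion above.
      ```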

      It’s very much the same problem as having junior developers do part of the coding, only worse. Junior devs are at least consistent, and hence predictable, in how they fuck up: you know what to look for, and once you find one mistake you know to look for more of the same. You can also actually teach junior developers so they get better over time, especially at avoiding their worst mistakes, whilst LLMs are unteachable and will never get better, and their mistakes are pretty much randomly distributed across the error space.

      You give coding tasks to junior devs in a controlled way, handling the impact of their mistakes, because you’re investing in them; doing the same with an LLM has a higher chance of returning high-impact mistakes and yields no such return on investment at all.

    • shalafi@lemmy.world

      “can work for a few lines, for some boilerplate, or for some refactoring”

      I highly doubt the person you’re replying to meant anything else. We’re all kinda on the same page here.

      • CeeBee_Eh@lemmy.world

        I hope so, but you’d be surprised. I know some devs who basically think LLMs can do their work for them, and treat them as such. They get them to do multi-hundred-line edits with a single prompt.