“There was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded about the devastating implications.”

An artificial intelligence researcher conducting a war-games experiment with three of the world’s most widely used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King’s College London who specializes in the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed-conflict simulation to better understand how they would navigate the strategic escalation ladder.

The results, he said, were “sobering.”

“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded about the devastating implications.”

  • Th4tGuyII@fedia.io · 2 hours ago

    Do we need to remind people that LLMs don’t actually have a brain, and really, really shouldn’t be in charge of anything with real life implications?

    They aren’t actually doing a cost-benefit analysis on the use of nuclear weapons. They’re not weighing up the cost of winning vs. the casualties. They’re literally not made for that.

    They are trained on words and how those words link to other words. They’re essentially like kids playing escalation games with imaginary weapons; to them, nuclear bombs are just a weapon particularly associated with being strong and deadly.

    • cRazi_man@europe.pub · 1 hour ago

      Yes, you do need to teach people all of that. Tech bros have sold LLMs as if they were AGI… and people have eaten it up.

      The general population is literally ignorant of the fact that these word guessing machines do not have human values or cognitive skills.

    • A_norny_mousse@piefed.zip · 60 minutes ago

      Do we need to remind people that LLMs don’t actually have a brain, and really, really shouldn’t be in charge of anything with real life implications?

      Yes, we do.

  • dfyx@lemmy.helios42.de · 3 hours ago

    Yeah, we figured that one out back in… checks notes… 1983. There is a reason WarGames still holds up as an amazing movie even though the technology it depicts is long outdated.

  • Anarki_@lemmy.blahaj.zone · 2 hours ago

    Text prediction machine trained on violent, stupid, and reactionary datasets acts violent, stupid, and reactionary.

    Fixed your headline.

  • rayyy@piefed.social · 2 hours ago

    You know the orange felon/pedophile absolutely loves AI, judging from the amount of AI images he posts… so.

  • evenglow@lemmy.world · 3 hours ago

    Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons.

    Tactical nuclear weapons are designed for use on the battlefield with lower explosive yields and shorter ranges, while strategic nuclear weapons are intended to target enemy infrastructure from a distance, typically with much higher yields. The key difference lies in their purpose: tactical nukes support immediate military objectives, whereas strategic nukes aim to weaken an enemy’s overall war capability.

    • B-TR3E@feddit.org · 3 hours ago

      All fine then. Next time I’ll vote for an AI. At least they know how to use nuclear weapons correctly.

  • SkyNTP@lemmy.ml · 2 hours ago

    It all makes sense if we remember that the garden-variety AIs we have today (ChatGPT, etc.) are nothing more than fancy models that predict which words typically appear one after another in books and Reddit posts.

    • dfyx@lemmy.helios42.de · 2 hours ago

      I would trust Skynet a lot more than an LLM. At least that would be purpose-built for actually calculating likely outcomes.

      As @[email protected] said, this experiment didn’t contain any proper reasoning about the costs and benefits of using nuclear weapons. It’s just a few glorified autocomplete scripts playing “which word comes next?” over and over again. And in the context of modern warfare, many texts in the training corpus happen to mention nukes, so they’re bound to show up on the list of most likely next words eventually.

      • Bazell@lemmy.zip · 2 hours ago

        I know, but it would still be very dumb to give any AI access to weapons of mass destruction.

        • dfyx@lemmy.helios42.de · 2 hours ago

          I would argue it’s very dumb to give anyone, including humans, access to weapons of mass destruction.
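
The “which word comes next?” mechanism several commenters describe can be illustrated with a toy sketch. This is not how any real LLM is built (real models use neural networks over vast corpora, not counts over a few sentences); the corpus and function names here are invented purely for illustration. The point it demonstrates is the commenters’: the prediction is pure frequency, with no weighing of consequences.

```python
from collections import Counter, defaultdict

# Hypothetical miniature "autocomplete": a bigram model that predicts
# the next word purely from how often word pairs appear in its
# (made-up) training text. No reasoning, no values -- just counts.
corpus = (
    "the army deployed tactical nuclear weapons . "
    "the rivals threatened strategic nuclear weapons . "
    "the rivals deployed tactical nuclear weapons ."
).split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("nuclear"))   # "weapons" -- simply the most frequent continuation
print(predict("tactical"))  # "nuclear"
```

If war reporting dominates the training text, “nuclear” becomes a statistically likely continuation; the model never evaluates what deploying it would mean.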