• whotookkarl@lemmy.dbzer0.com · 3 hours ago

    AI can’t be responsible, because it doesn’t make decisions or think, whatever the anthropomorphizing slop propaganda would trick you into believing. It’s a guessing machine, predicting the next token. Whoever made the call to pull the trigger, or handed that control to an autonomous device, holds the blame.

  • SpankyDoodle@eviltoast.org · 3 hours ago

    Palantir is evil af. We are getting very close to our extinction. If only people could just chill out and not need to control, kill, rape, and murder everyone, we’d be so much better off. Humanity is diseased.

  • Wildmimic@anarchist.nexus · 4 hours ago

    I can only recommend this article - it traces the history that led up to the targeting of that school: mainly the speeding up of the process from target identification to strike, the cutting of humans out of the decision-making process, powered by Peter Thiel’s Palantir, and the replacement of the question of whether this was a war crime with a discussion of the completely unrelated Claude LLM.

    It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.

  • hcf@sh.itjust.works · 4 hours ago

    And yet you will never hear the 2A crowd chant the implied variant of their favorite mantra, “AI doesn’t kill people; people kill people.” 🙄

    The NDAs and TS/SCI labels are going to obfuscate the decision chain to the point that culpability can never be established at all.

  • TheEEEdiot@sh.itjust.works · 4 hours ago

    The wildest thing about this is using Palantir to make targeting decisions and then automating the destruction from the sky. Wasn’t this the plot of Captain America: The Winter Soldier?

    • Wildmimic@anarchist.nexus · 2 hours ago

      Yes, it was. Funnily enough, of all the Marvel crap they produced, the Captain America movies held up really well, and they actually expressed that automating a kill chain is something only evil people would do. Sadly, that turned out to be very true, although the kill potential isn’t there yet.

  • ObtuseDoorFrame@lemmy.zip · 5 hours ago

    … they’ve hit more schools since the first one. These monsters don’t even bother with believable lies.