• grue@lemmy.world · 90 points · 12 hours ago

    I’m sad that the relevant xkcd is kinda obsolete now (because it’s been long enough for that research team to finish doing its thing).

    • GraniteM@lemmy.world · 29 points · 11 hours ago

      Google Photos is alarmingly good at object and individual recognition. It’ll probably be used by the droid war killbots to distinguish “robot” from “human with bucket on head.”

    • NeatNit@discuss.tchncs.de · 6 up / 6 down · 12 hours ago

      What would be a “nearly impossible” task in this post-AI world? Short of the provably impossible tasks like the busy beaver problem (and even then, you would be able to make an algorithm that covers a subset of the problem space), I really can’t think of anything.

        • Tlaloc_Temporal@lemmy.ca · 3 points · 6 hours ago

          I think the more important property is non-chaotic answers. It doesn’t matter much if they’re not identical, as long as the content is roughly the same. But if trivial changes in prompt wording can produce significantly different answers, that really does break things.

          Still doesn’t mean it’s correct though.

        • Vigge93@lemmy.world · 6 points · 10 hours ago

          Most AI is deterministic; only a small subset of AI systems are non-deterministic, and in those cases it’s usually by design. Also, in many cases the model itself is deterministic, but we choose to use its output in a non-deterministic way. E.g. the model outputs probabilities, and will always give the same probabilities for the same input, but instead of always choosing the class with the highest probability, we sample according to the probability weights, which makes the final output non-deterministic.

          Tl;dr: Non-determinism in AI is often not an inherent property of the model, but a choice in how we use it.
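
          That distinction can be sketched in a few lines of Python (the labels and probabilities below are made up for illustration; a real model would produce them from its input):

```python
import random

# Toy class probabilities standing in for a deterministic model's output;
# the same input always yields the same probabilities.
probs = {"cat": 0.6, "dog": 0.3, "bird": 0.1}

def pick_greedy(p):
    # Deterministic use: always take the highest-probability class.
    return max(p, key=p.get)

def pick_sampled(p, rng=random):
    # Non-deterministic use: sample a class according to its weight.
    labels, weights = zip(*p.items())
    return rng.choices(labels, weights=weights, k=1)[0]

# Greedy decoding is reproducible run after run; weighted sampling
# generally is not (unless the random seed is fixed).
assert all(pick_greedy(probs) == "cat" for _ in range(17))
```

          Same model output both times; only the selection rule differs.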

          • hemko@lemmy.dbzer0.com · 5 points · 9 hours ago

            Okay, that’s probably fair. I’ve only been working with LLMs, which are extremely non-deterministic in their answers: ask the same question 17 times and the answers will vary.

            You can ask an LLM to generate OpenTofu scripts for deploying infrastructure from the same architectural documents 17 times, and you’ll get 17 different answers. Even if some, most, or all of them get the core principles right and follow industry best practices in details that were not specified (things we usually consider obvious, such as enforcing TLS 1.2), you still get large differences in the actual code generated.

            As long as we cannot trust that the output is deterministic, we can’t truly trust that what we request from the LLM is what we actually get, so human verification is still required.

            If we write IaC for OpenTofu or the like ourselves, we can reasonably trust that what we specify is what we will receive; but with the ambiguity of an LLM, we currently can’t be sure whether the AI is filling in gaps we didn’t know about. With a known provider, say azurerm, we can always tell which defaults we left unspecified.
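
            The “same prompt, 17 runs” check could be mechanised by fingerprinting each generated script and counting distinct results; more than one fingerprint means the runs diverged and need review. A sketch, with made-up stand-ins for the generated scripts:

```python
import hashlib

def fingerprint(text: str) -> str:
    # Hash a generated script so whole runs can be compared cheaply.
    return hashlib.sha256(text.encode()).hexdigest()[:12]

# Stand-ins for repeated LLM generations; here one of three differs.
outputs = [
    'resource "example" "a" { min_tls_version = "1.2" }',
    'resource "example" "a" { min_tls_version = "1.2" }',
    'resource "example" "a" {}',  # silently dropped the unspecified detail
]

unique = {fingerprint(o) for o in outputs}
# A single fingerprint would mean byte-identical output on every run;
# anything more flags divergence for human review.
assert len(unique) == 2
```

            A byte-identical check is deliberately strict: semantically equivalent but differently formatted scripts would still flag, which matches the point that the variance itself is the problem.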

          • sudoMakeUser@sh.itjust.works · 5 points · 9 hours ago

            It’s still going to be non-deterministic for any commercial AI offered to us. It’s a weird technology. I had a link to an article explaining why, but I can’t find it anymore.

      • Fatal@piefed.social · 3 up / 1 down · 7 hours ago

        I think 100% autonomous robotics and driving are still at least 5–10 years away, even with large research teams working on them. I mean truly robust AI that can handle any situation you throw at it with zero intervention needed.