Just as some people assume that whatever ChatGPT and the like tell them is true, how many of those who ask an AI for images will believe the results depict a reality the AI somehow had access to, thanks to an omniscience these people falsely attribute to it?

Can you imagine? A person convinced that the fake images they create are real in some twisted way? I’m guessing it will be a very small number, but I doubt it will be zero.

    • Opinionhaver@feddit.uk · 2 days ago

      In their defense: there’s no way to prove that these systems aren’t sentient either. We assume they’re not - and that’s likely true - but we could be wrong, because there’s no definitive way to measure sentience, not even in humans.

      • SoupBrick@pawb.social · 2 days ago

        I would love to believe that AI is here, but the current technology we are using is just not there yet. Until I see irrefutable evidence that LLMs are sentient, I am going to remain skeptical.

        Believing that what we currently have is sentient and possibly new life is falling for the marketing ploys of the corpos trying to make massive amounts of money off investors.

        https://algocademy.com/blog/why-ai-can-follow-logic-but-cant-create-it-the-limitations-of-artificial-intelligence/

        AI systems are fundamentally limited by their training data. They cannot truly create logic that goes beyond what they’ve been exposed to during training. While they can combine existing patterns in new ways, giving the appearance of creativity, they cannot make the kind of intuitive leaps that characterize human innovation.

        • Opinionhaver@feddit.uk · 2 days ago

          LLMs are AI. While they’re not generally intelligent, they still fall under the umbrella of artificial intelligence. AGI (Artificial General Intelligence) is a subset of AI. Sentience, on the other hand, has nothing to do with it. It’s entirely conceivable that even an AGI system could lack any form of subjective experience while still outperforming humans on most - if not all - cognitive tasks.

  • Opinionhaver@feddit.uk · 2 days ago

    Images generated by AI are only “fake” if you falsely present them as actual photographs or as digital art made by a human. There’s nothing inherently fake about AI-generated images as long as they’re correctly labeled.

    Also, suggesting that all information provided by generative AI is false is just as bizarre. It makes plenty of errors and shouldn’t be blindly trusted, but the majority of its answers are factually correct.

    This kind of ideological, blanket hatred toward generative AI isn’t productive. It’s a tool - nothing more, nothing less - and it should be treated as such. Not as what you hoped it would be or what marketing hype wants you to believe it is or will become.

    • NONE@lemmy.worldOP · 1 day ago

      Well, when I said “fake” I was referring specifically to those images that are passed off as real. I assumed that was implicitly understood, although maybe I didn’t make myself clear enough.