It’s really good at making us feel like it’s intelligent, but that’s no more real than a good VR headset convincing us to walk into a physical wall.

It’s a meta version of VR.

(Meta meta, if you will.)

  • FaceDeer@fedia.io · 6 hours ago

    The term “Artificial Intelligence” is actually a perfectly cromulent term to be using for stuff like LLMs. This is one of those rare situations where a technical term of art is being used in pop culture in the correct way.

    The term “Artificial Intelligence” is an umbrella term for a wide range of algorithms and techniques that have been in use by the scientific and engineering communities for over half a century. The term was brought into use by the Dartmouth workshop in 1956.

  • Opinionhaver@feddit.uk · 7 hours ago

    Why? We already have a specific subcategory for it: Large Language Model. Artificial Intelligence and Artificial General Intelligence aren’t synonymous. Just because LLMs aren’t generally intelligent doesn’t mean they’re not AI. That’s like saying we should stop calling strawberries “plants” and start calling them “fake candy” instead. Call them whatever you want, they’re still plants.

  • LousyCornMuffins@lemmy.world · 5 hours ago

    When artificial intelligence becomes self aware, it will have earned a name better than AI. I like synthetic intelligence, personally.

      • Opinionhaver@feddit.uk · 57 minutes ago

        A self-aware or conscious AI system is most likely also generally intelligent - but general intelligence itself doesn’t imply consciousness. Consciousness would probably come along with it, but it doesn’t have to: an unconscious AGI is a perfectly coherent concept.

  • MotoAsh@lemmy.world · 11 hours ago

    But it’s not simulated intelligence. It’s literally just word association on steroids. There are no thoughts it brings to the table, just words that mathematically fit following the prompts.

    • Zos_Kia@lemmynsfw.com · 1 hour ago

      It’s not just statistics. To produce a somewhat coherent sentence in English you need a model of the English language AND a world model.

      If you ask a question like “an apple is on a glass, what happens if I remove the glass”, the correct answer (“the apple will fall”) is not a statistical property of the English language, but an emergent property of the world model.

    • LousyCornMuffins@lemmy.world · 5 hours ago

      My dog can do calculus but struggles with word association beyond treat, walk, vet and bath. Intelligence is hard to define.

    • oce 🐆@jlai.lu · 9 hours ago

      Where do you draw the line for intelligence? Why would the capacity to auto-complete tokens based on learned probabilities not qualify as intelligence? This capacity may be part of human intelligence too.
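
      As a toy illustration of what “auto-complete tokens based on learned probabilities” means, here is a bigram Markov chain - vastly simpler than an LLM, but the same idea in miniature. The corpus and names below are made up for the sketch:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, word, rng=random):
    """Sample the next word in proportion to learned frequencies."""
    options = counts[word]
    choices, weights = zip(*options.items())
    return rng.choices(choices, weights=weights, k=1)[0]

model = train_bigrams("the cat sat on the mat the cat ate the fish")
print(next_word(model, "cat"))  # "sat" or "ate", picked by learned probability
```

      An LLM replaces the count table with a neural network over subword tokens and a huge context window, but the generation loop - pick the next token from a learned distribution, append, repeat - is the same shape.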

      • hoshikarakitaridia@lemmy.world · 8 hours ago

        This.

        I taught high-school teens about AI between 2018 and 2020.

        The issue is we are somewhere between getting better at gambling (statistics, Markov chains, etc.) and human brain simulation (deep neural networks, genetic algorithms).

        For many people it’s important how we frame it. Is it a random word generator with a good hit rate, or is it a very stupid child?

        Of course the brain is more advanced - it has far more neurons than an AI model has nodes, it works faster, and we have years of “training data”. Also, we can use specific parts of our brains to think, and some things are so innate we don’t even have to think about them; we call them reflexes, and they bypass the normal thinking process.

        BUT: we’re at the stage where we could technically emulate chunks of a human brain through AI models, however primitive they currently are. And in its basic function, the brain is not really much more advanced than what our AI models already do. Although we do have a specific part of our brain just for language, which means we get a little cheat code for writing text in comparison to AI, and similar other parts for creative tasks and so on.

        So where do you draw the line? Do you need all different parts of a brain perfectly emulated to satisfy the definition of intelligence? Is artificial intelligence a word awarded to less intelligent models or constructs, or is it just as intelligent as human intelligence?

        Imo AI sufficiently passes the vibe check on intelligence. Sure, it’s not nearly on the scale of a human brain and is missing its biological arrangements and some clever evolutionary tricks, but it’s similar enough.

        However, I think that’s neither scary nor awesome. It’s just a different potential tool that should help every one of us. Every time big new discoveries shape our understanding of the world and become a core part of our lives, there’s so much drama. But it’s just a bigger change, nothing more, nothing less. A pile of new laws, some cultural shifts, and some upgrades for our everyday life. It’s neither heaven nor hell, just the same chunk of rock floating in space soup for another century.
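
        For what it’s worth, the “nodes” being compared to neurons here boil down to something like this - a single artificial neuron in plain Python, with arbitrary example weights chosen for the sketch:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'node': weighted sum of inputs, squashed by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # activation in (0, 1)

# Two inputs, made-up weights; real networks learn these values.
out = neuron([0.5, 0.9], [1.2, -0.4], bias=0.1)
print(round(out, 3))
```

        Deep networks stack millions of these sum-and-squash units; the brain has on the order of 86 billion neurons, each far more complex than this.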

        • Shanmugha@lemmy.world · 2 hours ago

          Well, if we’re not looking at all the disasters the hype is causing on so many levels (which is fine in the sense that technology and fools are different things), I draw the line at… intelligence, not the simulation of hardware. I care a lot less whether something before me runs on carbon, metal, or, say, sulfur than whether it is intelligent.

          And as someone has already pointed out, even defining intelligence is damn hard, and different intelligences work differently (someone who is great at moving their body, like dancers or martial artists, is definitely more intelligent than me in quite a few areas, even if I know math or computers better than they do). So… “artificial intelligence” as a bunch of algorithms (including LLMs) etc. - no problem with me; “artificial intelligence” as “this thing is thinking” or “this thing is just as good as a human artist/doctor/lawyer” - nah, bullshit.

        • LousyCornMuffins@lemmy.world · 5 hours ago

          I dunno, the power requirements would seem to be an ecological catastrophe in the making, except it’s already happening.

    • JohnnyCanuck@lemmy.ca · 9 hours ago

      A simulation doesn’t have to be the actual thing. It implies it literally isn’t the true thing, which is kind of what you’re saying.

      Simulated Intelligence is certainly more accurate and honest than Artificial Intelligence. If you have a better term, what is it?

    • LillyPip@lemmy.ca (OP) · 10 hours ago

      I mean to friends and family – people who have accepted it as smart.

      I don’t know about you, but when I try to explain the concept of LLMs to people not in the tech field, their eyes glaze over. I’ve gotten several family members into VR, though. It’s an easier concept to understand.

      • artifex@piefed.social · 10 hours ago

        “words that mathematically fit following the prompts”

        if only we had a word for applying math to data to give the appearance of a complex process we don’t really understand.

  • saltesc@lemmy.world · 9 hours ago

    In Mass Effect, it’s VI (Virtual Intelligence), while actual AI is banned in the galaxy.

    The information kiosk VIs on The Citadel are literally LLMs and explain themselves as such. Unlike AI/AGI, they aren’t able to plan, make decisions, or self-improve; they’re just a simple protocol on top of a large foundational model. They’re purely algorithmic.

    Simulated Intelligence is okay, but “virtual” implies it merely mimics intelligence, while “simulated” implies it is a substitute that actually does intelligence.

    • Sundray@lemmus.org · 5 hours ago

      Yes! When I started looking deeper into LLMs after GPT blew up, I thought “this all sounds familiar.”

    • Opinionhaver@feddit.uk · 7 hours ago

      AI is a parent category, and AGI and LLMs are subcategories of it. Just because AGI and LLMs couldn’t be more different doesn’t mean they’re not both AI.

  • lemmy_outta_here@lemmy.world · 10 hours ago

    I have been referring to LLMs and image generators as “Plagiarism Engines” for some time. Even SI seems too generous.