‘It’s just parroting the training data!’ That’s supposed to be reassuring??

  • Grail@multiverse.soulism.net (+6/−6) · 2 days ago

    We should not be using these machines until we’ve solved the hard problem of consciousness.

    I see a lot of people say “It can’t think because it’s a machine”, and the only way this makes sense to Me is as a religious assertion that only flesh can have a soul.

      • Andy@slrpnk.net (+3) · edited · 2 days ago

        I actually kinda agree with this.

        I don’t think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.

        I used to listen to this podcast called “You Are Not So Smart”. I haven’t listened in years, but now that I’m thinking about it, I should check it out again.

        Anyway, a central theme is that our perceptions are composed heavily of self-generated delusions that fill the gaps left by dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren’t that great, so our brains fill in details that aren’t there. Our decision making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly sometimes seem surprisingly similar to LLMs) and then apply wild overconfidence to fabricated memories.

        We’re interesting creatures, but we’re ultimately made of the same stuff as goldfish.

      • Grail@multiverse.soulism.net (+4/−1) · edited · 2 days ago

        Yeah, you’re right. Humans get really weird and precious about the concept of consciousness and assign way too much value and meaning to it. Which is ironic, because they spend most of their lives unconscious and on autopilot. They find consciousness to be an unpleasant sensation and go to great lengths to avoid it.

    • MintyAnt@lemmy.world (+1) · 2 days ago

      In theory, a machine could one day think.

      LLMs, however, do not think, even though ChatGPT uses the term “thinking”. They don’t think.

          • Grail@multiverse.soulism.net (+1) · 2 days ago

            Extrapolating from information.

            My calculator can extrapolate 5 when I give it 2, 3, and a plus sign. So can an LLM. My calculator uses some adder circuits in its ALU to get the 5. The LLM gets it by predicting the next likely token, the same way your brain works most of the time. Your brain’s a lot more advanced, though, and can find the 5 in many different ways. Likely tokens are just the most convenient. Cognitive scientists call that “System 1”, though you might know it as “fast brain”. LLMs only have System 1. They don’t have System 2, the slow brain. Your System 2 can slow down and logic out the answer. If I ask you to solve the problem in binary, like My calculator does, you probably have to use System 2.
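            (For the curious, here’s a rough sketch of the adder-circuit idea, in Python rather than silicon. Everything here is illustrative, including the function names; a real ALU does this with logic gates in hardware.)

            ```python
            # Illustrative only: full-adder logic of the kind a calculator's
            # ALU implements in hardware. Each step adds two bits plus a carry.

            def full_adder(a, b, carry):
                """Add bits a and b plus a carry-in; return (sum bit, carry-out)."""
                total = a + b + carry
                return total % 2, total // 2

            def add_binary(x, y, width=8):
                """Ripple-carry addition: chain full adders from the low bit up."""
                result, carry = 0, 0
                for i in range(width):
                    bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                    result |= bit << i
                return result

            print(add_binary(2, 3))  # 5 -- 0b010 + 0b011 = 0b101
            ```

            An LLM has no such circuit; it just emits whichever token its training made most likely to follow “2 + 3 =”.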

            The question you should be asking is: does System 1 experience qualia? And based on split-brain studies in participants who have undergone corpus callosotomy, I believe the answer is yes. Of course, the right brain isn’t the same thing as System 1, but what these studies demonstrate is that there are thinking parts of your brain that you can’t hear. So I’d err on the side of caution with these System 1 machines.