We should not be using these machines until we’ve solved the hard problem of consciousness.
I see a lot of people say “It can’t think because it’s a machine”, and the only way this makes sense to me is as a religious assertion that only flesh can have a soul.
If current LLMs are conscious then consciousness is a worthless and pathetic concept.
I actually kinda agree with this.
I don’t think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.
I used to listen to this podcast called “You Are Not So Smart”. I haven’t listened in years, but now that I’m thinking about it, I should check it out again.
Anyway, a central theme is that our perceptions are heavily composed of self-generated delusions that fill in the gaps for dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren’t that great, so our brains fill in details that aren’t there. Our decision-making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly seem surprisingly similar to LLMs sometimes) and then apply wild overconfidence to fabricated memories.
We’re interesting creatures, but we’re ultimately made of the same stuff as goldfish.
Yeah, you’re right. Humans get really weird and precious about the concept of consciousness and assign way too much value and meaning to it. Which is ironic, because they spend most of their lives unconscious and on autopilot. They find consciousness to be an unpleasant sensation and go to great lengths to avoid it.
Spoiler alert:
no one has souls
A soul is a wet spiderweb made out of electricity that hangs from the inside of your skull.
In theory, a machine could one day think.
LLMs, however, do not think, even though ChatGPT uses the word “think”. They don’t think.
I once built a thinking machine out of dominoes. Mine added two bits together. Matt Parker’s was way bigger and could do 8 bits. Children have made thinking machines in Minecraft out of redstone. Thinking machines aren’t very hard.
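If you’re curious what a machine like that actually computes, here’s a minimal Python sketch of a two-bit adder (the gate decomposition is my own illustration, not the literal domino layout):

```python
# A two-bit adder built from nothing but XOR/AND/OR logic,
# the same operations a domino chain (or redstone) can implement.

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def two_bit_add(a1, a0, b1, b0):
    """Add two 2-bit numbers (a1 a0) + (b1 b0) -> a 3-bit result."""
    s0, c0 = full_adder(a0, b0, 0)   # low bit, no carry in
    s1, c1 = full_adder(a1, b1, c0)  # high bit, carry rippling up
    return c1, s1, s0

# 10 + 11 in binary, i.e. 2 + 3:
print(two_bit_add(1, 0, 1, 1))  # (1, 0, 1) = 101 binary = 5
```

That’s the whole trick: a falling domino is a bit, and a handful of gates chained together is arithmetic.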
What do you consider thinking, and why do you consider LLMs to have this capability?
Extrapolating from information.
My calculator can extrapolate 5 when I give it 2, 3, and a plus sign. So can an LLM. My calculator uses adder circuits in its ALU to get the 5. The LLM gets it by recalling the most likely next token, the same way your brain works most of the time. Your brain is a lot more advanced, though, and can find the 5 in many different ways; likely tokens are just the most convenient. Cognitive scientists call that “System 1”, though you might know it as the “fast brain”. LLMs only have System 1. They don’t have System 2, the slow brain. Your System 2 can slow down and logic out the answer. If I ask you to solve the problem in binary, like my calculator does, you probably have to use System 2.
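To make that contrast concrete, here’s a toy sketch in Python (entirely my own illustration; neither brains nor LLMs are actually implemented this way): System 1 as a memorized lookup table, System 2 as a loop that grinds out the sum bit by bit, the way a ripple-carry adder in an ALU would.

```python
# "System 1", roughly: pattern recall. Instant, but only as good as the memory.
memorized = {("2", "+", "3"): "5", ("1", "+", "1"): "2"}

def fast_answer(a, op, b):
    # On a miss it still answers, confidently and wrongly.
    return memorized.get((a, op, b), "uh... 7?")

# "System 2", roughly: slow, rule-following computation in binary.
def slow_answer(a, b):
    result, carry, bit = 0, 0, 0
    while a or b or carry:
        x, y = a & 1, b & 1                  # current bit of each operand
        result |= (x ^ y ^ carry) << bit     # sum bit
        carry = (x & y) | (carry & (x ^ y))  # carry out
        a, b, bit = a >> 1, b >> 1, bit + 1
    return result

print(fast_answer("2", "+", "3"))  # instant recall: "5"
print(slow_answer(2, 3))           # worked out bit by bit: 5
```

The lookup is fast but fails confidently outside what it has memorized; the loop is slower but works for any pair of numbers.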
The question you should be asking is: does System 1 experience qualia? Based on split-brain studies of participants who have undergone corpus callosotomy, I believe the answer is yes. Of course, the right brain isn’t the same thing as System 1, but what these studies demonstrate is that there are thinking parts of your brain that you can’t hear. So I’d err on the side of caution with these System 1 machines.