If we increase an LLM’s predictive utility, it becomes less interesting; but if we make it more interesting, it becomes nonsensical (since it then predicts typical human output less accurately).
Humans, however, can be interesting without resorting to randomness, because they have subjectivity, which grants them a unique perspective that artists simply attempt (and often fail) to capture.
Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.
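One concrete, if narrow, version of that tradeoff is the temperature knob in standard LLM decoding. Here is a minimal sketch; the vocabulary and logits are made up purely for illustration:

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Temperature-scaled softmax sampling (a standard LLM decoding step)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                       # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token logits after "The cat sat on the ..." -- invented for this sketch.
vocab  = ["mat", "sofa", "roof", "piano", "hypotenuse"]
logits = [4.0, 2.5, 1.5, 0.5, -1.0]

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 3.0):
    picks = [vocab[sample_token(logits, t, rng)] for _ in range(10)]
    print(f"T={t}:", " ".join(picks))
```

At low temperature the model almost always says “mat” (accurate, dull); at high temperature “hypotenuse” starts showing up (surprising, nonsensical). Predictability and interestingness sit at opposite ends of the same dial.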
Ah, but if there’s no random element to human cognition, it should produce the exact same output time and time again given the same input. What is not random is deterministic.
Biologically, there’s an element of randomness to neurons firing. If they fire too randomly, that’s a seizure. If they never fire spontaneously, you’re in a coma. How they produce ideas is nowhere close to being understood, but there’s bound to be an element of ordered patterns of firing emerging spontaneously. You can even see a bit of that with imaging.
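That “too little vs. too much spontaneous firing” point is easy to see in a toy model. The sketch below is not a biophysical claim, just a leaky integrate-and-fire unit driven only by noise, with arbitrary made-up parameters:

```python
import numpy as np

def spontaneous_spikes(noise_std, steps=10_000, threshold=1.0, leak=0.05, seed=1):
    """Toy leaky integrate-and-fire neuron driven only by Gaussian noise."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += -leak * v + rng.normal(0.0, noise_std)   # leak toward rest + random input
        if v >= threshold:
            spikes += 1
            v = 0.0                                    # reset after a spike
    return spikes

for sigma in (0.0, 0.1, 1.0):
    print(f"noise={sigma}: {spontaneous_spikes(sigma)} spikes in 10k steps")
```

With zero noise it never fires (the coma end), with large noise it fires nearly every step (the seizure end), and in between you get sparse, irregular spontaneous activity, which is the regime where ordered patterns could emerge.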
It does seem to be dead-ending as a technology, although the definition of “mind” is, as ever, very slippery.
The big AI/AGI research trend is “neuro-symbolic reasoning”, which is a fancy way of saying you embed a neural net deep inside an ordinary algorithm, where it can be usefully controlled.
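For what that shape looks like concretely, here is a minimal sketch under broad assumptions: a hand-rolled best-first search standing in for the “ordinary algorithm”, and a toy scoring function standing in for the neural net. None of this is any particular system’s API.

```python
import heapq

def neural_score(state, goal):
    """Stand-in for a trained neural net that rates how promising a state looks.
    In a real system this would be a learned model; here it's a toy heuristic."""
    return -abs(goal - state)   # higher is better

def best_first_search(start, goal, moves):
    """Ordinary best-first search: the symbolic shell owns correctness
    (legal moves, goal test, termination); the 'net' only ranks candidates."""
    frontier = [(-neural_score(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for move in moves:
            nxt = move(state)
            if nxt not in seen and 0 <= nxt <= 100:   # arbitrary legality bound
                seen.add(nxt)
                heapq.heappush(frontier, (-neural_score(nxt, goal), nxt, path + [nxt]))
    return None

# Toy planning problem: reach 37 from 1 using doubling and +1.
print(best_first_search(1, 37, moves=[lambda x: x * 2, lambda x: x + 1]))
```

The control flow stays a readable algorithm you can inspect and constrain; the learned component is confined to a single, swappable ranking call. That confinement is roughly what “usefully controlled” means.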
I didn’t say there’s no randomness in human cognition. I said that the originality of human ideas is not a matter of randomized thinking.
Randomness is everywhere. But it’s not the “randomness” of an artist’s thought process that accounts for the originality of their creative output; if anything, it’s detrimental to it.
For LLMs, the opposite is true.