They warn us that future artificial intelligence will wipe out humanity. This may be a lie with ulterior motives.
An interesting criterion; why does going back to edit matter, as opposed to correcting itself mid-stream?
I suppose those would be equivalent, I just haven’t seen it done (at least not properly). The example you posted earlier with the siblings, for instance, showed how it could only append more text and not actually produce corrections.
Couldn’t you perform this test on any animal with a discrete brain?
Oh, right. Animals do exist. It simply hadn’t occurred to me at that moment, even though there is one right next to me taking a nap. However, a lot of them are capable of more rational thought than LLMs are; even bees can count reasonably well. Anyway, defining human-level intelligence is a hard problem. Determining it is even harder, but I still say it’s feasible to say some things aren’t it.
[Garden path sentences]
No good. The difference between a good garden path and simple ambiguity is that the ‘most likely’ interpretation when the reader is halfway down the sentence turns out to be ungrammatical or nonsense by the end. The way LLMs work, they don’t like to put words together in an order in which they don’t usually occur, even if in the end there’s a way to interpret it that makes sense.
The example it made with the keys is particularly bad because the two meanings are nearly identical anyway.
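To make that tendency concrete, here’s a toy sketch; this is not how any real model works or was tested, just a tiny bigram model standing in for an LLM, with an invented three-line corpus. A garden-path word order has to chain word-to-word transitions that are rare in the training data, so a model that always prefers the ‘most likely’ continuation steers away from ever producing it:

```python
from collections import Counter

# Invented toy corpus; all words and counts are made up for illustration.
corpus = [
    "the old man walked home",
    "the old man walked slowly",
    "the crew man the boats",
]

unigrams = Counter()
bigrams = Counter()
for sent in corpus:
    words = ["<s>"] + sent.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

VOCAB = len(unigrams)

def sentence_prob(sentence, alpha=0.1):
    """Product of add-alpha smoothed P(word | previous word) over the sentence."""
    words = ["<s>"] + sentence.split()
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        p *= (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * VOCAB)
    return p

# "the old man walked home" chains common transitions; the garden-path
# order "the old man the boats" (with "man" as a verb) needs the rare
# transition "man -> the", so the model scores it much lower and, when
# generating, would avoid that word order entirely.
print(sentence_prob("the old man walked home"))
print(sentence_prob("the old man the boats"))
```

The garden-path order comes out less probable even though it’s perfectly grammatical, which is roughly why an LLM is reluctant to write one on its own.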
Just for fun I’ll try to make one here:
“After dealing with the asbestos, I was asked to lead paint removal.”
Might not work; the intended interpretation could be too obvious compared to the toxic-metal one, but it has the right structure.
“While the man hunted the deer ran into the forest”
It actually looked too good to me to be an original creation from an LLM, and sure enough it’s not. (About halfway down.)
I was actually looking up the one about the horse when I found that page.
I gotta go for now, but one quick note: