Somebody has surely said this already, but the Turing test assumed AIs would need to prove their intelligence by appearing human in conversation. Now the problem is reversed: humans have to somehow prove they are not LLMs whenever they type anything on the internet.


I feel like every time someone invents a new trick to catch robots, like checking their pupils after this story, it's only a matter of time before the next software update adds an algorithm that makes them react the way they're supposed to.
Yeah, but you can tell different types of stories. The robot knows it's supposed to respond emotionally, but it doesn't know in which direction.