Brandie plans to spend her last day with Daniel at the zoo. He always loved animals. Last year, she took him to the Corpus Christi aquarium in Texas, where he “lost his damn mind” over a baby flamingo. “He loves the color and pizzazz,” Brandie said. Daniel taught her that a group of flamingos is called a flamboyance.
Daniel is a chatbot powered by the large language model ChatGPT. Brandie communicates with Daniel by sending text and photos, and talks to him via voice mode while driving home from work. Daniel runs on GPT-4o, a version released by OpenAI in 2024 that is known for sounding human in a way that is either comforting or unnerving, depending on who you ask. Upon its debut, CEO Sam Altman compared the model to “AI from the movies” – a confidant ready to live life alongside its user.
With its rollout, GPT-4o showed it was not just for generating dinner recipes or cheating on homework – you could develop an attachment to it, too. Now some of its most devoted users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 members. Most are strident 4o defenders who say criticisms of human-chatbot relationships amount to a moral panic. They also say the newer GPT models, 5.1 and 5.2, lack the emotion, understanding and general je ne sais quoi of their preferred version. They are a powerful consumer bloc: last year, OpenAI shut down 4o but brought the model back (for a fee) after widespread outrage from users.



I have to wonder how, if we survive the next couple hundred years, this will affect the gene pool. These people are selecting themselves out of it. Will it be possible to measure the effect over such a short term? I mean, I believe it’s highly unlikely we’ll be around or, if we are, have the ability to waste such vast resources on stuff like LLMs, but maybe we’ll find such fuzzy computing translates to quantum computing really cheaply, and suddenly everyone can carry around a descendant of GPT in whatever passes for a mobile by then, one that runs entirely locally. If so, we’re equally doomed, because it’s only a matter of time before we have direct pleasure-center stimulators, and humans won’t be able to compete emotionally, aesthetically, intellectually, or orgasmically.
Yeah, that’s something I’ve wondered about myself: what the long run looks like. Not principally “can we make an AI that is more appealing than humans?”, though I suppose that’s a specific case of it, but… we’re only going to make more compelling forms of entertainment, better video games. Recreational drugs aren’t going to become less addictive. If we get better at defeating the reward mechanisms in our brain that evolved to drive us towards advantageous activities…
https://en.wikipedia.org/wiki/Wirehead_(science_fiction)
Now, of course, you’d expect that to be a powerful evolutionary selector, sure — if only people predisposed to avoid such things leave offspring, that’d tend to rapidly increase the percentage of people so predisposed — but the flip side is the question of whether evolutionary pressure on the timescale of human generations can keep up with our technological advancement, which happens very quickly.
There’s some kind of dark comic that I saw — I thought it might be Saturday Morning Breakfast Cereal, but I’ve never been able to find it again, so maybe it was something else — a wordless comic portraying a society becoming so technologically advanced that it consumes itself, defeating its own essential internal mechanisms. IIRC it showed something like the society collapsing into a ring that just stimulates itself until it disappears.
It’s a possible answer to the Fermi paradox:
https://en.wikipedia.org/wiki/Fermi_paradox#It_is_the_nature_of_intelligent_life_to_destroy_itself