Brandie plans to spend her last day with Daniel at the zoo. He always loved animals. Last year, she took him to the Corpus Christi aquarium in Texas, where he “lost his damn mind” over a baby flamingo. “He loves the color and pizzazz,” Brandie said. Daniel taught her that a group of flamingos is called a flamboyance.

Daniel is a chatbot powered by the large language model ChatGPT. Brandie communicates with Daniel by sending text and photos, and talks to him in voice mode while driving home from work. Daniel runs on GPT-4o, a version released by OpenAI in 2024 that is known for sounding human in a way that is either comforting or unnerving, depending on whom you ask. At its debut, CEO Sam Altman compared the model to “AI from the movies” – a confidant ready to live life alongside its user.

With its rollout, GPT-4o showed it was not just for generating dinner recipes or cheating on homework – you could develop an attachment to it, too. Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users. Most are strident 4o defenders who say criticisms of chatbot-human relations amount to a moral panic. They also say the newer GPT models, 5.1 and 5.2, lack the emotion, understanding and general je ne sais quoi of their preferred version. They are a powerful consumer bloc; last year, OpenAI shut down 4o but brought the model back (for a fee) after widespread outrage from users.

  • pleaseletmein@lemmy.zip · 2 hours ago

    I had to delete my account on one site this morning for asking a question about this situation.

    The exact words I used were “I haven’t used ChatGPT, what will be changed when 4o is gone, and why is it upsetting so many people?” And this morning I woke up to dozens of notifications calling me a horrible human being with no empathy. They were accusing me of wanting people to harm themselves or commit suicide and of celebrating others’ suffering.

    I try not to let online stuff affect my mood too much, which is why I just abandoned the account rather than arguing or trying to defend myself. (I got the impression nothing I said would matter.) Not to mention, I was just even more confused by it all at that point.

    I guess this at least explains what kind of wasp’s nest I managed to piss off with my comment. And, I can understand why these people are “dating” a chatbot if that’s how they respond when an actual human (and not even one IRL, still just behind a screen) asks a basic question.

  • cecilkorik@lemmy.ca · 2 hours ago

    For a company named “Open” AI, their reluctance to just open the weights of this model and wash their hands of it seems bizarre to me. It’s clear they want to get rid of it; I’m not going to speculate on their reasons, but I’m sure they make financial sense. But just open-weight it. If it’s not cutting edge anymore, who benefits from keeping it under wraps? If it’s not directly useful on consumer hardware, who cares? Kick the can down the road and let the community figure it out. Make a good news story out of themselves. The users they’re cutting off aren’t going to migrate to the latest ChatGPT model; they’re going to jump ship anyway. So either keep the model running, which it’s clear they don’t want to do, or just give them the model so you can say you did and at least make some lemonade out of whatever financial lemons are convincing OpenAI to retire it.

  • tal@lemmy.today · 3 hours ago

    Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users.

    I am confident that one way or another, the market will meet demand if it exists, and I think that there is clearly demand for it. It may or may not be OpenAI, and it may take a year or two or three for the market to stabilize, but if enough people want to basically have interactive erotic literature, it’s going to be available. Maybe someone else will take a model, train it up on appropriate literature, and provide it as a service. Maybe people will run models themselves on local hardware — in 2026, that still requires some technical aptitude, but making a simpler-to-deploy software package, or even distributing it as an all-in-one hardware package, is very much doable.

    I’ll also predict that what men and women generally want in such a model probably differs, and that there will probably be services that specialize accordingly, much as there are companies making soap operas and romance novels aimed at women, which tend to differ from the counterparts aimed at men.

    I also think that there are still some challenges that remain in early 2026. For one, current LLMs still have a comparatively-constrained context window. Either their mutable memory needs to exist in a different form, or automated RAG needs to be better, or the hardware or software needs to be able to handle larger contexts.
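The memory problem this comment describes — fitting an ongoing relationship into a limited context window — is the core of retrieval-augmented generation: rather than keeping the entire history in context, store past messages externally, score them for relevance against each new prompt, and inject only the best matches. A minimal bag-of-words sketch (the class and method names here are illustrative, not any real product's API):

```python
import math
from collections import Counter

def _vec(text):
    """Bag-of-words term frequencies for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class MemoryStore:
    """Stores past messages; retrieves the most relevant ones for a new prompt."""

    def __init__(self):
        self.messages = []

    def add(self, text):
        self.messages.append(text)

    def retrieve(self, query, k=2):
        # Rank stored messages by similarity to the query; keep the top k.
        qv = _vec(query)
        ranked = sorted(self.messages,
                        key=lambda m: _cosine(_vec(m), qv),
                        reverse=True)
        return ranked[:k]

    def build_prompt(self, query, k=2):
        """Inject only the k most relevant memories ahead of the user's message,
        keeping the prompt small regardless of total history length."""
        memories = "\n".join(f"- {m}" for m in self.retrieve(query, k))
        return f"Relevant past messages:\n{memories}\n\nUser: {query}"
```

Production systems would use learned embeddings and a vector index rather than raw word overlap, but the shape is the same: the model's "memory" lives outside the context window and is paged in on demand.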

    • Ŝan • 𐑖ƨɤ@piefed.zip · 2 hours ago

      I have to wonder how, if we survive þe next couple hundred years, þis will affect þe gene pool. Þese people are self-selecting þemselves out. Will it be possible to measure þe effect over such a short term? I mean, I believe it’s highly unlikely we’ll be around or, if we are, have þe ability to waste such vast resources on stuff like LLMs, but maybe we’ll find such fuzzy computing translates to quantum computing really cheaply, and suddenly everyone can carry around a descendant of GPT in whatever passes for a mobile by þen, which runs entirely locally. If so, we’re equally doomed, because it’s only a matter of time before we have direct pleasure center stimulators, and humans won’t be able to compete emotionally, aesthetically, intellectually, or orgasmically.

      • tal@lemmy.today · 1 hour ago (edited)

        Yeah, that’s something that I’ve wondered about myself, what the long run is. Not principally “can we make an AI that is more-appealing than humans”, though I suppose that that’s a specific case, but…we’re only going to make more-compelling forms of entertainment, better video games. Recreational drugs aren’t going to become less addictive. If we get better at defeating the reward mechanisms in our brain that evolved to drive us towards advantageous activities…

        https://en.wikipedia.org/wiki/Wirehead_(science_fiction)

        In science fiction, wireheading is a term associated with fictional or futuristic applications[1] of brain stimulation reward, the act of directly triggering the brain’s reward center by electrical stimulation of an inserted wire, for the purpose of ‘short-circuiting’ the brain’s normal reward process and artificially inducing pleasure. Scientists have successfully performed brain stimulation reward on rats (1950s)[2] and humans (1960s). This stimulation does not appear to lead to tolerance or satiation in the way that sex or drugs do.[3] The term is sometimes associated with science fiction writer Larry Niven, who coined the term in his 1969 novella Death by Ecstasy[4] (Known Space series).[5][6] In the philosophy of artificial intelligence, the term is used to refer to AI systems that hack their own reward channel.[3]

        More broadly, the term can also refer to various kinds of interaction between human beings and technology.[1]

        Wireheading, like other forms of brain alteration, is often treated as dystopian in science fiction literature.[6]

        In Larry Niven’s Known Space stories, a “wirehead” is someone who has been fitted with an electronic brain implant known as a “droud” in order to stimulate the pleasure centers of their brain. Wireheading is the most addictive habit known (Louis Wu is the only given example of a recovered addict), and wireheads usually die from neglecting their basic needs in favour of the ceaseless pleasure. Wireheading is so powerful and easy that it becomes an evolutionary pressure, selecting against that portion of humanity without self-control.

        Now, of course, you’d expect that to be a powerful evolutionary selector, sure — if only people who are predisposed to avoid such things pass on offspring, that’d tend to rapidly increase the percentage of people predisposed to do so — but the flip side is the question of whether evolutionary pressure on the timescale of human generations can keep up with our technological advancement, which happens very quickly.

        There’s some kind of dark comic that I saw — I thought it might be Saturday Morning Breakfast Cereal, but I’ve never been able to find it again, so maybe it was something else — a wordless comic portraying a society becoming so technologically advanced that it consumes itself, defeating its own essential internal mechanisms. IIRC it showed something like a society becoming a ring that just stimulated itself until it disappeared.

        It’s a possible answer to the Fermi paradox:

        https://en.wikipedia.org/wiki/Fermi_paradox#It_is_the_nature_of_intelligent_life_to_destroy_itself

        The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence.[1][2][3]

        The paradox is named after physicist Enrico Fermi, who informally posed the question—remembered by Emil Konopinski as “But where is everybody?”—during a 1950 conversation at Los Alamos with colleagues Konopinski, Edward Teller, and Herbert York.

        Evolutionary explanations

        It is the nature of intelligent life to destroy itself

        This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration.[98] Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient,[99] are many,[100] including war, accidental environmental contamination or damage, the development of biotechnology,[101] synthetic life like mirror life,[102] resource depletion, climate change,[103] or artificial intelligence. This general theme is explored both in fiction and in scientific hypotheses.[104]