There is a lot of AI-generated fanfic out there, from models like Claude, and the AI is terrible at coming up with good dialogue. For example, in a Superman & Lois fanfic, Jonathan Kent, who’s 23, gets powers through an accident and becomes a superhero in Metropolis. He’s been active for two years, and when Lois finds out she gets mad and demands answers, feeling entitled to information she doesn’t deserve.

When Jon tells her straight up, “I don’t care about you or your opinion. Leave me alone,” she refuses. And when Jon is verbally mean to her, all the characters treat him with hostility.

I’m sorry, but this is all on Lois. If someone tells you to your face they don’t want you around and you refuse to leave their apartment or leave them alone, then if they verbally abuse you, hurt your feelings, or cuss you out, you kind of deserve it.

There was another story that’s supposed to be a super grounded family drama where this cousin, who’s super far left, finds out her younger cousin is a multi-millionaire lawyer and the head of a law firm. The cousin goes into the law firm (somehow security didn’t stop her) and goes to his office to get answers, and she’s mad he didn’t tell her he’s wealthy. The lawyer eggs her on, makes fun of her, etc.

This is supposed to be a super realistic drama story, but the entire scene is unrealistic. First of all, it’s not her business how much money her cousin has. It doesn’t matter if he has a hundred million or a billion; it’s none of her business.

Second, there is no way in hell a random person is walking through the building and going up to his office. It doesn’t matter if she’s his cousin. If she was his wife or kid, maybe, but she’s just a cousin. Security would have aggressively escorted her out. She wouldn’t even make it to his office.

And no lawyer or businessman would give her the time of day. Most lawyers and businesspeople are very mature and professional, so they wouldn’t waste time having a political debate with some random family member. If his security is so bad that she manages to get through the front door, the lawyer would call security and she would be escorted out or arrested.

Overall, the dialogue is just so bad. AI will constantly have characters say, “That’s not nothing,” when no one talks like that.

Characters are either melodramatic and overdramatic or have complete non-reactions, and they go from 1 to 100 in the blink of an eye.

Why is AI so bad at this? The characters are so beyond stupid that, even when something bad happens, you don’t feel bad for them, because they’re idiots.

  • hoshikarakitaridia@lemmy.world · 10 hours ago

    A bunch of reasons, a few of which people have pointed out in other comments already, including:

    • text =/= dialogue
    • LLMs train on the average person’s writing, not on good actors or well-written plays
    • AI models are statistical approximations of patterns that emulate behavior of the real world. It’s gonna take shortcuts wherever it can
    • maybe there are specific tricks for prompts or system prompts that will improve quality in your case / on your model
    • the model you are using could be bad for the task
    • some of the points you are making strike me as personal preference; they could just as well apply to non-AI written series

    Those are some of the things that play into why it feels like it’s awful at this. Some of it is perception, some of it is true.
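
    As an illustration of the “system prompt tricks” bullet, a sketch of the kind of instruction that is sometimes used to steer models away from stock melodrama (the wording here is purely hypothetical, not a guaranteed recipe):

    ```
    You are writing fiction. Dialogue rules:
    - Characters speak in short, plain sentences; avoid stock lines
      like "That's not nothing."
    - Reactions must be proportional to the event; no jumping from
      calm to furious within a single exchange.
    - Every scene must shift at least one character's emotional state.
    ```

    Whether something like this helps depends heavily on the model.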

  • db2@lemmy.world · 11 hours ago

    Why is AI so bad at coming up with dialogue?

    Generalizing the question is really the way to get the most useful answer here. It’s because LLMs are being pushed as “AI” by tech bros with room-temperature IQs. The tech can kind of do a neat trick and sometimes sort of almost get it right, so these people, who shouldn’t be in charge of a pet rock even with supervision but who for some reason have absurd amounts of money, decided it should be in literally everything so they could make even more money. It doesn’t matter to them whether it’s ever actually the right tool for the job.

    LLMs have a use case. Several in fact. Creativity and decision making aren’t really among them except as a gimmick not to be taken seriously.

    • Iconoclast@feddit.uk · edited · 5 hours ago

      “LLMs aren’t AI” is one of the most common statements I see confidently spread here as fact pretty much every day.

      We’ve had AI systems since 1956.

      It simply refers to any system designed to perform a cognitive task that would normally require a human. For an LLM that task is generating natural language. For an Atari chess opponent it’s playing chess. They’re both still AI - narrow AI, but AI nonetheless.

      People seem to expect general intelligence from these systems, and when they don’t get it, statements like this pop up. But AGI is not synonymous with AI. It’s also an AI system, yes, but just a subcategory of AI - same as GenAI is. No company claims to have a true AGI system.

  • Paragone@piefed.social · 9 hours ago

    There are a few excellent books on dialogue, for writers.

    YOU try writing good dialogue!

    Keeping straight everything that needs to be kept straight when writing dialogue isn’t easy, & most story-writers don’t properly understand Shawn Coyne’s point that every single scene in the entire story MUST pull its weight & MUST shift an emotional value somehow ( see “The Story Grid” for that ).

    Some are brilliant at it ( & the writers for that TV-show, Mad Men, iirc, were mostly women, simply because women are better than men at writing such drama ), & most are not.

    Why should imitations-of-us be better at it than we are?

    _ /\ _

  • InvalidName2@lemmy.zip · 11 hours ago

    Simple: People, in general, are terrible at coming up with dialog. And the machines are imitating us.

  • disregardable@lemmy.zip · 11 hours ago

    It’s a random text generator. It doesn’t know what it’s spitting out any more than a calculator understands the meaning of the number it’s calculating. You put in text, and it outputs text its algorithm says is associated with it. That’s it.
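
To make “it outputs text its algorithm says is associated” concrete, here is a toy sketch: a word-level Markov chain. A real LLM is vastly more sophisticated, but the underlying idea is the same kind of statistical association, with no understanding of meaning. The tiny corpus below is made up for illustration.

```python
from collections import defaultdict, Counter

# A made-up training corpus, tokenized into words.
corpus = "the hero said that is not nothing . the hero said leave me alone .".split()

# Count which word follows which word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=6):
    """Emit the statistically most likely next word, n times."""
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # → the hero said that is not nothing
```

The generator happily produces fluent-looking strings purely from co-occurrence counts, which is why its output can sound plausible while meaning nothing to the machine.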

  • Rhynoplaz@lemmy.world · 10 hours ago

    People don’t write like they talk. Taking Reddit conversations and converting them to spoken dialogue is going to sound unnatural.

  • Mothra@mander.xyz · 10 hours ago

    Are you asking why a device with no feelings or personal life experience can’t create realistic dialogue when prompted to create something based on dialogue from comics and fanfic? Comics which barely explain context with words (since they use images)? Have you noticed how over-the-top, melodramatic, stylized, and unrealistic the dialogue is in so many comics, superhero ones in particular? Fanfic isn’t that much better. Of course there are some with good writing, but the vast majority is mediocre at best and cringe at worst.

    I’m surprised the LLM could come up with something narratively cohesive to begin with.

  • Jay@lemmy.ca · 11 hours ago

    You’re trying to apply logic to a system that has no logic, only data to go off of. Of course it’s going to be shitty at it.

  • rossman@lemmy.zip · 11 hours ago

    In the example with the millionaire, the task is to write believable dialogue and context for people of two different classes. I don’t think AI has that training data.

    I believe AI can simulate a text conversation with the right training data. AI came in too early and won’t be able to get proper candid training data for a long time.