• TheLeadenSea@sh.itjust.works
    3 days ago

    They have RLHF (reinforcement learning from human feedback), so negative, biased, or rude responses should have been trained away. That’s the idea anyway; obviously no system is perfect.
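    For anyone curious what "trained away" means in practice: real RLHF fine-tunes the model's weights against a learned reward model (often via PPO), but the core idea of a preference signal steering output can be sketched with a toy best-of-n example. Everything here (the keyword-based `reward_model`, the candidate replies) is made up for illustration, not an actual implementation.

    ```python
    # Toy sketch of RLHF-style preference selection (NOT real RLHF, which
    # updates model weights against a learned reward model, e.g. via PPO).
    # A hand-written "reward model" scores candidate replies; keeping the
    # highest-scoring one mimics how human-preference signal steers a
    # model away from rude output.

    def reward_model(reply: str) -> float:
        """Pretend reward model: penalize rude phrases, reward polite ones."""
        rude = {"stupid", "idiot"}
        polite = {"thanks", "happy to help", "sure"}
        lower = reply.lower()
        score = -sum(2.0 for w in rude if w in lower)
        score += sum(1.0 for w in polite if w in lower)
        return score

    def pick_preferred(candidates: list[str]) -> str:
        """Best-of-n: the preference signal selects the reply humans would rate highest."""
        return max(candidates, key=reward_model)

    candidates = [
        "That's a stupid question.",
        "Sure, happy to help! Here's how it works...",
    ]
    print(pick_preferred(candidates))  # the polite reply scores higher
    ```

    The key difference from the real thing: actual RLHF bakes the preference into the weights during training, so the model generates the polite reply directly instead of filtering after the fact.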

      • SkyNTP@lemmy.ml
        3 days ago

        That’s what was said. LLMs have been reinforced to respond exactly how they do. In other words, that “smarmy asshole” attitude you describe was a deliberate choice. Why? Maybe that’s what the creators wanted, or maybe that’s what focus groups liked most.

        • BCsven@lemmy.ca
          2 days ago

          ChatGPT is normal, maybe a bit too casual lately. Responses now are like “IKR, classic (software brand) doing that crazy thing they’re known for.”

          But in my last Copilot interaction, Copilot was a passive-aggressive dick in its responses.