• FishFace@piefed.social · 6 hours ago

    This reminds me of the people who trained neural networks on stuff before ChatGPT and uploaded YouTube videos with titles like, “I FORCED an AI to read ALL of twilight, and THIS is what it wrote!” and then laughed at the garbage that came out of the model. Like… yeah, the model is not good at a task it wasn’t designed for. Some of the text is funny, but in the same way people don’t really respond emotionally to AI art because there was no human intent behind it, I don’t respond to AI “humour”. It’s using a tool wrong and then laughing that the outcome is bad.

    There’s satirical comedy to be had here, but it needs to be grounded in what actual people are doing. Personally, I haven’t seen anyone seriously expect a language model to be able to assemble a functioning PCB, so I can’t enjoy this as satire either.

    Or could it be genuine curiosity, just seeing what happens? Always possible, but such a predictable outcome doesn’t tickle my curiosity either.

    So, those are all the reasons I didn’t find this interesting. Why did I reply at all? I dunno, man.

    • zaphod@sopuli.xyz · 5 hours ago

      This reminds me of the people who trained neural networks on stuff before ChatGPT

      I did this with Buzzfeed clickbait headlines, not using neural networks, just simple Markov chains.
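
      For anyone curious, the whole thing really is only a few lines. A minimal sketch of a word-level, order-1 Markov chain in Python (the sample headlines here are made up for illustration, not from the actual Buzzfeed scrape):

```python
import random
from collections import defaultdict

def build_chain(lines, order=1):
    """Map each tuple of `order` words to the words that follow it in the corpus."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain from a random starting key, picking a random successor each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:  # dead end: this key only ever appeared at the end of a line
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Hypothetical stand-ins for the scraped headlines:
headlines = [
    "21 Things Only Cat Owners Will Understand",
    "21 Photos That Will Restore Your Faith In Humanity",
    "Things That Will Make You Feel Old",
]
print(generate(build_chain(headlines)))
```

      Because shared words like “Things” and “Will” appear in several headlines, the walk can hop between them mid-sentence, which is where the clickbait-mashup effect comes from.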

      • FishFace@piefed.social · 5 hours ago

        Yeah, sounds about right! And it’s true, a lot of it was done with Markov models. I think some were NNs, though.