  • WorldsDumbestMan@lemmy.today · 1 hour ago

    That’s pretty unfair of you! People used to say fusion is always 30 years away, and now look where it’s at-

    Oh wait, the entire world came together to produce one really shitty, high-maintenance 500 MW fusion reactor?

    Nvm.

  • Valmond@lemmy.world · 22 hours ago

    I’m venturing into lands I’ve only tinkered with (JavaScript) or never used at all (WebAssembly). I thought I’d ask an LLM for some things to get up and running, like seeing some code to remember how you do things in JS. And I was quite bewildered: it’s a bit complicated (Python compiled to wasm, called from JS and back), and for the eight or so conundrums I had, the LLM spat out a complete example showing how each one was done! Completely flabbergasted! So I got right back into all that async JS jazz. It took me some time, I have to say, before I figured out that not one example worked. Not one.

    🥳
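
    (For the curious, the round trip I was attempting looks roughly like the sketch below. It assumes Pyodide, which is one common way to run Python as wasm in the browser; loadPyodide and runPython are real Pyodide APIs, but the setup details are mine, not what the LLM produced.)

    ```javascript
    // Minimal round trip: JS loads Python-as-wasm via Pyodide, runs some
    // Python, and the Python calls back into JS through the js module.
    // Assumes pyodide.js has been loaded in the page, e.g.
    // <script src="https://cdn.jsdelivr.net/pyodide/v0.26.1/full/pyodide.js"></script>
    async function main() {
      const pyodide = await loadPyodide(); // defined by pyodide.js

      // JS -> Python: runPython returns the value of the last expression.
      const fromPython = pyodide.runPython(
        "import js; js.console.log('hello from Python'); sum(range(10))"
      );

      console.log(fromPython); // 45
    }

    main();
    ```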

    • JackbyDev@programming.dev · 2 hours ago

      For some things they work pretty well, for others they don’t, and they sound just as confident either way!

    • AnarchistArtificer@slrpnk.net · 4 hours ago

      I’ve heard a lot of people say that LLM code can be a time-saver, but only if you’re already proficient enough in the programming language to see at a glance what the generated code does. Otherwise, the experience is much like your own.

      • Valmond@lemmy.world · 3 hours ago

        When it was useful, it was for simple things, like reading a file (for some reason I always forget how to do that) or converting to base64. The kind of thing you’d find in a book.
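
        (Both of those in one minimal Node.js sketch, for the record; “input.txt” is just a placeholder name:)

        ```javascript
        // The two book examples: read a file, convert to base64, and back.
        const fs = require("node:fs");

        const text = fs.readFileSync("input.txt", "utf8");        // read a file
        const b64 = Buffer.from(text, "utf8").toString("base64"); // to base64
        const back = Buffer.from(b64, "base64").toString("utf8"); // and back

        console.log(b64);
        console.log(back === text); // true
        ```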

    • Rhaedas@fedia.io · 22 hours ago

      Remember that LLMs were trained on the internet, a place full of confidently incorrect postings. Using a coding LLM helps some, but just like with everything else, it gives better results if you only ask for small, specific parts at a time. The longer the reply, the more likely it is to drift into nonsense (since it’s all just probability anyway).

      I’ve gotten excellent starting code before and tweaked it into a working program. But as things got more complex, the LLM would try to do things that even that guy who gets downvoted a lot on Stack Overflow wouldn’t dare suggest.

      • Thorry@feddit.org · 21 hours ago

        One of the things that has really set AI fanboys off is when I ask them to feed their AI-generated code back into the AI and ask it for potential issues or mistakes. Without fail it points out very obvious issues, and sometimes some less obvious ones as well. If your AI coder is so good, why does it know it fucked up?

        This is basically what these new “agent” modes do: keep feeding the same thing back in on itself until it finds some balance, often using an external tool (building the project, for example) to determine whether it’s done. However, I’ve seen this end up in loops a lot. If all of the training data contained the same mistake (or the resulting network always produces that mistake), it can’t fix it; it will just say “oh, I’ve made a mistake, let me fix that” over and over again as the same obvious error pops out. A rough sketch of the loop follows.
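
        (A sketch under stated assumptions: askLLM is a hypothetical stand-in for whatever model API is in use, and node --check serves as the external tool; the try cap is the only thing stopping the endless “let me fix that” cycle.)

        ```javascript
        // Sketch of an "agent" loop: generate code, verify it with an external
        // tool (here a Node.js syntax check), feed failures back in, repeat.
        const fs = require("node:fs");
        const { execSync } = require("node:child_process");

        async function agentLoop(task, askLLM, maxTries = 5) {
          let code = await askLLM(`Write JavaScript for: ${task}`);
          for (let i = 0; i < maxTries; i++) {
            fs.writeFileSync("generated.js", code);
            try {
              execSync("node --check generated.js"); // external tool decides "done"
              return code;
            } catch (err) {
              // Feed the same thing back in on itself, error attached. If the
              // model always produces the same mistake, this never converges.
              code = await askLLM(`This failed with:\n${err.message}\nFix it:\n${code}`);
            }
          }
          throw new Error("same obvious error keeps popping out; giving up");
        }
        ```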

        • Rhaedas@fedia.io · 20 hours ago

          I’ve tried a few of the newer local models with visible chain of thought and a thinking mode. For some things it may work, but for others it’s a hilarious train wreck of second-guessing, loops, and crashes into garbage output. They call it thinking mode, but what it’s really doing is stacking the odds toward a better probability hit on, hopefully, a “right” answer. LLMs are the modern ELIZA: convincing on the surface, but dig too deep and you see it break. At least Infocom’s game parser broke gracefully.

  • Jake Farm@sopuli.xyz · 20 hours ago

    They aren’t delusional, they’re crooks, and the American people will be left holding the bag when this massive round-tripping scheme finally ends.