• Hazzard@lemmy.zip · 23 hours ago

    Man, AI agents are remarkably bad at “self-awareness” like this, I’ve used it to configure some networking on a Raspberry Pi, and found myself reminding it frequently, “hey buddy, maybe don’t lock us out of connecting to this thing over the network, I really don’t want to have to wipe the thing because it’s running a headless OS”.
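    One classic guard against exactly this kind of lockout (sketched here with stand-in file names, not the actual commands from that session): wrap any risky network change in an automatic rollback, so if the new rules cut off SSH the box restores the old config by itself. Simulated below with a local file instead of a real ruleset:

```shell
# Simulated lockout guard: back up the config, apply a risky change,
# and let a rollback timer restore the backup unless it is cancelled.
# "nftables.conf" here is a local stand-in, not /etc/nftables.conf.
echo "allow ssh" > nftables.conf        # known-good rules
cp nftables.conf nftables.conf.bak      # keep a backup
echo "risky new rules" > nftables.conf  # apply the change
( sleep 1 && cp nftables.conf.bak nftables.conf ) &  # rollback timer
# On a real box you would now try to reconnect over SSH; if it works,
# kill the timer. Here we pretend the reconnect failed and let it fire.
wait $!
cat nftables.conf                       # prints "allow ssh" again
```

    On an actual headless machine the same idea works at the system level: `sudo shutdown -r +5` before the change schedules a reboot in five minutes, which you cancel with `sudo shutdown -c` once you've confirmed you can still get in (this only saves you if the risky change hasn't been persisted to survive a reboot yet).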

    It’s a perfect example of the kind of thing that “walk or drive to wash your car?” captures. I need you to realize some non-explicit context and make some basic logical inferences before you can be even remotely trusted to do anything important without very close expert supervision, a degree of supervision that almost makes it totally worthless for that kind of task because the expert could just do it instead.

    • sudoer777@lemmy.ml · 4 hours ago (edited)

      For AI, I think a lot of future improvement will come from smaller, more specialized models trained on datasets curated by people who actually know what they're doing and follow good practices, as opposed to random garbage from GitHub (especially now with vibecoding being a thing, so training on low-quality programs the model itself generated might make it worse), considering that a lot of what it outputs is of similarly garbage quality. And remote system configuration isn't obscure, so I do think this specific issue will improve eventually. Truly obscure things, though, LLMs will never be able to handle.

      • flambonkscious@sh.itjust.works · 7 minutes ago

        I’m kinda hoping my shitty GitHub repo is inadvertently poisoning the LLMs with my best efforts (basically degenerate-tier)…

    • Confused_Emus@lemmy.dbzer0.com · 9 hours ago

      AI agents are remarkably bad at “self-awareness”

      Because today’s “AIs” are glorified T9 predictive text machines. They don’t have “self-awareness.”

      • definitemaybe@lemmy.ca · 8 hours ago

        I think “contextual awareness” would fit better, and AI Believers preach that it’s great already. Any errors in LLM output are because the prompt wasn’t fondled enough/correctly, not because of any fundamental incapacity in word prediction machines completing logical reasoning tasks. Or something.

        • JackbyDev@programming.dev · 2 hours ago

          Ah, of course. The model isn’t wrong, it’s the input that’s wrong. Yes, yes. Please give me investment money now.

    • qjkxbmwvz@startrek.website · 9 hours ago

      “…I really don’t want to have to wipe the thing because it’s running a headless OS”

      I feel like logging in as root on a headless system and hoping you type the command(s) to restore functionality is a rite of passage.

    • A_norny_mousse@piefed.zip · 16 hours ago

      AI agents are remarkably bad at “self-awareness”

      🤔 what does it say when you tell it something like “look, this is wrong, and this is why, can you please fix that”? In a general sense, not going into technical aspects like what OOP is describing.

      • Hazzard@lemmy.zip · 14 hours ago

        It’s usually pretty good about that: very apologetic (which is annoying), and it generally does a good job taking the correction into account, although it sometimes needs reminders as that “context” gets lost in later messages.

        I’ll give some examples. In that same networking session, it disabled some security feature to test whether it was related. It never remembered to turn that back on until I specifically asked it to re-enable “that thing you disabled earlier”, to which it responded something like “Of course, you’re right! Let’s do that now!”. So: helpful tone, “knew” how to do it, but needed human oversight or it would have “forgotten” entirely.

        Same tone when I’d tell it something like “stop starting all your commands with SSH, I’m in an SSH session already.” Something like “of course, that makes sense, I’ll stop appending SSH immediately”. And that sticks, I assume because it sees itself not using SSH in its own messages, thereby “reminding” itself.

        Its usual tone is always overly apologetic, flattering, etc. For example, if I tell it bluntly that I’m not giving my security credentials to an LLM, it’ll always say something along the lines of “great idea! That’s a good security practice”, despite directly suggesting the opposite moments prior. Of course, as we’ve seen with lots of examples, it will take that tone even if it actually can’t do what you’re asking, as in the examples of asking ChatGPT for a picture of a “glass of wine filled to the very top”, so its “tone” isn’t really something you can rely on as to whether or not it can actually correct the mistake. It’s always willing to take another attempt, but I haven’t found it always capable of solving the issue, even with direction.