• CannonFodder@lemmy.world · 68↑ 4↓ · 13 hours ago

    AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.
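
    To make the workflow concrete, here is a minimal Python sketch of the kind of flaw a scanner might flag and the fix you would verify and apply by hand. The function names and the toy `users` table are illustrative, not from the thread:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The kind of thing a tool flags: string interpolation
    # lets attacker-controlled input rewrite the query.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_fixed(conn, name):
    # Hand-verified fix: a parameterized query, so the
    # driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
unsafe_rows = find_user_unsafe(conn, payload)  # injection matches every row
fixed_rows = find_user_fixed(conn, payload)    # literal match: no rows
```

    The point of the by-hand step is checking that the suggested fix actually closes the hole (the payload no longer matches) without changing behavior for legitimate inputs.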

    • shirasho@feddit.online · 25↑ 1↓ · 10 hours ago

      AI is actually SUPER good at this and is one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

      With that said, all of the recommended fixes should be applied by hand.
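
      As a sketch of the performance side, here is the sort of hotspot a tool (or a reviewer) might flag, with the hand-checked rewrite. The functions are illustrative, not from the thread:

```python
def build_report_slow(parts):
    # Flagged pattern: repeated string concatenation can copy
    # the growing buffer on each iteration (quadratic in total size).
    out = ""
    for p in parts:
        out += p
    return out

def build_report_fast(parts):
    # Rewrite: str.join builds the result in a single pass.
    return "".join(parts)

# Verify by hand that the rewrite is behavior-preserving
# before accepting the suggestion.
sample = ["line 1\n", "line 2\n", "line 3\n"]
assert build_report_slow(sample) == build_report_fast(sample)
```

      The equivalence check is the "fix by hand" part: you confirm the faster version produces identical output before trusting it.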

      • _hovi_@lemmy.world · 9↑ 2↓ · 8 hours ago

        Yeah, I would also add: ignoring how the training data is usually sourced. I agree AI can be useful, but it just feels so unethical that I find it hard to justify.

        I’m a big LLM hater atm but once we’re using models that are efficient, local and trained on ethically sourced data I think I could finally feel more comfortable with it all. Can’t be writing code for me though - why would I want the bot to do the fun part?

        • shirasho@feddit.online · 4↑ 1↓ · 5 hours ago

          Exactly my thought. I got into software development because designing and writing good code is fun. It is almost a game to see how well you can optimize it while keeping it maintainable. Why would I let something else do that for me? I am a software engineer, not a prompt writer.