• Naich@lemmings.world

    Not just a problem for open source, surely? The answer is to use AI to scan contributions for suspicious patterns, no?

    • WalnutLum@lemmy.ml

      And then when those AIs also have issues, do we use AI to check the AI that checks the AI?

    • byzxor@beehaw.org

      There’s already a whole swathe of static analysis tools used for these purposes (e.g. SonarQube, GitHub code scanning). Of course, their viability and costs affect who can and does utilise them. Whether or not they utilise LLMs I do not know (but I’m guessing probably yes).
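
      For a sense of what “scanning contributions for suspicious patterns” looks like at its crudest, here’s a toy Python sketch that greps the added lines of a diff for a few hand-picked red flags. The patterns, the `origin/main` base ref, and the `scan_diff` helper are illustrative assumptions only; real tools like SonarQube or GitHub code scanning do far deeper semantic analysis than this.

      ```python
      # Toy sketch (not any real tool's method): flag crude heuristics in the
      # added lines of a contribution's diff against a base branch.
      import re
      import subprocess

      # Hypothetical heuristics; the patterns here are illustrative only.
      SUSPICIOUS = [
          (re.compile(r"eval\s*\("), "dynamic eval of runtime data"),
          (re.compile(r"base64\.b64decode"), "decoding an embedded blob"),
          (re.compile(r"socket\.connect|urllib\.request"), "unexpected network call"),
      ]

      def scan_diff(base: str = "origin/main") -> list[str]:
          """Return warnings for added lines in the diff against `base`."""
          diff = subprocess.run(
              ["git", "diff", base, "--unified=0"],
              capture_output=True, text=True, check=True,
          ).stdout
          findings = []
          for line in diff.splitlines():
              # Only look at added lines, skipping the "+++" file header.
              if line.startswith("+") and not line.startswith("+++"):
                  for pattern, reason in SUSPICIOUS:
                      if pattern.search(line):
                          findings.append(f"{reason}: {line[1:].strip()}")
          return findings

      if __name__ == "__main__":
          for finding in scan_diff():
              print("suspicious:", finding)
      ```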