• BladeFederation@piefed.social · +10 · 5 hours ago

    I don’t code, so correct me if I’m wrong, but wouldn’t the code have to be generally accepted, reviewed, and verified by other members of the project? AI can fuck right off as far as I’m concerned, but this isn’t a situation where a CEO just unilaterally decides vibe coding is the move. Unless I’m mistaken.

  • iByteABit@lemmy.ml · +38 / -4 · 8 hours ago

    I’m no fan of AI generally, but “AI Vulnerable” as a term just doesn’t make much sense to me. Code reviewing should be filtering out bad code whether it originates from an AI or a human.

    PR spamming with the usage of AI is another problem which is very serious and harmful for OSS, but that’s not due to some unique danger that only AI code has and human contributors don’t.

    • pinball_wizard@lemmy.zip · +25 / -2 · edited · 8 hours ago

      Code reviewing should be filtering out bad code whether it originates from an AI or a human.

      But studies are showing it doesn’t work.

      A human makes a mental model of the entire system, does some testing, and submits code that works, passes tests, and fits their understanding of what is needed.

      A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.

      And yes, plenty of human coders fall into the second bracket, as well.

      But AI is very good at writing code that looks right. Code review is a good and necessary tool, but the data tells us code review isn’t solving the problem of bugs introduced by AI generated code.

      I don’t have an answer, but “just use code review” probably isn’t it. In my opinion, “never use AI code assist” also isn’t the answer. There’s just more to learn about it, and we should proceed with drastically more caution.

      • iByteABit@lemmy.ml · +9 · 7 hours ago

        A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.

        That’s still on the human that opened the PR without doing the slightest effort of testing the AI changes though.

        I agree there should be a lot of caution overall, I just think that the problem is a bit mischaracterized. The problem is the newfound ability to spam PRs that look legit but are actually crap, but the root here is humans doing this for Github rep or whatever, not AI inherently making codebases vulnerable. There need to be ways to detect such users that repeatedly do zero effort contributions like that and ban them.
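A detection heuristic like the one proposed above could be sketched roughly as follows. This is purely illustrative: `spam_score` and the PR record shape are made up for the example, not an existing GitHub or forge feature. The idea is to flag accounts whose closed PRs are overwhelmingly rejected rather than merged.

```python
def spam_score(closed_prs):
    """Fraction of an account's closed PRs that were rejected (closed unmerged).

    `closed_prs` is a list of dicts with a boolean "merged" key -- a made-up
    record shape standing in for whatever the forge's API actually returns.
    """
    if not closed_prs:
        return 0.0
    rejected = sum(1 for pr in closed_prs if not pr["merged"])
    return rejected / len(closed_prs)

# An account with 9 of its 10 closed PRs rejected looks like a spam candidate:
history = [{"merged": False}] * 9 + [{"merged": True}]
print(spam_score(history))  # 0.9
```

A real policy would obviously need more signals than this (account age, diff size, repeat offenses across repos) before banning anyone.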

    • underisk@lemmy.ml · +22 · 7 hours ago

      CPython is the reference implementation of the Python interpreter. The person who took this screenshot has the Claude user on GitHub blocked, so whenever it contributes to a repo you see this warning. The Claude user is an AI agent. AI code is garbage.

  • perry@lemy.lol · +20 · 11 hours ago

    Why is CPython on GitHub? I thought they had their own forge, like GNOME and KDE.

    • indepndnt@lemmy.world · +2 · 4 hours ago

      They moved to GitHub a few years ago, mostly for the benefits of issue tracking, which previously was not integrated in the forge IIRC.

  • yucandu@lemmy.world · +6 / -3 · 8 hours ago

    What is “AI vulnerable”? What is the problem here? Claude isn’t reverse-Midas, it’s not like everything they touch turns to shit.

      • yucandu@lemmy.world · +2 / -1 · 41 minutes ago

        Alright, well I use Claude in my code, and just by feeding an LLM a PDF of the module’s datasheet, it produced a better library than anything publicly available on GitHub.

        I’m all for not blindly trusting AI: give it limits, review and test everything it makes. But flat-out rejecting any AI-generated code as “compromised” feels reactionary to me.

    • HiddenLayer555@lemmy.ml · +5 · edited · 5 hours ago

      Humans can barely write safe C code, so I definitely don’t trust AI to. I’m not even blanket against AI assistance in programming, but there are way too many hidden landmines in C for an LLM to be reliable with.

      • yucandu@lemmy.world · +1 / -1 · 39 minutes ago

        I use it in C++ and it has been very helpful. The OP appears to be just blanket against AI assistance in programming? There’s no indication of what degree Claude was involved here, or what amount of blind trust the human reviewers gave to it.

    • plantsmakemehappy@lemmy.zip · +7 · edited · 8 hours ago

      If you block Claude, or any user really, and then visit a repo they’ve contributed to you will see this message.

      Maybe Claude didn’t open the PR but contributed commits.

      • Oinks@lemmy.blahaj.zone · +1 · 5 hours ago

        I tried a git log --grep=claude but it doesn’t net much, basically just this PR (which in fairness does look vibecoded).

        Maybe there’s some development branch in the repository that has a commit authored by Claude but if so it’s not on main.
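Worth noting: `git log --grep` only searches commit *messages*, while bot authorship lives in the author/committer fields, which `--author` searches. A throwaway-repo sketch of the difference (the repo, names, and messages here are all hypothetical):

```python
import subprocess, tempfile

repo = tempfile.mkdtemp()

def git(*args):
    """Run git against the throwaway repo and return its stdout."""
    return subprocess.run(
        ["git", "-C", repo, *args], check=True, capture_output=True, text=True
    ).stdout

git("init", "-q")
# Commit authored by a "claude" bot; the message mentions nothing AI-related:
git("-c", "user.name=claude", "-c", "user.email=bot@example.com",
    "commit", "-q", "--allow-empty", "-m", "Fix refcount bug")
# Human commit whose message merely mentions claude:
git("-c", "user.name=alice", "-c", "user.email=alice@example.com",
    "commit", "-q", "--allow-empty", "-m", "Revert claude experiment")

print(git("log", "-i", "--grep=claude", "--format=%an"))    # alice (message match)
print(git("log", "-i", "--author=claude", "--format=%an"))  # claude (author match)
```

So a commit genuinely authored by a Claude account but with an innocuous message would slip past a `--grep` search; Co-authored-by trailers, on the other hand, do live in the message body, so `--grep` catches those.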

      • 4am@lemmy.zip · +11 · 9 hours ago

        The problem is they get overwhelmed with these PRs. Godot has been talking about not being able to manage the workload lately, people just task AIs to vibecode fixes to perceived bugs and half of them don’t even do what they were prompted to do.

        You can block those users, but they just make new accounts.

        It honestly feels like a DDoS on do it yourself computing, by corporations who want total control over our thoughts.

        • fubbernuckin@lemmy.dbzer0.com · +3 · 6 hours ago

          I can’t wait for the money to dry up. It’s insane to me just how stupid people have been, trusting LLMs with anything whatsoever. These things cost so much money to run and they seem to fucking hypnotize investors into burning their money. Sooner or later the fact that they’re not making money has to catch up with them, right?

        • illusionist@lemmy.zip · +5 / -1 · edited · 8 hours ago

          Thanks for the explanation! The user in the image is Claude itself, not a random anonymous user. I see the problem of the DDoS with issues, tickets, etc.; that is a real problem! But I don’t get the rigid denial of generative AI. As long as I review the code it generates, it can save me lots of time. I would hate the actions you described as well, but the image depicts nothing fishy. Am I wrong about this?

      • ParlimentOfDoom@piefed.zip · +2 · 8 hours ago

        And maybe the janitor should sift through that river of diarrhea for the couple of pennies someone might have swallowed.

  • Silver Needle@lemmy.ca · +7 / -2 · 11 hours ago

    I mean it’s Python. This is what we get for having been overly reliant on it.

    All kidding aside, I am more than a bit confused by this.

  • goatbeard@beehaw.org · +2 / -3 · 5 hours ago
    5 hours ago

    CC is actually really good if you know what you’re doing. The only issue, IMO, would be PR spamming.

  • Sims@lemmy.ml · +3 / -9 · 8 hours ago
    8 hours ago

    How hard can it be to have an AI take PRs from other AIs and clean out the worst, plus harden PR protocols? It could even assist and guide AI contributors via a special AI-contributor forum or whatever. AIs are currently highlighting a lot of ‘holes’ in systems where we expect a certain behavior. Just complaining and closing things off is a bad decision; we should accept these flaws in our systems and adapt them to a new world. The sooner the better.

    The projects that get it right now have an army of managed AI contributors, and a filtered, educational AI PR pipeline where project maintainers cherry-pick the crème de la crème…