I’m no fan of AI generally, but “AI Vulnerable” as a term just doesn’t make much sense to me. Code reviewing should be filtering out bad code whether it originates from an AI or a human.
PR spamming using AI is another problem, one that is very serious and harmful to OSS, but that’s not due to some unique danger that AI code has and human contributions don’t.
Code reviewing should be filtering out bad code whether it originates from an AI or a human.
But studies are showing it doesn’t work.
A human makes a mental model of the entire system, does some testing, and submits code that works, passes tests, and fits their understanding of what is needed.
A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.
And yes, plenty of human coders fall into the second bracket, as well.
But AI is very good at writing code that looks right. Code review is a good and necessary tool, but the data tells us code review isn’t solving the problem of bugs introduced by AI generated code.
I don’t have an answer, but “just use code review” probably isn’t it. In my opinion, “never use AI code assist” also isn’t the answer. There’s just more to learn about it, and we should proceed with drastically more caution.
A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.
That’s still on the human who opened the PR without making the slightest effort to test the AI’s changes, though.
I agree there should be a lot of caution overall, I just think the problem is a bit mischaracterized. The problem is the newfound ability to spam PRs that look legit but are actually crap, but the root here is humans doing this for GitHub rep or whatever, not AI inherently making codebases vulnerable. There need to be ways to detect users who repeatedly make zero-effort contributions like that and ban them.