Alright, well I use Claude in my coding, and it produced a better library than anything publicly available on GitHub, just from me feeding a PDF of the module's datasheet into an LLM.
I’m all for not blindly trusting AI: give it limits, and review and test everything it produces. But flat-out rejecting any AI-generated code as “compromised” feels reactionary to me.
Studies continue to show that AI routinely generates unsafe code, and that even human code review often fails to catch major problems. AI-generated code should not be trusted or accepted, and projects that accept it should be treated as compromised.