Alright, well, I use Claude in my code, and just from feeding a PDF of the module's datasheet into an LLM it produced a better library than anything that was publicly available on GitHub.
I’m all for not blindly trusting AI — give it limits, review and test everything it makes — but flat-out rejecting any AI-generated code as “compromised” feels reactionary to me.
Humans can barely write safe C code, so I definitely don’t trust an AI to, either. I’m not even blanket against AI assistance in programming, but there are way too many hidden landmines in C for an LLM to handle reliably.
I use it in C++ and it has been very helpful. The OP appears to be blanket against AI assistance in programming, full stop? There’s no indication of what degree Claude was involved here, or how much blind trust the human reviewers gave it.
What is “AI vulnerable”? What is the problem here? Claude isn’t reverse-Midas, it’s not like everything they touch turns to shit.
Studies continue to show that AI routinely generates unsafe code, and even human code reviews often don’t catch major problems. AI-generated code should not be trusted or accepted, and projects that accept it should be treated as compromised.