I was thinking a bit about the bugs I found in the Piefed codebase yesterday, which led to an emergency fix by the dev that’s now been implemented: https://codeberg.org/rimu/pyfedi/commit/093a466935849f27b3ecf2eab159129186320417
The real takeaway for me is that the whole dynamic of how we approach security has changed in ways most people don’t appreciate.
It used to take a lot of effort to find exploits in software projects: you’d have to spend a long time familiarizing yourself with the codebase, then comb through the code looking for mistakes that could be exploited. And even to do that, you’d need a good understanding of the protocols and specifications used by the application.
You basically had to be a domain expert with a deep understanding of how the application works. A random person looking at the source code would have little chance of finding any non-trivial problems or figuring out how to actually exploit them.
And in that world, private disclosure made a lot of sense: finding a vulnerability took a lot of hard work, and it wasn’t easy for somebody else to replicate. This was valuable and dangerous knowledge that had to be communicated in a responsible fashion.
But now, anybody can throw an LLM at the code and it’ll sniff out vulnerabilities and even explain step by step how to exploit these security holes. So, the information itself isn’t really that valuable anymore. If I can throw an LLM at the code and find these problems in a few minutes, anybody else can do the same thing too.
I’m not a Python developer, I don’t have any deep knowledge of the Python stack used in Piefed, and on my own, I’d have zero chance of finding these exploits. But once the LLM identifies them, it’s very easy for me to verify that they are indeed real exploits, and to realize how they can be used maliciously.
The attacker doesn’t even need to have any deep knowledge of programming because the LLM can guide them through the exploit step by step.
Open source projects are particularly vulnerable here since anybody can just grab the source and throw an LLM at it to see if it can find exploits.
I’d argue that raising awareness of this new state of things is really important, and I’d suggest that running an LLM against the code is minimal due diligence at this point.
Obviously, an LLM vulnerability check is not exhaustive, and if it doesn’t find anything, that doesn’t mean there aren’t exploits in the code. But anything it does find should absolutely be checked by the developers.
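For illustration, here’s a minimal sketch of what that kind of check could look like, assuming the OpenAI Python client and an API key are available; the model name, prompt, and the pyfedi directory path are placeholders for whatever tool and project you actually use, not a prescription:

```python
# Minimal sketch: ask an LLM to flag potential vulnerabilities in each
# source file of a project. Assumes the OpenAI Python client is installed
# and OPENAI_API_KEY is set; the model name and prompt are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are a security reviewer. List any potential vulnerabilities in the "
    "following Python file (injection, auth bypass, SSRF, path traversal, "
    "unsafe deserialization, etc.) with the relevant lines and a short "
    "explanation. If nothing stands out, say so."
)

# "pyfedi" here is just the checked-out project directory.
for path in Path("pyfedi").rglob("*.py"):
    code = path.read_text(errors="ignore")
    if not code.strip():
        continue
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": f"# {path}\n{code}"},
        ],
    )
    print(f"=== {path} ===")
    print(response.choices[0].message.content)
```

Anything a pass like this flags still needs a human to confirm it’s a real issue, but it gives maintainers roughly the same starting point an attacker would have.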
People should be aware that we’re now living in a world where the bar for finding vulnerabilities is far lower than it used to be, and that means security must be taken far more seriously.


your conclusion is definitely correct. the good thing is that if these LLMs can be used offensively to find exploits, they can also be used defensively to find (and potentially even fix) the same ones.
I’m still of the opinion that AI is a tool that amplifies the abilities of the user, so there will be a mismatch of capability depending on who’s using it.
with that argument in mind, open source has the potential to benefit hugely, if these tools are used to aid development in that way.
overall, what I think that means (what I hope it means) is that we’ll see a much broader interest in learning secure development as AI research progresses and these tools become more widely used.
I think so as well. These tools work both ways, and a project maintainer is in a much better position to use them effectively than a random attacker, by virtue of having a deep understanding of what the code is doing. An LLM is just a tool that helps you dig through huge volumes of information, like a large codebase, and surface things that might be of interest. You still need a human to understand what it surfaces and to take meaningful action.
Hopefully this kind of stuff does get people thinking about security a bit more, and how LLMs can be used to help surface issues.