I was thinking a bit about the bugs I found in the Piefed codebase yesterday, which led to an emergency fix by the dev that's now been implemented: https://codeberg.org/rimu/pyfedi/commit/093a466935849f27b3ecf2eab159129186320417
The real takeaway for me is that the whole dynamic of how we approach security has changed in ways most people don't appreciate.
It used to take a lot of effort to find exploits in software projects: you'd have to spend a long time familiarizing yourself with the codebase, then comb through the code looking for mistakes that could be exploited. And even to do that, you'd need a good understanding of the protocols and specifications the application relies on.
You basically had to be a domain expert with a deep understanding of how the application works. A random person looking at the source code would have little chance of finding any non-trivial problems, let alone figuring out how to actually exploit them.
And in that world, private disclosure made a lot of sense: you'd done a lot of hard work to find the flaw, and it wasn't easy for somebody else to replicate. This was valuable and dangerous knowledge that had to be communicated responsibly.
But now anybody can throw an LLM at the code, and it'll sniff out vulnerabilities and even explain step by step how to exploit them. The information itself isn't really that valuable anymore: if I can find these problems in a few minutes with an LLM, anybody else can do the same.
I'm not a Python developer, I don't have any deep knowledge of the Python stack used in Piefed, and on my own I'd have zero chance of finding these exploits. But once the LLM identifies them, it's very easy for me to verify that they are indeed real exploits, and to see how they could be used maliciously.
The attacker doesn’t even need to have any deep knowledge of programming because the LLM can guide them through the exploit step by step.
Open source projects are particularly exposed here, since anybody can grab the source and point an LLM at it to look for exploits.
Raising awareness that this is now the state of things is really important, and I'd suggest that running an LLM against the code is minimal due diligence at this point.
Obviously, an LLM vulnerability check is not exhaustive; if it doesn't find anything, that doesn't mean there are no exploits in the code. But anything it does find should absolutely be checked by the developers.
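
To make "minimal due diligence" concrete, here's a rough sketch of what such a pass could look like. Everything in it is illustrative: the OpenAI client is just one example backend, and the model name, prompt, and file selection are all assumptions, not what I actually ran.

```python
# Rough sketch of an LLM due-diligence pass over a codebase.
# Assumptions: the OpenAI Python client is one possible backend,
# the model name and prompt are illustrative, and the project
# layout ("app/") should be adjusted to whatever you're reviewing.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a security reviewer. Identify concrete, exploitable "
    "vulnerabilities in the following code (injection, auth bypass, "
    "path traversal, SSRF, etc.). For each finding, explain how it "
    "could be exploited and suggest a fix."
)

def review_file(path: Path) -> str:
    source = path.read_text(errors="ignore")
    response = client.chat.completions.create(
        model="gpt-4o",  # any strong code-capable model will do
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": f"# {path}\n{source}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Naive single-file pass; a real review would chunk large files
    # and also cover templates, configs, and migrations.
    for py_file in sorted(Path("app").rglob("*.py")):
        print(f"=== {py_file} ===")
        print(review_file(py_file))
```

In practice, an agentic coding tool that can read the whole repository will do better than a file-at-a-time loop like this; the point is just how low the barrier has become.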
People should be aware that we're now living in a world where the bar for finding vulnerabilities is far lower than it used to be. And that means security must be taken far more seriously.


Is this something others agree on as well? Is this how the industry is changing? I don't know what you mean by "takeaway"; what are you basing this lesson on?
I still think it was irresponsible, however. Even if it's very easy, the fact that it hadn't been discovered or abused suggests no one was looking, but posting it publicly made Piefed a target. It also meant people didn't have time to assess and react; everything had to happen immediately, and a fix had to be rushed.
To be blunt, since this comes on the heels of rimu making a huge ass of himself, it looks like (whether that's true or not I can't know) a dunk on rimu with little consideration for how others might be affected or who was put at risk, especially those who don't know the ins and outs of responsible disclosure practices. And your first response being "If the lead developer was a decent human being, I probably would've handled this differently" kind of makes it hard to shake this feeling.
Yes, this whole situation should have been handled in a much more mature manner.
I'm basing this on the demonstration I provided yesterday and the fallout we've seen from it. Piefed had to go down for maintenance as a result of these vulnerabilities (https://lemmy.ml/post/47393443), and I linked to the dev applying the fix in the post.
These were real security issues that anybody could find and exploit with very little effort or programming knowledge. We also don't know that they hadn't been discovered or abused before I surfaced them. If a vulnerability is trivial to find and exploit, it should very much be assumed that people are doing so.
Rimu being an ass was a contributing factor that motivated me to look at the codebase. But having reflected on it, I stand by my position that raising awareness of the issue and warning people federating with Piefed is far more important in this kind of scenario. If this were a difficult-to-find exploit that nobody could reasonably be expected to have access to in the wild, the calculus would be to notify the maintainer and let them fix it quietly on their own time. However, when the likelihood is that people are already exploiting the flaw because the bar for discovering it is so low, warning the public becomes the bigger concern.