☆ Yσɠƚԋσʂ ☆

  • 5.13K Posts
  • 8.11K Comments
Joined 6 years ago
Cake day: January 18th, 2020

  • I’m not talking about a scenario of a hypothetical exploit here. I’m talking about a concrete scenario where somebody finds an exploit and verifies it. In that case, people operating the software need to be aware of the vulnerability in the application they are running. Since the exploit is very easy to find, it should be assumed that malicious people would have found it as well.

    You’re arguing against a case where you have an unverified exploit that an LLM might’ve hallucinated. That is not the case I’m describing, and it provably did not happen in my case, as is clearly evidenced by the fix the dev had to make in their server.





  • I think so as well. These tools work both ways, and a project maintainer is in a much better position to use them effectively than a random attacker, by virtue of having a deep understanding of what the code is doing. An LLM is just a tool that helps you dig through huge volumes of information, like a large codebase, and surface things that might be of interest. You still need a human to understand what it surfaces and to take meaningful action.

    Hopefully this kind of stuff does get people thinking about security a bit more, and how LLMs can be used to help surface issues.




  • I’m basing this on the demonstration I provided yesterday and the fallout we saw from it. Piefed had to go down for maintenance as a result of these vulnerabilities https://lemmy.ml/post/47393443 and I linked to the dev applying the fix in the post.

    These were real security issues that anybody could find and exploit with very little effort or programming knowledge. We also don’t know that they hadn’t already been discovered or abused before I surfaced them. If there is a vulnerability that’s trivial to find and exploit, it should very much be assumed that people are doing so.

    Rimu being an ass was a contributing factor that motivated me to look at the codebase. But having reflected on it, I stand by my position that raising awareness of the issue and warning people federating with piefed is far more important in this kind of scenario. If this were a difficult-to-find exploit that nobody could reasonably be expected to have stumbled on in the wild, then the calculus would be to notify the maintainer and let them fix it quietly on their own time. However, when the bar for discovering the flaw is so low that people are likely already exploiting it, warning the public becomes the bigger concern.