I don’t know where we are on ethnic cleansing when the current Head of the Office of the President of Ukraine keeps a notebook of subraces. I guess we’ll never know what this could possibly mean.


It would all depend on the rate of false positives, and as you say, we’d have to wait and see how this plays out. At the very least, what I’d want people to take away from this is that project maintainers absolutely should be using these tools themselves. They’re the people who are in the best position to decide whether something is a real issue, and it’s better to be safe than sorry here.
Sure buddy, the president saying the regime aims to commit ethnic cleansing while they openly attack civilians doesn’t prove shit. You keep on telling yourself that. I also love how you immediately rush to dehumanizing the victims as ‘pro-Russian militants’. Oh and of course, we can just look at what’s happening in Ukraine today, so I can only assume your firm support must be over shared values https://nv.ua/ukr/lifestyle/spisok-pidarasiv-u-budanova-pomitili-nezvichniy-bloknot-50603357.html
You’re right, Poroshenko saying they will live in cellars and Ukraine bombing the civilian population since 2014 absolutely don’t say ethnic cleansing. There’s a literal statement of intent there to remove the Russian-speaking population. And then there’s a genocide case at the UN. I bet you haven’t even opened a single link before writing your vapid comment.


truly


I’m not talking about a hypothetical exploit here. I’m talking about a concrete scenario where somebody finds an exploit and verifies it. In that case, people operating the software need to be aware of the vulnerability in the application they are running. Since the exploit is very easy to find, it should be assumed that malicious people will have found it as well.
You’re arguing against a case where you have an unverified exploit that an LLM might’ve hallucinated. That’s not the case I’m describing. And this provably did not happen in my case, as is clearly evidenced by the fix the dev had to make in their server.
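That verification step is the whole point. As a minimal sketch of the difference between an LLM *report* and a *verified* exploit (a generic SQLite example I made up for illustration, not the actual pyfedi code): you run a proof of concept and watch it succeed.

```python
import sqlite3

# Hypothetical function of the kind an LLM might flag: user input is
# interpolated straight into the SQL string.
def find_user(db: sqlite3.Connection, name: str):
    return db.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Verification: this input matches no real user, yet every row comes back.
# At this point the finding is no longer a maybe-hallucinated report.
rows = find_user(db, "nobody' OR '1'='1")
print(len(rows))  # 2 -- the injection is real
```

Once the proof of concept runs, the hallucination objection is off the table: the behavior is reproducible by anyone.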
allegations is doing a lot of heavy lifting there, meanwhile what the Ukrainian regime has been up to is very well documented:
My advice would be to just go for it. If you find a bug and fix it or add a useful feature, it’s absolutely worth submitting. And collaborating with other devs will help you grow your skills a lot faster.


I think so as well. These tools work both ways, and a project maintainer is in a much better position to use them effectively than a random attacker, by virtue of having a deep understanding of what the code is doing. An LLM is just a tool that helps you dig through huge volumes of information, like a large codebase, and surface things that might be of interest. You still need a human to understand what it surfaces and to take meaningful action.
Hopefully this kind of stuff does get people thinking about security a bit more, and how LLMs can be used to help surface issues.
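As a rough sketch of what “maintainers running these tools themselves” could look like — `ask_llm` here is a hypothetical stand-in for whatever model or API you’d actually call, and the key design point is that the output is a triage queue for a human, not a list of confirmed bugs:

```python
from pathlib import Path

PROMPT = (
    "You maintain this project. List any security-relevant problems in "
    "the following file, or reply NONE:\n\n{code}"
)

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever model you actually use;
    # stubbed out here so the sketch is self-contained.
    return "NONE"

def surface_candidates(root: str) -> list[tuple[str, str]]:
    """Run every Python file past the model and collect non-trivial answers.

    The result is a triage queue: a human who understands the codebase
    still has to verify each item before it counts as a real finding.
    """
    findings = []
    for path in Path(root).rglob("*.py"):
        answer = ask_llm(PROMPT.format(code=path.read_text()))
        if answer.strip() != "NONE":
            findings.append((str(path), answer))
    return findings
```

The maintainer’s advantage shows up in the last step: they can dismiss false positives quickly and recognize which surfaced items actually matter.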
Yes, it’s very interesting how you’re completely fine with the ethnic cleansing the fascist regime has been doing in Donbas since 2014.
I find that a lot of people in tech end up with imposter syndrome like this, but the reality is that most code in the wild is really terrible.


I’m basing this on the demonstration I provided yesterday and the fallout we’ve seen from it. Piefed had to go down for maintenance as a result of these vulnerabilities https://lemmy.ml/post/47393443 and I linked to the dev having to apply the fix in the post.
These were real security issues that anybody could find and exploit with very little effort or programming knowledge. We also don’t know that they hadn’t already been discovered or abused before I surfaced them. If there is a vulnerability that’s trivial to find and exploit, it should very much be assumed that people are doing so.
Rimu being an ass was a contributing factor that motivated me to look at the codebase. But having reflected on it, I stand by my position that raising awareness of the issue and warning people federating with piefed is far more important in this kind of scenario. If this were a difficult-to-find exploit that nobody could reasonably be expected to have access to in the wild, then the calculus would be to notify the maintainer and let them fix it quietly on their own time. However, when the likelihood is that people are already exploiting it because the bar for discovering the flaw is so low, warning the public becomes the bigger concern.
when you definitely understand how allegory works
flies are drawn to manure
When you ban everybody who disagrees with you, that makes it hard to communicate with you losers. Warning people who are federating with your malware instance is a public service though.
There was no valuable secret information here, literally anybody with access to an LLM could find this trivially. The fact is that your ‘devs’ didn’t bother doing even minimal due diligence here. I guess you can’t expect fascists to be competent.
I threw an LLM at the pyfedi code yesterday and it found a whole bunch of catastrophic security problems, so they had to take the server down and actually fix their shitty code. Piefed is complete amateur hour.
https://lemmy.ml/post/47393443
https://codeberg.org/rimu/pyfedi/commit/093a466935849f27b3ecf2eab159129186320417
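For anyone curious about the general shape of this class of fix (an illustration I made up, not the actual pyfedi change in the commit above): the trivially-findable stuff tends to boil down to trusting user input, and the fix is to stop treating that input as code. With SQL, for instance, that means placeholders instead of string-building:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id INTEGER, author TEXT)")
db.execute("INSERT INTO posts VALUES (1, 'alice')")

def posts_by(db: sqlite3.Connection, author: str):
    # The '?' placeholder keeps user input as data, never as SQL,
    # so an injection payload simply matches nothing.
    return db.execute("SELECT id FROM posts WHERE author = ?", (author,)).fetchall()

print(posts_by(db, "alice"))         # [(1,)]
print(posts_by(db, "x' OR '1'='1"))  # [] -- the payload is just a weird name now
```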
Ah yes, my best effort of spending a whole 5 minutes of my time showing how your codebase is a shitshow with zero consideration for security. Be thankful that it was me who found and published these problems, and not somebody actually malicious who found them first and exploited them.
Wait till you find out that the human brain is also a non-deterministic system.