

Confirmed or not, get better sources. Equipment damaged in a war is completely plausible and even inevitable. Propaganda is also inevitable, from either side of a conflict. In this case, an Indian source has proxied state news meant for Iranians.
Trying to sort their mix of AI-slop posts from legitimate, unbiased news isn’t worth my time, even when the news is proxied through another source. (Indian news is generally shit as well; I just ignore it by default. If you actually want extreme, sensationalized trash, then good on you.)












Most of this is just marketing crap from Anthropic.
Finding vulnerabilities in code and generating complex, multistep exploits with publicly available models is possible now. The biggest hurdles are setting the right context and actually knowing what to look for. Any “guardrails” against this behavior are easily bypassed by framing the vulnerability detection and exploit generation as a legitimate development question, even in the most difficult cases.
They likely just trained a model without guardrails in this case.
What they are doing here is over-hyping a problem and framing it as if they are the only ones with a solution. LLM security issues are simply more in focus now that companies have dumped a ton of resources into building AI systems they don’t really understand.