Well, without Israelis causing trouble, the Saudis and Iranians would still be jousting in proxy wars across the region, Turkiye would still be trying to kill the Kurds, and Syria would still be locked in civil war.
Maybe I’m ignorant or thinking of older conflicts, but isn’t all of the above still happening?
You should look closer at who is making those claims that "AI" is an extinction threat to humanity. It isn't the researchers who work on ethics and safety (not to be confused with "AI safety" as part of "alignment"). It's the people building the models and the investors behind them. Why would they build and invest in something they believe would kill us?
AI doomers try to do two things: 1. Make "AI"/LLMs appear far more powerful than they actually are. 2. Distract from the actual threats and issues with LLMs/"AI", which are societal and ethical: copyright, and the fact that these are not trustworthy systems at all. Because admitting to those makes it a really hard sell.