Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic.
The announcement came after Anthropic executives refused to comply with the government's demands over model use; they wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of Americans.
Anthropic's models are still being used to support U.S. military operations in Iran, even after the Trump administration's announcement, as CNBC previously reported.
Dumb question, but…is Claude worse than GPT or Gemini?
I was under the impression that it was the lesser of evils
They are the lesser of the available evils. Anthropic, the proprietors of Claude, were blacklisted by the US administration for refusing to greenlight their technology being used for fascism.
Anthropic’s AI system was used to target the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/
The company is suing to be able to supply the US military again.
There are many lesser evils. Use open source/weight AI like Kimi, GLM, Deepseek, Mistral, Olmo, Arcee, Minimax, Qwen, Exaone, NVidia, Sarvam…
If you don't have the hardware to run them locally, you can pay for API access. If you find the company problematic for whatever reason, you can switch to the same model served by a third party (possible because the model weights are publicly released).
Or you could just not use LLMs. Fuck AI.
Other than wanting a verbose answer to a question, what is it for?
It is better than GPT and Gemini, but not great. Anthropic has lost some US military contracts, at least as far as public knowledge goes.
https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html
Less of the evils. That said, as far as quality goes, Claude has taken a very noticeable decline within the past several months. It used to be half decent, but now 8 or 9 times out of 10 you're going to get a hallucination as a solution. Anthropic has REALLY dropped the ball with Claude and Claude Code. Absolute garbage LLM now.
In what manner? Capabilities, or belonging to an evil corporation that happily steals data and works to undermine democracy?
No. Many people here just hate LLMs in general and will use every opportunity to complain about them.
Personally I dislike how helpless and useless it makes my colleagues in research.
No thought given: they use the first web result (and in most cases just accept the AI output as search gospel).
In my case it's only used for very obscure issue descriptions my google-fu isn't sufficient for, or for correlating weird bugs with each other.
I’d say 99.9% of people. You’re actually the first other person I’ve seen who doesn’t!
Yeesh, get out more.
I hate AI, and junior devs pushing ai dente spaghetti code into my project.