The whole article is fixated on Grok being far right and never seems to care that an LLM is citing another LLM instead of an actual source.
I disagree. The focus of the article is misinformation. It’s literally the first sentence:
The latest model of ChatGPT has begun to cite Elon Musk’s Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.
Yes, Grokipedia is right-wing. It was literally created to alter reality and spread lies that agree with their worldview! But the real problem is that it can’t be edited with sourced, fact-based information; instead, AI generates everything. I think the article did explore the fact that it’s one LLM depending on another…
If there was ever a difference between being far-right and spreading disinformation, there isn’t one anymore.
Half the web is going to be LLM output soon.
I mean, if they’re going to cause model collapse anyway, might as well go full throttle.
AI pollution happening in real time
LLMs are good for searching technical documentation. And that’s it. They are barely usable outside that niche.
Stop using them for “humanitarian” purposes. They can’t be a psychologist, a lawyer, or anything similar.
LET IT DIE, LET IT DIE! LET IT SHRIVEL UP AND DIE!