The Internet being mostly broken at this point is driving me a little insane, and I can’t believe that people who have the power to keep a functioning search engine for themselves wouldn’t go ahead and do it.
I wonder about this every time I see people(?) crowing about how amazing AI is. Like, is there some secret useful AI out there that plebs like me don’t get to use? Because otherwise, huh?
There’s something important to understand about LLMs: imagine them as a crowd of 1,000 people that you run an algorithm over to get close to the most popular opinion on the answer.
No. They’re drinking their own Kool-Aid.
They’ve offloaded what little thinking they did to LLMs (not that LLMs can think, but in this case it makes no difference), and at this point would no longer be able to function if they had to think for themselves.
Don’t think of them as human people with human needs.
They’re mere parasites, all higher functions withered away through lack of use, now more than ever.
They could die and be replaced by their chatbots, and we wouldn’t notice a difference.
They do have unshittified versions of their LLMs on aistudio.google.com where they are not bound by an ultra long system prompt.
Not the same. In the early days of ChatGPT it would cheerfully tell you how to make a bomb.
Yes.
Without question.
Nah.
Another thing to mention: YOUTUBE. The search bar doesn’t even do anything; it shows RECOMMENDATIONS instead of answers to the search.
Paying doesn’t even stop that! It’s actually maddening
Even if I put the URL for a specific video into YouTube on my TV, it doesn’t find it.
Add before:2027 to the end of your search for a much more palatable experience.
How do you use the search bar in youtube? I put in topics or keywords, which seem to work just fine.
Are you putting in whole questions? I’m not sure the search function is designed to work like that.
I saw a cool video here once. Typed the EXACT title into YouTube (including caps in the right words) and it didn’t show up. Only the big channels around that topic.
Personal rant that is still related, but not needed
Hell, I play Trackmania Turbo. It’s still getting new videos from the community, 3-5 videos a week. Look it up, and some of the first results are a non-Turbo player who uploaded 1 video 4 years ago. But since that channel is getting millions of views, THAT video is promoted, not the fans still playing now.
Unlisted videos don’t show up in the search as far as I know.
YouTube doesn’t have a search bar. It’s for requesting recommendations on the feed. Search an obscure singer once in your life? For the next 6 months he will be present in your feed.
If you need actual search, you have to use NewPipe or similar alternatives.
Probably controversial but Youtube is my least favourite search because it doesn’t tie in to your Google search at all. Like you search something on Google but YT doesn’t know that so the results are completely different. I WANT it to be fed my normal search history for context, what even is the point of having an interconnected ecosystem and being logged in to Google? Otherwise I’d just stick to DDG
I assume all high-level positions in tech companies are using better versions than what they shove out to the rest.
I mean, Microsoft treats Enterprise users with class with Windows 10 Enterprise. That version doesn’t have nearly the amount of bloat that even Professional has. Hell, Enterprise doesn’t even have that stupid online search function.
So it’s like they KNOW they’ve greenlit some shitty ideas, but they won’t deal with it, so why not just throw it all onto others and make their experiences miserable?
The LLM? Yes, actually, and it’s not secret:
The “preview” versions are often pretty good, before Google deep-fries them with sycophantic RLHF. For example, Gemini 2.0 and 2.5 Pro both peaked in temporary experimental versions, before getting worse (and benchmark-maxxed) in subsequent updates.
But if you really want unenshittified LLMs, look into open-weights models like GLM. They’re useful tools, and locally runnable. They are kind of a “secret useful AI out there that plebs don’t get to use,” because the software finickiness and hardware requirements of running them locally make them difficult.
On top of that, Google employees probably have access to “teacher” models and such that the public never gets to see.
For search? IDK. I’m less familiar with what Google does internally, but honestly, from what I’ve read, the higher-ups are drinking Kool-Aid and assert all is fine.
I don’t think any of these tech execs (all execs?) use their products. They all have assistants to do everything for them, so they have no idea what this whole “internet” thing is, other than that it makes them money.
Oh they know how to get to the porn.
No, porn is for poor people. These people can (and do) easily afford the live action version.
Thot-on-retainer
Pretty sure they don’t bother with just photos, I’m sure there’s some guy that’s replaced the one that died in prison.
Pretty sure they don’t bother with just photos, I’m sure there’s some guy that’s replaced the one that ~~died~~ was murdered in prison. FTFY
is this talking about Epstein?
mmhmm.
Once you get above a certain value in your bank account, laws stop applying to you unless you do something so catastrophically stupid as to threaten the whole scheme; then you’re hung out as a sacrificial lamb to protect the others.
I guarantee Epstein’s cartel never died. Its headquarters just moved, and the leader changed. That’s all.
The only times laws affect rich people are when they screw over other rich people. Elizabeth Holmes didn’t get in trouble for selling incorrect blood tests to patients. She got in trouble for defrauding other rich people.
Yeah, they just said that their assistants handle everything. Little bit of mood lighting, little bit of a new BMW, little bit of a cover up and hush money. All expensed to the company. Ahh, the good life.
They probably use exclusive paid sites. Maybe even … very exclusive.
They probably use hookers? Porn is for poor people.
1k USD? That is actually cheaper than what I would expect in these circles…
Those assholes probably kept a working version of Inbox for themselves. 😡
What did Inbox improve on?
There is Kagi for the rest of us
Yeah this is what I was thinking, they use Kagi or something like it.
I wish they also had an LLM. I need a paid backup for when ChatGPT enshittifies, one that won’t eventually extort me like the VC-backed corpos will.
They do offer access to ALL the LLMs. You could pay $20 to OpenAI for only ChatGPT 5, or you could pay $25 to Kagi for unlimited Kagi search and access to premium models for ChatGPT, Claude, Gemini, Mistral, Kimi, and others. If you do annual, I think it’s cheaper.
Are you talking about Lumo or is there another way to access their models?
It’s called Kagi Assistant. You can choose any model. For premium models, you need to upgrade.
Lumo is a proton thing, but they use Mistral models, and you can use those models with Kagi.
…you literally just click the menu and choose the model you want? Or is my UI different than others?
Likely a different UI

The problem with conspiracy theories like this is that they assume the people doing the conspiracy are competent.
every time I see people(?) crowing about how amazing AI is
You’re correct that there’s a massive flood of bots pushing it everywhere. But regardless of what the subject is, once someone has “bought in” to a scam they tend to stick with it and defend it no matter what. Because the alternative is admitting they were fooled, and that’s basically an uncrossable bridge for most folks.
People on their literal deathbeds were using their literal last words (before being intubated with COVID) to threaten nurses not to go near them with “the jab”. So it really doesn’t surprise me that people continue using “AI” despite it being worse than worthless for literally everything.
people continue using “AI” despite it being worse than worthless for literally everything
On the contrary, AI’s been extraordinarily useful for me this year. BUT-- I try my best to understand its ins and outs… i.e., where it’s most accurate, and when it’s most prone to hallucinating confidently incorrect replies.
Pretty much any tool has a narrow range of uses, and is useless for everything else. So it kinda makes sense to me that a ‘do all’ tool would naturally have plenty of flaws in its early stages.
Yeah, we’re 3 years past when the scam artists claimed this shit would have already evolved out of its early stages. It ain’t happening.
It takes no effort at all to understand the ins and outs, btw. It’s “accurate” for the most abundantly well-documented problems you used to solve in half the time by just copy/pasting from Stack Overflow. The rest of the time it contradicts the advice your mom’s doctor gave her. Sooooo useful, wow. But sure, shoot your shot, I’d love to hear about how you used it to build a grocery list app or whatever you’re so excited about.
Huh, well aren’t you absolutely dripping with cynicism.
Anyway, GPT5.1 and 5.2 have been hugely useful to me across a variety of topics, mainly functioning as a sort of enhanced search engine. For example, nuances that would take me loads of searches and a lot of time to fully explore, GPT can typically pull together in a coherent way in only seconds.
Probably the biggest use of all has been helping with issues that come up while learning French, but it’s also helped me with graphics tasks that would have taken me ages in GIMP, helped ID various unknown animals, helped with stats analysis in sports, and easily 2-3 dozen other things I’ve needed help with in the past year.
So you can hang on to your sourpuss attitude all you like, while I reap the growing number of benefits AI has offered me. And no-- that doesn’t mean I love AI or don’t see its very real dangers down the road to human workforces. That stuff I’m indeed very concerned about in the late-stage capitalist reality we live in.
Okay so this time it’s “I like to fuck around with irrelevant sports statistics and I’m really excited to eventually humiliate myself the first time I try my French with a native speaker”
Listen. Thanks for sharing. I genuinely mean that btw I’m aware I’m being abrasive but you’re putting out your perspective in good faith and I appreciate that. That being said I obviously strongly disagree with any of what you’ve listed being a “benefit”, and I especially caution you against considering SlopGPT to be a “nuanced” source of information for your search queries.
There is a difference between believing you have benefitted and actually benefitting. It’s astonishing to me that you would be concerned about the capitalists yet trust them so adamantly in this moment to manage your very relationship with information. Regardless, I hope you have a happy and safe new year. Cheers
…yet trust them so adamantly in this moment to manage your very relationship with information.
That simply isn’t the case, whether or not you choose to believe it, or tot it up on the massive axe you obviously have to grind in these matters. For whatever reason, you’re choosing to lump me into a group of people I never belonged to in the first place, which is a -you- problem, not mine.
And no, GPT is certainly not my primary learning aid for French. In fact, it’s one of about half a dozen tools I use, which I’m constantly cross-checking against each other for accuracy, forming an overall highly useful ‘teacher’ of sorts. So when you lead with that pedantic little bit of fluff, what you’re really telling me is that you have no idea what you’re talking about. You seem to see these things in highly binary terms, once again a “you” problem.
Yeah sure, bonne année and cheers, mate.
“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.” ― Carl Sagan
Yeah, that’s probably the case. But hell, the bots are working. They got me second-guessing myself and wondering if I just haven’t seen the “good” AI that the elites are keeping for themselves.
I fully believe in old.Google.com existing now
Old Google was the tits. Even the Google of five years ago would be hyper-advanced technology today.
This is really the fear we should all have. And I’ve wondered about this specifically in the case of Thiel, who seems quite off their rocker.
Some things we know.
Architecturally, the underpinnings of LLMs existed long before the modern crop. “Attention Is All You Need” is basic reading these days; Google literally invented transformers, but failed to create the first LLM. This is important.
Modern LLMs came through basically two aspects of scaling a transformer. First, massively scale the network. Second, massively scale the training dataset. This is what OpenAI did. What Google missed was that the emergent properties of networks change with scale. But just scaling a large neural network alone isn’t enough. You need enough data to allow it to converge on interesting and useful features.
On the first part, scaling the network: this is basically what we’ve done so far, along with some cleverness around how training data is presented, to create improvements to existing generative models. Larger models are basically better models. There is some nuance here, but not much. There have been no new architectural improvements that have resulted in the kind of order-of-magnitude scaling in improvement we saw in the jump from the LSTM/GAN days to transformers.
Now, what we also know is that what is actually presented to the public is incredibly opaque. Some open-source models are in the range of hundreds of billions of parameters; most aren’t that big. I have Qwen3-VL on my local machine; it’s 33 billion parameters. I think I’ve seen some 400B-parameter models in the open-source world, but I haven’t bothered downloading them because I can’t run them. We don’t actually know how many billion parameters models like Opus-4.5 or whatever shit stack OpenAI is sending out these days have. It’s probably in the range of 200B-500B, which we can infer based on the upper limits of what can fit on the most advanced server-grade hardware. Beyond that, it’s MoE: multiple models on multiple GPUs conferring results.
What we haven’t seen is any kind of stepwise, order-of-magnitude improvement since the 3.5-to-4 jump OpenAI made a few years ago. It’s been very… iterative, which is to say underwhelming, since 2023. It’s very clear that an upper limit was reached, and most of the improvements have been around QoL and nice engineering, but nothing has fundamentally or noticeably improved in terms of the underlying quality of these models. That is in and of itself interesting, and there could be several explanations for it.
Getting very far beyond this takes us beyond the hardware limitations of even the most advanced manufacturing we currently have available to us. I think the most a Blackwell card has is ~288GB of VRAM? Now, it might be that at this scale we just don’t have hardware available to even try and look over the hedge to see how a larger model might perform. This is one explanation: we hit the memory limits of hardware, and we might not see a major performance improvement until we get into the TB range of memory on GPUs.
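A quick back-of-envelope on where that 200B-500B inference comes from, counting weights only (real serving also needs KV cache and activations on top, so treat this as a lower bound):

```python
# Weights-only VRAM estimate: params x bytes-per-param.
# Ignores KV cache, activations, and framework overhead.

def weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    # 1e9 params * bytes, divided by 1e9 bytes per GB, cancels out
    return params_billions * bytes_per_param

for params in [33, 200, 500]:
    for label, bpp in [("fp16", 2), ("fp8", 1)]:
        print(f"{params}B @ {label}: {weight_vram_gb(params, bpp):.0f} GB")

# A ~288 GB card tops out around 288B params even at fp8, so anything
# much bigger has to be sharded across GPUs -- which is part of why
# 200B-500B is a plausible guess for a single-card-class dense model.
```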
Another explanation could be that, at the consumer level, they stopped throwing more compute resources at the problem. Remember the MoE thing? Well, these companies, allegedly, are supposed to make money. It’s possible that they just stopped throwing more resources at their product lines, even though more MoE does actually result in better performance.
In the first scenario I outlined, executives would be limited to the same useful but kinda-crappy LLMs we all have access to. In the second scenario, executives might have access to super-powered, high-MoE versions.
If the second scenario is true, and when highly clustered, LLMs can demonstrate an additional stepwise performance improvement, then we’re already fucked. But if this were the case, it’s not like Western companies have a monopoly on GPUs or even models, and we’re not seeing that kind of massive performance bump elsewhere, so it’s likely that MoE also has its limits and they’ve been reached at this point. It’s also possible we’ve reached the limits of the training data: that even having consumed all 400k years of humanity’s output, it’s still too dumb to draw a full glass of wine. I don’t believe this, but it is possible.
Don’t forget the fundamental scaling properties of LLMs, which OpenAI even used as the basis of their strategy to make ChatGPT 3.5.
But basically, LLM performance is logarithmic: it’s easy to get rapid improvements early on, but later points, like where we are now, require exponentially more compute, training data, and model size to get ever-smaller improvements.
Even if we get a 10x in compute, model size, and training data (which is fundamentally finite), the improvements aren’t going to be groundbreaking or solve any of the inherent limitations of the technology.
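For a concrete feel of that, here’s a tiny sketch using the power-law form from the Kaplan et al. scaling-law paper, L(N) = (N_c/N)^α. The constants are the paper’s rough published fits, but treat the numbers as illustrative, not gospel:

```python
# Rough illustration of power-law LLM scaling (Kaplan et al., 2020 form).
# Constants are the paper's approximate fits; outputs are illustrative.

def loss(params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Approximate test loss as a power law in non-embedding parameter count."""
    return (n_c / params) ** alpha

for n in [1e9, 1e10, 1e11, 1e12, 1e13]:
    print(f"{n:.0e} params -> loss {loss(n):.2f}")

# Every 10x in parameters multiplies loss by 10**-0.076 (~16% off),
# so each successive 10x buys a smaller absolute improvement.
```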
Diminishing returns
💯
Scaling is a quick and dirty way to get performance improvements. But there is no guarantee that we get any more interesting behavior, or don’t get, like, wildly more interesting behavior, with an additional 10x’ing. The fact is we simply don’t know what emergent properties might exist at a network size we physically can’t scale to right now. It’s important not to assume things like “it’s just going to be diminishing returns,” because while that is most likely the case, precisely this thinking is why Google wasn’t the first to make an LLM, even though they had discovered/invented the underlying technology. Yet another 10x scaling didn’t result in just diminishing returns, but fundamentally new network properties.
And that principle holds across networked systems (social networks, communication networks, fungal and cellular communication networks). We truly do not know what will result from scaling the complexity of the network. It could move the needle from 95% to 96.5% accuracy. Or it could move it to a range of accuracy that isn’t measurable in human terms (it’s literally more accurate than we have the capability of validating). Or it could go from 95% to 94%. We simply don’t know.
If a small group of people get to outcompete the rest of society thanks to having exclusive access to more powerful LLMs, I’m sure (🤞) that would lead to a lot of unrest. Not only from us plebs, but also from the rest of business. I can see it leading to some lawsuits concluding that AI advancement must be shared (if at a fee) with the public.
Its also possible we’ve reached the limits of the training data.
This is my thinking too. I don’t know how to solve the problem either, because datasets created after about 2022 are likely polluted with LLM results baked in. Even with 95% precision, that means 5% hallucination baked into the dataset. I can’t imagine enough grounding is possible to mitigate that. As the years go forward, the problem only gets worse, because more LLM results will be fed back in as training data.
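To put a toy number on “only gets worse,” here’s a sketch under made-up assumptions: a fixed share of each new generation of training data is LLM output, which inherits the corpus’s current error rate plus fresh hallucinations.

```python
# Toy model of recursive dataset pollution. The 30% LLM share and 5%
# hallucination rate are invented for illustration, not measured values.

def polluted_fraction(generations: int, llm_share: float = 0.3,
                      hallucination_rate: float = 0.05) -> float:
    """Fraction of the corpus that is hallucinated after n feedback generations."""
    bad = 0.0
    for _ in range(generations):
        # LLM output inherits the corpus's current error rate,
        # then adds fresh hallucinations on top.
        llm_error = bad + (1 - bad) * hallucination_rate
        bad = (1 - llm_share) * bad + llm_share * llm_error
    return bad

for g in [1, 3, 5, 10]:
    print(f"after {g} generations: {polluted_fraction(g):.1%} polluted")
```

Under these assumptions the polluted fraction only ever climbs; the fixed point of the update is 100%.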
I mean, that’s possible, but I’m not as worried about that. Yes, it would make future models worse. But it’s also entirely plausible to just cultivate a better dataset. And even small datasets can be used to make models that are far better at specific tasks than any generalist LLM. If better data is better, then the solution is simple: use human labor to cultivate a highly curated, high-quality dataset. I mean, it’s what we’ve been doing for decades in ML.
I think the bigger issue is that transformers are incredibly inefficient in their use of data. How big a corpus do you need to feed into an LLM to get it to solve a y = mx + b problem? Compare that to a simple neural network or a random forest. For domain-specific tasks they’re absurdly inefficient. I do think we’ll see architectural improvements, and while the consequences of improvements have been non-linear, the improvements themselves have been fairly, well, linear.
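To make the data-efficiency comparison concrete: a plain least-squares fit recovers y = mx + b from ten noisy points, no corpus required. A toy sketch:

```python
# Ordinary least squares recovers a y = m*x + b relationship from a
# handful of noisy samples -- contrast with the corpus an LLM digests.
import numpy as np

rng = np.random.default_rng(0)
m_true, b_true = 3.0, -1.5

x = rng.uniform(-5, 5, size=10)                      # just ten samples
y = m_true * x + b_true + rng.normal(0, 0.1, size=10)

# Solve for [m, b] against the design matrix [x, 1].
A = np.stack([x, np.ones_like(x)], axis=1)
(m_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"estimated m={m_hat:.2f}, b={b_hat:.2f}")     # ~3.00, ~-1.50
```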
Before transformers, we basically had GANs and LSTMs as the latest and greatest. And before that, UNET was the latest and greatest (and I still go back to it, often), and before that, basic NNs and random forests. I do think we’ll get some stepwise improvements to machine learning, and we’re about due for some. But it’s not going to be tinkering at the edges. It’s going to be something different.
The only thing I’m truly worried about is what happens if, even if it’s unlikely, you can just 10x the size of an existing transformer (say from 500 billion parameters to 5 trillion, something you would need like a terabyte of VRAM to even process), and that results in totally new characteristics, in the same way that scaling from 100 million parameters to 10 billion resulted in something that, apparently, understood the rules of language. There are real land mines out there that none of us as individuals have the ability to avoid. But the “poisoning” of the data? If history tells us anything, it’s that if a capitalist thinks it might be profitable, they’ll throw any amount of human suffering at it to try and accomplish it.
If model collapse is such an issue for LLMs, then why are humans resistant to it? We are largely trained on output created by other humans.
Are we? Antivax, anti-science BS is largely due to Russia poisoning our dataset.
Thanks for the detailed answer!
It doesn’t take a lot of tech skill to be an exec at a tech company. My guess is they fall into the same category as those who see the AI overview on Google, think “wow, this is so much better,” and never second-guess the results.