The entire US economy is currently being propped up by growth in the AI/tech sector. And I am convinced that LLMs are fundamentally incapable of delivering on the promises being made by the AI CEOs. That means there is a massive bubble that will eventually burst, probably taking the whole US economy with it.
Let’s say, for the sake of argument, that I’m a typical American. I work a job for a wage, but I’m mostly living paycheck to paycheck. I have maybe a little savings and a retirement account with a little bit in it, but certainly not enough to retire anytime in the near future.
To what extent is it possible for someone like me, who doesn’t buy into the AI hype, to insulate themselves from the negative impact of the eventual collapse?
I have no advice, but I’ve been thinking the same way. I like LLMs, I use LLMs, but the “shove an LLM into every product and call it more valuable” approach is not sustainable, and it will fail. Hopefully not as a full-on, economy-collapsing bubble burst, but it’s only a matter of time (I’d guess a year, tops) until companies have to start admitting losses and investors start retreating.
Hopefully someone with decent economic knowledge will drop some advice, but frankly I doubt anyone can do much better than guess (or parrot old advice) at what will be least impacted. Intuitively, tech stocks are the ones that will get hurt and manufacturing might stay more stable, but it’s all such a complicated web of interdependency that who knows.
It also depends what you mean by “hurt.”
Nvidia/AMD stocks, for example, are going to drop like meteors, but the companies themselves will be fine. They’re like the picks-and-shovels makers of the California Gold Rush; they’ve made their piles of cash and will go back to business as usual. Hence I’m not selling my long-held AMD, even though I’m certainly not buying more. Yet.
And some totally unrelated companies may be disproportionately hurt by the pullback of a serious recession.
One comment I will make on LLMs specifically: it’s more of a “race to the bottom” than you’d think. Between sparsity research (with the recent MoE trend being a rather crude stopgap, if you ask me), alternative attention schemes, finetuning advances, computing shifts like BitNet all in the pipeline, and all the open models coming out of China and elsewhere, well…
The end point feels like local inference of specialized, freely licensed models: useful, niche tools, not superintelligence.
They’re low power and basically free, hence seemingly inevitable.
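To make the BitNet point concrete, here’s a minimal NumPy sketch of the absmean ternary quantizer described in the BitNet b1.58 paper. The function name and toy matrices are mine, and real implementations also quantize activations and use fused low-bit kernels, so treat this as an illustration of the idea, not production code:

```python
import numpy as np

def absmean_ternary(w, eps=1e-8):
    # BitNet b1.58-style absmean quantization: scale by the mean
    # absolute weight, then round and clip every weight to {-1, 0, +1}.
    gamma = np.abs(w).mean()
    w_q = np.clip(np.round(w / (gamma + eps)), -1, 1).astype(np.int8)
    return w_q, gamma  # dequantized weight is approximately gamma * w_q

# Toy demo: quantize a random weight matrix and compare outputs.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
x = rng.normal(size=4).astype(np.float32)

w_q, gamma = absmean_ternary(w)
print(w_q)                                   # ternary weights
print(w @ x)                                 # full-precision output
print(gamma * (w_q.astype(np.float32) @ x))  # 1.58-bit approximation
```

With weights restricted to {-1, 0, +1}, the matrix multiply degenerates into additions and subtractions, which is where the “low power, basically free” economics of this class of model comes from.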
That’s utterly apocalyptic if you’re someone like Sam Altman or Jeff Bezos. OpenAI can’t make money off that, which is why they’re lobbying to kill it and preaching infinite scaling that won’t work.