The crypto bubble is the AI bubble. AI was the answer to “what are we going to do with all these chips and servers now that crypto crashed?”
Bitcoin needs ASICs to run it. GPUs haven’t been viable there for a long time, and ASICs aren’t useful for anything other than the thing they’re designed to do.
Ethereum used GPUs until it went proof-of-stake, but it was always smaller than Bitcoin.
Nothing else is big enough to have caused a bubble.
Most of the AI training is being done in brand new datacenters on brand new GPUs. Those Ethereum GPUs mostly got dumped on eBay.
Actually…
Crypto mining requires high compute throughput.
LLMs require high VRAM.
Very different objectives and hardware in mind.
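For a rough sense of why the two workloads land on different hardware, here's a back-of-envelope sketch. The 70B-parameter model, fp16 precision, 24 GB card, and Bitcoin-style SHA-256 mining are all illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison of what each workload actually stresses.
# All numbers here are illustrative assumptions, not benchmarks.

# --- LLM inference: dominated by memory capacity / bandwidth ---
params = 70e9          # assume a 70B-parameter model
bytes_per_param = 2    # fp16 weights
weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB of VRAM")   # ~140 GB

gpu_vram_gb = 24       # assume a typical mining-era consumer GPU
print(f"GPUs needed just to hold the weights: {weights_gb / gpu_vram_gb:.1f}")

# --- Bitcoin-style proof-of-work: dominated by raw hash throughput ---
# Each SHA-256 attempt touches only an 80-byte block header plus a small
# amount of hash state, so memory capacity is largely irrelevant; the
# metric that matters is hashes per second per watt, which is why
# fixed-function ASICs win there.
header_bytes = 80
print(f"Working set per hash attempt: ~{header_bytes} bytes")
```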
The bubble isn’t just the thing itself, it’s all the services and infrastructure that built up around it. Crypto doesn’t have to be awful, it’s just a tool, a ledger. But it exists in its current form because of what it creates, what kind of economic activity it stimulates. My point is that all the grifty-ass economic hype activity around crypto just moved to AI.
But I do appreciate the clarification; it gives me a chance to refine my perspective.
Running the model? Sure. Training the model still requires high compute throughput.
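To put training in perspective, here's a rough sketch using the common ~6 × parameters × tokens FLOPs rule of thumb. The model size, token count, per-GPU throughput, and cluster size are all illustrative assumptions:

```python
# Rough training-compute estimate using the ~6 * N * D heuristic
# (total FLOPs ≈ 6 × parameter count × training tokens).
# Figures are illustrative assumptions, not vendor numbers.

params = 70e9            # assume a 70B-parameter model
tokens = 2e12            # assume ~2 trillion training tokens
total_flops = 6 * params * tokens

gpu_flops = 300e12       # assume ~300 TFLOP/s sustained per accelerator
gpus = 1000              # assume a 1,000-GPU cluster

seconds = total_flops / (gpu_flops * gpus)
days = seconds / 86400
print(f"Total compute: ~{total_flops:.1e} FLOPs")
print(f"Wall-clock on the assumed cluster: ~{days:.0f} days")
```

Even on those generous assumptions, training is weeks of sustained full-throughput compute across a whole cluster, which is why it gets done on new datacenter GPUs rather than secondhand mining cards.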