I recently learned that Britain is spending £36 million to upgrade a supercomputer:

https://www.bbc.com/news/articles/c79rjg3yqn3o
Can’t you buy a very powerful gaming computer for only $6000?
CPU: AMD R9 9950X3D
Graphics: Nvidia RTX 5080 16GB
RAM: 64GB DDR5 6000MHz RGB
https://skytechgaming.com/product/legacy-4-amd-r9-9950x3d-nvidia-rtx-5090-32gb-64gb-ram-3
This is how this CPU is described by hardware reviewers:
AMD has reinforced its dominance in the CPU market with the 9950X3D; it appears that no competitor will be able to challenge that position in the near future.
https://www.techpowerup.com/review/amd-ryzen-9-9950x3d/29.html
If you want to add some brutal CPU horsepower to your PC, then this 16-core behemoth will certainly get the job done. It is an excellent processor on all fronts, and it has been a while since we have been able to say that in a processor review.
https://www.guru3d.com/review/ryzen-9-9950x3d-review-a-new-level-of-zen-for-gaming-pcs/page-29/
This is the best high-end CPU on the market.
Why would you spend millions on a supercomputer? Have you guys ever used a supercomputer? What for?
That’s not the best on the market. I’m not sure who else sells what, but the Threadripper series is far more powerful, and far more expensive.
Furry ai titties.
An indisputable use-case for supercomputers is the computation of next-day and next-week weather models. By definition, a next-day weather prediction is utterly useless if it takes longer than a day to compute. And it becomes progressively more useful if it can be computed even an hour faster, since that’s more time to warn motorists to stay off the road, more time to plan evacuation routes, more time for farmers to adjust crop management, more time for everything. NOAA in the USA draws in sensor data from all of North America, and since weather is locally-affecting but globally-influenced, this still isn’t enough for a perfect weather model. Even today, there is more data that could be consumed by the models, but isn’t, because doing so would make the predictions take too long. The only solution is to raise the bar yet again, expanding the supercomputers used.
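To put toy numbers on the deadline argument (everything here is made up for illustration, not real forecast timings):

```python
# A forecast is only useful for the time left between when the computation
# finishes and when the weather actually arrives.
FORECAST_HORIZON_H = 24  # a next-day forecast

for compute_hours in (26, 24, 12, 6, 1):
    lead_time = FORECAST_HORIZON_H - compute_hours
    if lead_time <= 0:
        print(f"compute in {compute_hours:>2} h -> useless, the day is already over")
    else:
        print(f"compute in {compute_hours:>2} h -> {lead_time} h of warning time")
```

Every hour shaved off the compute time converts directly into an extra hour of warning time, which is why buying a faster machine for the *same* model is still worth millions.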
Supercomputers are not super because they’re bigger. They are super because they can do gargantuan tasks within the required deadlines.
Also space models and quantum models.
Here’s a vehicle analogy:
If I can get a hypercar for $3 million, why does a freight train cost $32 million? It’s not like it can go faster, and it’s more limited in what it can do.
Galaxy collisions, protein folding, mechanical design and much more. Big simulations of real world physics and chemistry that require massively parallel computation, problems that require insane numbers of calculations running on multiple machines that can pass data to each other very quickly.
Every supercomputing center has a website where you can read about the research being done on their machines.
No, these can’t be done at similar scale on your desktop. Your PC can’t do that many calculations in a reasonable time. Even on supercomputers, they can take weeks.
In molecular biology, they are used, for instance, to calculate/predict protein folding. This in turn is used to create new drugs.
The most high-end AMD 16-core CPU can’t do that 😬?
A supercomputer is not one computer. It’s a big building filled from floor to ceiling with many computers that work together. It wouldn’t have 16 cores; it would have more like thousands of cores, all acting as one big computer on a single computational task.
Oh noooh. The number of permutations needed is mind-boggling, bigger than the shoe collection of Imelda Marcos.
If your gaming computer can do x computations every month, and you need to run a simulation that requires 1000x computations, you can wait 1000 months, or have 1000 computers work on it in parallel and have it done in one month.
Keep in mind that not all workloads scale perfectly. You might have to add 1100 computers due to overhead and other scaling issues. It is still pretty good though, and most of those clusters work on highly parallelised tasks, as they are very well suited for it.
There are other workloads that do not scale at all. Like the old joke in programming: “A project manager is someone who thinks that 9 women can have a child in one month.”
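The overhead point and the “9 women” joke are the two ends of the same curve, usually written down as Amdahl’s law. A minimal sketch (the parallel fractions below are made-up examples):

```python
def speedup(p: float, n: int) -> float:
    """Ideal speedup for a workload whose fraction p is parallel, on n machines."""
    return 1.0 / ((1.0 - p) + p / n)

# A highly parallel simulation still loses a little to its serial part:
print(f"99.9% parallel, 1000 machines: {speedup(0.999, 1000):.0f}x")  # ~500x

# The "9 women" case: almost nothing is parallel, so machines barely help:
print(f"10% parallel, 1000 machines: {speedup(0.10, 1000):.2f}x")     # ~1.11x
```

That’s why supercomputing centers favour the embarrassingly parallel jobs: the serial fraction, not the hardware budget, sets the ceiling.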
A supercomputer isn’t just a single computer; it’s a lot of them networked together to greatly expand how far the calculations can scale. Imagine a huge data center with thousands of racks of hardware, CPUs, GPUs and RAM, all dedicated to managing network traffic for major websites. A supercomputer is very similar, but instead of being built to handle all the ins and outs and complexities of network traffic, it’s purely dedicated to doing as many calculations as possible for a specific task, such as protein folding as someone else mentioned, or something like Pixar’s render farm, which is hundreds of GPUs all networked together dedicated solely to rendering frames.
Given how big and complex any given 3D scene is in a Pixar film, a single GPU might take 10 hours to calculate the light bounces needed to render a single frame. Assuming a 90-minute run time, that’s ~130,000 frames, which is potentially 1,300,000 GPU-hours (or about 150 years) to complete just one full movie render on a single GPU. If you have 2 GPUs working on rendering frames, you’ve cut that down to 650,000 hours. Throw 100 GPUs at the render and you’re at 13,000 hours, or about a year and a half. Pixar is pretty quiet about their numbers, but at least according to the Science Behind Pixar traveling exhibit around the time of Monsters University in 2013, their render farm had about 2,000 machines with 24,000 processing cores, and it took two years’ worth of rendering time to render that movie out. I can only imagine how much bigger their render farm has gotten since then.
Source: https://sciencebehindpixar.org/pipeline/rendering
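The arithmetic above, spelled out (the 10-hours-per-frame figure is the same rough assumption as in the comment, not a Pixar number):

```python
# Back-of-the-envelope render-farm math.
HOURS_PER_FRAME = 10   # assumed GPU-hours to render one frame
FPS = 24               # standard film frame rate
RUNTIME_MIN = 90       # assumed film length in minutes

frames = RUNTIME_MIN * 60 * FPS             # 129,600 frames (~130,000)
total_gpu_hours = frames * HOURS_PER_FRAME  # ~1.3 million GPU-hours

for gpus in (1, 2, 100, 2000):
    hours = total_gpu_hours / gpus
    years = hours / (24 * 365)
    print(f"{gpus:>5} GPUs: {hours:>12,.0f} hours = {years:,.2f} years")
```

With 2,000 GPUs the single-GPU 150-year render collapses to under a month of wall-clock time, which is the whole point of the farm.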
You’re not building a super computer to be able to play Crysis, you’re building a super computer to do lots and lots and lots of math that might take centuries of calculation to do on a single 16 core machine.
I’m a PhD student and several of my classmates use computing clusters in their work. These types of computers typically have a lot of CPUs, GPUs, or both. The types of simulations they do are essentially putting a bunch of atoms or molecules in a box and seeing what happens in order to get information which is impossible to obtain experimentally. Simulating beyond a few nanoseconds in a reasonable amount of time is extremely difficult and requires a lot of compute time. However, there are plenty of other uses.
The clusters we have would have dozens of these CPUs or GPUs and users would submit jobs to it which would run simultaneously. AMD CPUs have better performance than Intel and Nvidia GPUs have Cuda, which is incorporated into a lot of the software people use for these.
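A toy stand-in for that “submit jobs, they run simultaneously” workflow, using a thread pool in place of a real cluster scheduler (on an actual cluster each job would be a whole simulation dispatched by something like a batch queue):

```python
from concurrent.futures import ThreadPoolExecutor

def one_job(seed: int) -> float:
    """Placeholder for one expensive task: one fit, one molecule, one frame."""
    total = 0.0
    for i in range(50_000):
        total += ((seed + i) % 7) * 0.001
    return total

# A cluster has thousands of cores; a laptop pool has a handful.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(one_job, range(8)))

print(f"{len(results)} jobs finished")
```

The jobs are independent, so they need no communication at all — the easiest kind of workload to scale, and exactly what “submitting to the cluster” looks like from the user’s side.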
I’ve personally never used anything more than a desktop, though I might apply for some time soon because I’ve got some datasets where certain fits take up to two days each. I don’t want to sit around for a month waiting for these to finish.
Run Linpack and see how many flops you get.
https://github.com/icl-utk-edu/hpl/

Then compare it to the Top 500 list.

https://www.top500.org/lists/top500/

I bet you are at least 3 orders of magnitude away from the bottom.
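To put rough numbers on that bet (both peak figures below are loose assumptions, not measured Linpack results; a real HPL run lands well below theoretical peak, so the true gap is even wider):

```python
import math

# Rough theoretical peak for a 16-core desktop CPU, assuming ~5.7 GHz
# and 32 double-precision FLOPs per cycle per core (AVX-512 FMA).
desktop_peak_gflops = 16 * 5.7 * 32   # ~2,918 GFLOP/s, i.e. ~2.9 TFLOP/s

# The bottom of recent Top500 lists sits around 2 PFLOP/s measured.
top500_floor_gflops = 2.0e6

gap = math.log10(top500_floor_gflops / desktop_peak_gflops)
print(f"desktop peak ~{desktop_peak_gflops:,.0f} GFLOP/s, "
      f"gap ~{gap:.1f} orders of magnitude (before Linpack efficiency losses)")
```

Even crediting the desktop with its full theoretical peak, it’s nearly three orders of magnitude short of merely *appearing* on the list.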
They are used to solve problems that would take years on a normal gaming PC.
Problems like what?
What are supercomputers used for?
Its used to build a giant bot network that makes brand new accounts to ask questions in various forums.
A huge factor is how much data you can process at a given time. Often, in the end it’s not that complicated per sample of data. But when you need to run on terabytes of data (say, wide-angle telescopes or CERN-style experiments), you need huge computers to simulate your system accurately (how does the size of a glue layer impact the data?) and to process the mountain of data coming out of it.
Nowadays, practically speaking, it’s just a building full of standard computers and software that dispatches the load between the machines (which isn’t trivial, especially when you do massively parallel processing with shared memory).
a supercomputer is usually just a lot of computers on the same (fast) network.
a lot of science and research happens there.
Think of a gaming PC from the 90s, and now imagine asking the same question back then.



