• mindbleach@lemmy.world · 2 years ago

    Gene Amdahl himself was making an argument about hardware. It was never about writing better software - that’s the lesson we’ve clawed out of it, after generations of reinforcing harmful biases against parallelism.

    Telling people a billion cores won’t solve their problem is bad, actually.

    Human beings by default think going faster means making each step faster. How you explain that it’s wrong matters so much more than merely stating that it’s wrong. This approach inevitably leads to saying ‘see, parallelism is a bottleneck.’ If all they hear is that another ten slow cores won’t help but one faster core would - they’re lost.

    That’s how we got needless decades of doggedly linear hardware and software. Operating systems that struggled to count to two whole cores. Games that monopolized one core, did audio on another, and left your other six untouched. We still lionize cycle-juggling maniacs like John Carmack and every Atari programmer. The trap people fall into is seeing a modern GPU and wondering how they can sort their flat-shaded triangles sooner.

    What you need to teach them, what they need to learn, is that the purpose of having a billion cores isn’t to do one thing faster, it’s to do everything at once. Talking about the linear speed of the whole program is the whole problem.
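    As a toy sketch of that point, with made-up numbers: treat one task as irreducibly serial and compare it against a million independent tasks spread across N cores. The task count and timings below are illustrative assumptions, not measurements.

    ```python
    # Toy model: one serial task gains nothing from extra cores, while a
    # pile of independent tasks finishes in time ~ tasks / cores.
    # All numbers here are made up for illustration.

    TASKS = 1_000_000   # independent units of work (assumed)
    TASK_TIME = 1.0     # time per task, arbitrary units

    for cores in (1, 4, 64, 1_000_000):
        one_task = TASK_TIME                   # one serial task: no speedup
        all_tasks = TASKS * TASK_TIME / cores  # "everything at once"
        print(f"{cores:>9,} cores: one task still takes {one_task:g}, "
              f"all {TASKS:,} tasks finish in {all_tasks:g}")
    ```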

  • Spedwell@lemmy.world · 2 years ago
      Amdahl’s isn’t the only scaling law in the books.

      Gustafson’s scaling law looks at how the hypothetical maximum amount of work a computer can perform scales with parallelism. The idea is that for certain tasks, like simulations (or, to your point, even consumer devices to some extent), the workload itself can grow to fully utilize the added hardware, so the extra parallelism is a real improvement.
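      A minimal sketch of that, using the usual statement of Gustafson’s law, S(N) = s + (1 - s)·N, where s is the serial fraction of the runtime on the parallel machine; the 5% serial fraction below is an assumption for illustration:

      ```python
      # Gustafson's law: S(N) = s + (1 - s) * N, where the problem size
      # grows with the core count N. s = 0.05 is an assumed serial fraction.

      def gustafson_speedup(serial_fraction: float, cores: int) -> float:
          """Scaled speedup when the workload grows with the machine."""
          return serial_fraction + (1.0 - serial_fraction) * cores

      for cores in (1, 8, 64, 1024):
          print(f"{cores:>4} cores: scaled speedup "
                f"{gustafson_speedup(0.05, cores):7.2f}x")
      ```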

      Amdahl’s takes a fixed program, considers what portion of it is parallelizable, and tells you the speedup you get from additional parallelism in your hardware.
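      And the same assumed 5% serial fraction through Amdahl’s law, S(N) = 1 / (s + (1 - s)/N), which caps a fixed program’s speedup at 1/s = 20x however many cores you add. At 1024 cores the two sketches give roughly 973x (scaled work) versus 19.6x (fixed program), which is exactly the gap the two laws describe.

      ```python
      # Amdahl's law: S(N) = 1 / (s + (1 - s) / N) for a fixed-size program.
      # With the assumed s = 0.05, speedup can never exceed 1 / 0.05 = 20x.

      def amdahl_speedup(serial_fraction: float, cores: int) -> float:
          """Fixed-workload speedup from N cores."""
          return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

      for cores in (1, 8, 64, 1024):
          print(f"{cores:>4} cores: speedup "
                f"{amdahl_speedup(0.05, cores):5.2f}x (cap: 20x)")
      ```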

      One tells you how much work a processor might do; the other tells you how fast a single program might run. Neither is wrong, but each is an incomplete picture of the colloquial “performance” of a modern device.

      Amdahl’s is the one you find emphasized in a Comp Arch 101 course, because it corrects the intuitive error of assuming that doubling the cores halves the runtime. I only encountered Gustafson’s law in a high-performance architecture course, and it really only holds for certain types of workloads.