  • Microwave scattering is an absolute nightmare over that kind of distance. Even over much shorter runs, microwaves are only really practical to transport a couple of meters through a waveguide.

    If it’s transmitting to a base station, we can assume it’s in geosynchronous orbit, about 22,000 miles from the surface. With a fairly large dish on the satellite, you could probably keep the beam fairly tight until it hit the atmosphere, but that last ~100 miles of air would scatter it like no tomorrow. Clouds and humidity are also a huge problem – water is an exceptionally good absorber across most of the microwave band.

    I saw numbers reported for the transmission efficiency somewhere (will update this if I can find it again), and they were sub-30%. The other 70-plus percent is either boiling clouds on its way down, or missing the receiver on the ground and gently cooking the surrounding area.
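    As a rough sketch of why the beam spread alone is a problem (the 2.45 GHz frequency and 500 m dish below are my own assumed numbers, not anything from the article), the diffraction limit θ ≈ 1.22 λ/D already puts the ground spot in the tens-of-kilometers range from GEO, before you even add atmospheric scattering:

    ```python
    import math

    # Back-of-the-envelope diffraction-limited spot size for a microwave power
    # beam from GEO. All inputs are assumptions for illustration only.
    wavelength = 0.122        # m, ~2.45 GHz (a commonly proposed ISM-band frequency)
    dish_diameter = 500.0     # m, assumed transmitting aperture on the satellite
    distance = 35_786_000.0   # m, GEO altitude (~22,000 miles)

    # Diffraction-limited divergence (Airy-disk approximation)
    theta = 1.22 * wavelength / dish_diameter   # radians

    # Spot diameter on the ground, ignoring scattering and absorption entirely
    spot_diameter = 2 * distance * math.tan(theta)
    print(f"Divergence: {theta * 1e6:.0f} microradians")
    print(f"Ground spot diameter: {spot_diameter / 1000:.1f} km")   # ~21 km
    ```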


  • I was trying out LabPlot yesterday, and as far as I can tell it can only really plot, not do any sort of transformation or data analysis. The plotting UI itself is pretty nice and the plots look good, but for most of my use cases it’s worth it to just spin up a Jupyter notebook and work with Matplotlib directly.

    If it could become a general-purpose UI for Matplotlib, that’d be fantastic, but it’s pretty limited in actual usability for me at the moment.
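    For anyone wondering what “just work with Matplotlib directly” looks like, a minimal notebook cell is something like this (the data here is a made-up stand-in for whatever you’d actually load):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Toy data standing in for a real dataset (e.g. loaded from CSV)
    x = np.linspace(0, 10, 200)
    y = np.sin(x) + 0.1 * np.random.randn(x.size)

    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(x, y, label="noisy sine")
    ax.set_xlabel("x")
    ax.set_ylabel("y")
    ax.legend()
    plt.show()
    ```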



  • Not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!

    “Tokens” are essentially just a unit of work – instead of interacting directly with the raw text of the user’s input, the model first “tokenizes” it, breaking it into small chunks (usually words or pieces of words) that map to numbers the actual ML model can process efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.

    I think tokens are used because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work where you can compare across devices and models.
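    If it helps, here’s a toy word-level illustration of that encode/decode roundtrip (real LLM tokenizers use learned sub-word vocabularies like BPE, so this is only the general idea):

    ```python
    # Toy word-level tokenizer. Real tokenizers split text into sub-word pieces
    # from a learned vocabulary, but the encode -> model -> decode flow is the same.
    vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}
    inv_vocab = {i: w for w, i in vocab.items()}

    def encode(text):
        """Turn text into a list of integer token ids."""
        return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

    def decode(token_ids):
        """Expand token ids back into text."""
        return " ".join(inv_vocab[i] for i in token_ids)

    ids = encode("The cat sat on the mat")
    print(ids)          # [0, 1, 2, 3, 0, 4]
    print(decode(ids))  # the cat sat on the mat
    ```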


  • Agreed! I’m just not sure TOPS is the right metric for a CPU, given how different the CPU data pipeline is from a GPU’s. Bubbly vs. clear instruction streams are one thing, but the dominant instruction type in a calculation also affects how many instructions can run per clock cycle pretty significantly, whereas on matrix-optimized silicon it’s a lot fairer to generalize over a bulk workload.

    More broadly, I think it’s fundamentally challenging to come up with a single, generally applicable number that represents CPU performance across different workloads.
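    To put rough numbers on that, here’s a back-of-the-envelope peak-ops calculation (core count, clock, and SIMD widths are made-up example values, not any specific CPU) – the “TOPS” you get depends almost entirely on what workload you assume:

    ```python
    # Theoretical peak ops/second for a hypothetical CPU. All values are made-up
    # assumptions; the point is how much the answer swings with the assumed
    # instruction mix, not any real benchmark.
    cores = 8
    clock_hz = 4.0e9

    def peak_tops(simd_lanes, ops_per_lane_per_cycle):
        """Peak TOPS assuming every cycle retires a full-width SIMD op."""
        return cores * clock_hz * simd_lanes * ops_per_lane_per_cycle / 1e12

    # int8 dot-product style workload: wide SIMD, multiply-add counted as 2 ops
    print(f"int8 SIMD: {peak_tops(simd_lanes=64, ops_per_lane_per_cycle=2):.1f} TOPS")   # ~4.1

    # branchy scalar integer workload: effectively 1 lane, ~1 op per cycle
    print(f"scalar:    {peak_tops(simd_lanes=1, ops_per_lane_per_cycle=1):.3f} TOPS")    # ~0.032
    ```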