Depends on how much power is being transmitted to each base station, but it would have to be a colossal satellite to be “we’re all going to die”.
I pointed that out mostly as a limitation on how much power could be transmitted to each base station.
Microwave scattering is an absolute nightmare over that kind of distance. Even over much shorter distances, microwaves are only practical to carry a couple of meters, and then only in a waveguide.
If it’s transmitting to a base station, we can assume it’s in geosynchronous orbit, about 22,000 miles from the surface. With a fairly large dish on the satellite, you could probably keep the beam fairly tight until it hit the atmosphere, but that last ~100 miles of air would scatter it like no tomorrow. Clouds and humidity are also a huge problem – water is an exceptionally good absorber in most of the microwave band.
I saw numbers reported for the transmission efficiency somewhere (will update this if I can find it again), and they were sub-30%. The other 70% is either boiling clouds on its way down, or missing the receiver on the ground and gently cooking the surrounding area.
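For a sense of scale, here’s a back-of-envelope diffraction calculation in Python. The 2.45 GHz frequency and 100 m dish are my own assumptions (2.45 GHz is a commonly proposed power-beaming band), not numbers from any actual design:

```python
# Back-of-envelope diffraction limit for a microwave power beam from GEO.
# Every number here is an illustrative assumption.
c = 3.0e8              # speed of light, m/s
f = 2.45e9             # beam frequency, Hz (assumed)
wavelength = c / f     # ~0.12 m

D = 100.0              # transmit dish diameter, m (assumed)
L = 3.5786e7           # GEO altitude, m (~22,000 miles)

# Rayleigh criterion: half-angle divergence of a circular aperture.
theta = 1.22 * wavelength / D    # radians
spot_radius = theta * L          # small-angle approximation, m

print(f"divergence: {theta * 1e3:.2f} mrad")
print(f"spot radius at the ground: {spot_radius / 1e3:.0f} km")
# ~53 km radius: even before any scattering, diffraction alone spreads
# the beam over a huge footprint unless the dish is kilometers across.
```

And that’s before the atmosphere does anything at all.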


Oh damn! Hey, if I was convinced that was a DSLR, then no need to rush getting a camera 😂


You seem like a really fun-gi! Nice depth of focus & clarity, what’s your setup?
Sure, but they can’t build a Pandoc translation against an experimental format, so no LaTeX anytime soon.
Well, Typst is explicitly a no-go for anyone who has to submit a manuscript, until they get a damn HTML representation so Pandoc can get it to LaTeX. There’s practically nowhere I could use Typst except my own notes, and I’ve tried!


Well, I doubt they’ll release one for my clippers since they’re discontinued, so that inspired me to go ahead and model a variable-depth one for myself. Based on some of the comments here, I thickened the comb blades to make them print more easily.



They haven’t released one for the razor I have, but honestly I might try modeling them myself. It doesn’t seem impossible, and I’ve been wanting a deeper comb than they sell.


Yup. Even for technical writing, Markdown with embedded LaTeX is great in most cases, thanks largely to Pandoc and its ability to convert the Markdown into pure LaTeX. There are even manuscript-focused Markdown editors, like Zettlr.
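As a sketch of that workflow, assuming pandoc is installed and on your PATH (the filenames are just placeholders):

```python
import subprocess

# Convert Markdown with embedded LaTeX math into a standalone .tex file
# using the pandoc CLI. "notes.md"/"notes.tex" are placeholder names.
subprocess.run(
    ["pandoc", "notes.md", "--standalone", "-o", "notes.tex"],
    check=True,  # raise if pandoc reports an error
)
```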


There are currently 252 Catholic cardinals, but only 135 are eligible to cast ballots as those over the age of 80 can take part in debate but cannot vote.
You’re telling me the Catholic church has more term limits than the US Supreme Court?
Will do! I didn’t make this clear: I do think LabPlot is great software for folks who don’t already have the skill set to make plots directly in Python – which is the majority of people, and probably the target audience.
Keep up the good work!
I was trying out LabPlot yesterday, and as far as I can tell it can only really plot, not do any sort of transformation or data analysis. The plotting UI itself is pretty nice and the plots look good, but for most of my use cases it’s worth it to just spin up a Jupyter notebook and work with Matplotlib directly.
If it could become a general-purpose UI for Matplotlib, that’d be fantastic, but it’s pretty limited in actual usability for me at the moment.
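For comparison, this is roughly the notebook workflow I mean – load, transform, plot in a few lines (the data here is synthetic, just standing in for a real file):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data standing in for whatever you'd actually load.
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)

# The kind of in-line transformation step I'd want before plotting:
y_smooth = np.convolve(y, np.ones(10) / 10, mode="same")

fig, ax = plt.subplots()
ax.plot(x, y, alpha=0.3, label="raw")
ax.plot(x, y_smooth, label="10-point moving average")
ax.set_xlabel("x")
ax.set_ylabel("signal")
ax.legend()
plt.show()
```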


Maybe the graph mode of Logseq?


I’m not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!
“Tokens” are essentially just a unit of work – instead of operating directly on the user’s input, the model first “tokenizes” it, breaking it into chunks that the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, and those are expanded back into text (or whatever the output of the model is).
I think tokens are used as the benchmark because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work for comparing across devices and models.
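If you want to see it concretely, OpenAI’s tiktoken library exposes one such tokenizer (the exact splits vary from model to model; this is just an illustration):

```python
import tiktoken  # pip install tiktoken

# The tokenizer used by several recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens are just a unit of work."
tokens = enc.encode(text)           # text -> list of integer token IDs
print(len(tokens), "tokens")
print(enc.decode(tokens) == text)   # round-trips back to the original

# Tokens are often word fragments rather than whole words:
print([enc.decode([t]) for t in tokens])
```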


Agreed! I’m just not sure TOPS is the right metric for a CPU, given how different the CPU data pipeline is from a GPU’s. Bubbly versus clean instruction streams are one thing, but the mix of instruction types in a workload also significantly affects how many instructions can run per clock cycle, whereas on matrix-optimized silicon it’s much fairer to generalize over a bulk workload.
Generally, I think it’s fundamentally hard to produce a single number that represents CPU performance across different workloads.


I mean, sure, but a mostly GPU-derived TOPS figure isn’t a great comparison for a CPU+GPU mixture. Most tasks can’t be parallelized that well, so comparing TOPS between an APU and a TPU/GPU is not apples to apples (heh).


I just came across the lines in the openSUSE 42 .bashrc for connecting to Palm Pilots today… what a flashback.


Same question, on vanilla Android.


They could have at least renamed it to Radeon Operational Compute method or something…
Okular is the way to go for anything that’s typed; it has a lot more capabilities than Evince. For handwriting, I’ve used Inkscape and LibreOffice Draw. They’re roughly similar in capabilities.