Well, Typst is explicitly a no-go for anyone who has to submit a manuscript, until they get a damn HTML representation so Pandoc can get it to LaTeX. There’s practically nowhere I could use Typst except my own notes, and I’ve tried!
Well, I doubt they’ll release one for my clippers since they’re discontinued, so that inspired me to go ahead and model a variable-depth one for myself. Based on some of the comments here, I thickened the comb blades to make them print more easily.
They haven’t released one for the razor I have, but honestly I might try modeling them myself. Doesn’t seem impossible, and I’ve been wanting a deeper comb than they sell.
Yup. Even for technical writing, markdown with embedded LaTeX is great in most cases, thanks largely to Pandoc and its ability to convert the markdown into pure LaTeX. There are even manuscript-focused Markdown editors, like Zettlr.
There are currently 252 Catholic cardinals, but only 135 are eligible to cast ballots; those over the age of 80 can take part in debate but cannot vote.
You’re telling me the Catholic church has more term limits than the US Supreme Court?
Will do! I didn’t make this clear: I do think LabPlot is great software for folks who don’t already have the skill set to make plots directly in Python – which is the majority of people, and probably the target audience.
Keep up the good work!
I was trying out LabPlot yesterday, and as far as I can tell it can only really plot, not do any sort of transformation or data analysis. The plotting UI itself is pretty nice and the plots look good, but for most of my use cases it’s worth it to just spin up a Jupyter notebook and work with Matplotlib directly.
If it could become a general-purpose UI for Matplotlib, that’d be fantastic, but it’s pretty limited in actual usability for me at the moment.
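For what it’s worth, this is roughly the notebook workflow I mean – a transformation step (a moving average, as an illustration) before plotting. The data and labels are made up:

```python
# Sketch of the notebook workflow described above: transform data with
# NumPy, then plot with Matplotlib. Data and labels are made up.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs headless
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.3, x.size)

# The kind of transformation a plotting-only UI can't do:
# a centered moving average to smooth the noisy signal.
window = 11
smoothed = np.convolve(y, np.ones(window) / window, mode="same")

fig, ax = plt.subplots()
ax.plot(x, y, alpha=0.3, label="raw")
ax.plot(x, smoothed, label=f"moving average (n={window})")
ax.set_xlabel("x")
ax.set_ylabel("signal")
ax.legend()
fig.savefig("smoothed.png")
```

Three lines of NumPy between loading and plotting is exactly the step that a pure plotting GUI leaves out.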
Maybe the graph mode of logseq?
Not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!
“Tokens” are essentially just a unit of work – instead of interacting directly with the user’s input, the model first “tokenizes” the user’s input, simplifying it down into a unit which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.
I think tokens are used because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work where you can compare across devices and models.
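The tokenize → model → detokenize round trip described above can be sketched with a toy word-level vocabulary (made up for illustration; real models learn subword vocabularies, e.g. BPE, with tens of thousands of entries):

```python
# Toy illustration of tokenization: text -> integer IDs -> text.
# The vocabulary here is made up; real LLMs use learned subword schemes.
vocab = {"<unk>": 0, "hello": 1, "world": 2, "how": 3, "are": 4, "you": 5}
id_to_token = {i: t for t, i in vocab.items()}

def tokenize(text):
    """Map each whitespace-separated word to an integer token ID."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def detokenize(ids):
    """Expand token IDs back into text."""
    return " ".join(id_to_token[i] for i in ids)

ids = tokenize("Hello world")
print(ids)              # [1, 2]
print(detokenize(ids))  # hello world
```

The model itself only ever sees the integer IDs, which is why throughput is quoted in tokens per second rather than characters or words.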
Agreed! I’m just not sure TOPS is the right metric for a CPU, given how different the CPU data pipeline is from a GPU’s. Bubbly/clear instruction streams are one thing, but the dominant instruction type in a calculation also affects how many instructions can run per clock cycle pretty significantly, whereas on matrix-optimized silicon it’s a lot fairer to generalize over a bulk workload.
Generally, I think it’s fundamentally challenging to produce a single, broadly applicable number to represent CPU performance across different workloads.
I mean, sure, but a largely GPU-based TOPS figure isn’t that good a comparison for a CPU+GPU mixture. Most tasks can’t be parallelized that well, so comparing TOPS between an APU and a TPU/GPU is not apples to apples (heh).
I just came across the lines in the openSUSE 42 .bashrc for connecting to Palm Pilots today… what a flashback.
Same question, on vanilla android.
They could have at least renamed it to Radeon Operational Compute method or something…
Yeah, HeliBoard is the only one I’ve found that’s actually usable day to day. Just wish the autocorrect was better; other than that, no complaints.
Yeah, I think the battery thing OP pointed out makes more sense than the power argument. The Z1 Extreme used in other handhelds is based on the 8840HS iirc, and it’s at least one generation newer than the basis for the Steam Deck’s somewhat custom silicon.
The Deck processor is 4 Zen 2 CPU cores and 8 RDNA 2 GPU CUs, while the 8840HS is 8 Zen 4 CPU cores plus 12 RDNA 3 graphics CUs. It’s going to be wildly more powerful. The 8745H actually has the same CPU and iGPU configuration as the 8840HS – not even close to Steam Deck specs.
AntennaPod is better than it has any right to be – on a modern device, it’s super smooth.
Isn’t that going to be ruinously expensive to host an instance for? Video is expensive in terms of storage and bandwidth.
I’d heard about Toyota trying to water down emissions regulations before, but this is orders of magnitude more yikes than I realized.
Sure, but they can’t build Pandoc translation against an experimental format, so no LaTeX anytime soon.