  • The Walkman and other tape players were far superior to CD players for portability and convenience. Batteries lasted a lot longer in portable tape players than in CD players. Tapes could be remixed easily, so you could bring a specific playlist (or 2 or 3) with you. Tapes were also much more resilient than CDs. The superior audio quality of CDs didn’t matter as much when you were using 1980s-era headphones. Even with a boombox, the spinning disc was still susceptible to skips from bumps or movement, and the higher-speed motor and more complex audio processing drained batteries much faster. And back then, rechargeable batteries weren’t really a thing, so people were just burning through single-use alkaline batteries.

    It wasn’t until the 90’s that decent skip protection, a few generations of miniaturization and improved battery life, and better headphones made portable CD players competitive with portable tape players.

    At the same time, cars started to get CD players, but a typical person doesn’t buy a new car every year, so it took a few years before a decent share of the cars on the road had CD players.


  • They’re actually only about 48% accurate, meaning that they’re more often wrong than right: you’d be 2% more likely to get the right answer by guessing.

    Wait, what are the Bayesian priors? Are we assuming that the baseline is 50% true and 50% false? And what is the error rate in false positives versus false negatives? Because all of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.

    Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it’s still 95% accurate.

    So what do you mean by 48% accurate? That’s not enough information to do anything with.
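    To make that concrete, here is a rough sketch (Python, with made-up numbers) of how a single “accuracy” figure depends on the base rate and on how the errors split into false positives and false negatives:

    ```python
    # Made-up numbers for illustration: "accuracy" depends on the base rate of
    # lying and on how errors split between false positives and false negatives.

    def accuracy(base_rate, sensitivity, specificity):
        """Overall P(test is correct) for a given prevalence of lies."""
        return base_rate * sensitivity + (1 - base_rate) * specificity

    def posterior_lie(base_rate, sensitivity, specificity):
        """Bayes: P(actually lying | test says 'lie')."""
        p_flag = base_rate * sensitivity + (1 - base_rate) * (1 - specificity)
        return (base_rate * sensitivity) / p_flag

    # The stupid device that says "true" every time: it catches no lies
    # (sensitivity = 0) and never accuses a truth-teller (specificity = 1).
    print(accuracy(0.00, 0.0, 1.0))  # person who never lies          -> 1.00
    print(accuracy(0.05, 0.0, 1.0))  # person who lies 5% of the time -> 0.95

    # A hypothetical detector with 60% sensitivity and 45% specificity is
    # "about 48% accurate" at a 20% lie rate, but that single number hides
    # everything you actually need to know.
    print(accuracy(0.20, 0.60, 0.45))       # 0.48
    print(posterior_lie(0.20, 0.60, 0.45))  # ~0.21: a 'lie' flag barely moves the 20% prior
    ```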



  • Yeah, you’re describing an algorithm that incorporates data about the user’s previous likes. I’m saying that any decent user experience will include prioritization and weighting of different posts on a user-by-user basis, so the provider has no choice but to put together a ranking/recommendation algorithm that does more than simply sort all available posts in chronological order.





  • Windows is the first thing I can think of that used the word “application” in that way, even back before Windows could be considered an OS (when it still depended on MS-DOS). Even then, the API in “Windows API” stood for Application Programming Interface.

    Here’s a Windows 3.1 programming guide from 1992 that freely refers to programs as applications:

    Common dialog boxes make it easier for you to develop applications for the Microsoft Windows operating system. A common dialog box is a dialog box that an application displays by calling a single function rather than by creating a dialog box procedure and a resource file containing a dialog box template.



  • Some people actively want this kind of algorithm because it makes it easier for them to find content they like.

    Raw chronological order tends to overweight the most frequent posters. If you follow one person who posts 10 times a day and 99 people who post once a week, that one account produces roughly 70 of the ~170 posts you’d see in a week: 1% of the users representing about 40% of the posts in your feed.

    One simple algorithm that is almost always a better user experience is to retrieve the most recent X posts from each followed account and then sort that combined set in chronological order (a rough sketch follows this comment). Once you’re doing that, though, you’re probably already thinking about other ways to optimize the experience. What should the value of X be? Do you want to hide posts the user has already seen, unless there’s been a lot of comment/follow-up activity? Do you want to prioritize posts in which the user was specifically tagged in a comment? Or in the post itself? If so, by how much?

    It’s a non-trivial problem that would require thoughtful design, even for a zero advertising, zero profit motive service.
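    A minimal sketch of that “most recent X per account” merge, assuming a hypothetical Post type and in-memory data rather than any particular platform’s storage:

    ```python
    # Sketch of "take the X most recent posts per followed account, then merge
    # chronologically." Post and posts_by_author are made-up stand-ins for
    # whatever the real data layer looks like.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created_at: datetime
        body: str

    def build_feed(posts_by_author: dict[str, list[Post]], x: int = 5) -> list[Post]:
        candidates: list[Post] = []
        for posts in posts_by_author.values():
            newest_first = sorted(posts, key=lambda p: p.created_at, reverse=True)
            candidates.extend(newest_first[:x])  # cap each account at X posts
        # The final order is still chronological; the per-account cap is what
        # keeps one prolific poster from drowning out everyone else.
        return sorted(candidates, key=lambda p: p.created_at, reverse=True)
    ```

    Everything else mentioned above (hiding already-seen posts, boosting posts where the user was tagged) is just more weighting layered onto the same candidate set.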


  • Was that in 2000? My own vague memory is that Linux started picking up some steam in the early 2000s and then branched out to a new audience shortly after Firefox and Ubuntu hit the scene around 2004, and actually saw some adoption when Windows XP’s poor security and Windows Vista’s poor hardware support started breaking things.

    So depending on the year, you could both be right.




  • Therefore, I think they’d get out a microscope and oscilloscope and start trying to reverse-engineer it. Probably speed up the development of computer technology quite a bit, by giving them clues on what direction to go.

    Knowing what something is doesn’t necessarily teach people how it was made. No matter how much you examine a sheet of printed paper, someone with no conception of a laser printer would not be able to derive much about how something could have produced such precise, sharp text on a page. They’d be stuck thinking about movable metal type dipped in ink, not a laser drawing out an image so that powdered toner can be fused onto the page.

    If you took a modern FinFET chip from, say, TSMC’s 5nm process node and gave it to electrical engineers in 1995, they’d be really impressed with the three-dimensional physical structure of the transistors. They could probably envision how computers make it possible to design those chips. But they’d have no conception of how to produce EUV light at the wavelengths necessary to make photolithography possible at those sizes. No amount of examining the chip itself will reveal the secrets of how it was made: very bright lasers hitting an impossibly precise stream of molten tin droplets to generate the EUV radiation, highly polished mirrors focusing that radiation through masks onto the silicon to form each 2-dimensional planar pattern, and then advanced techniques for lining those 2-dimensional features up into a 3-dimensional stack.

    It’s kinda like how we don’t actually know how Roman concrete or Damascus steel was made. We can actually make better concrete and steel today, but we haven’t been able to reverse engineer how they made those materials in ancient times.


  • Do you have a source for AMD chips being especially energy efficient?

    I remember reviews of the HX 370 commenting on that. The problem is that that chip was produced on TSMC’s N4P node, which doesn’t have a direct Apple comparator (the M2 was on N5P and the M3 was on N3B). The Ryzen 7 7840U was on N4, one year behind that. It just shows that AMD can’t even get on a TSMC node within a year or two of Apple.

    Still, I haven’t seen anything really putting these chips through their paces and measuring real-world energy use while running a variety of benchmarks. And benchmarks themselves only correlate with specific ways that computers are used and aren’t necessarily supported on all hardware or OSes, so it’s hard to get a real comparison.

    SoCs are inherently more energy efficient

    I agree. But that’s a separate issue from the instruction set. The AMD HX 370 is an SoC (well, technically a SiP, since the pieces are all packaged together rather than printed on the same piece of silicon).

    And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed. That’s why the RISC-versus-CISC distinction is basically obsolete. Chip designers are making engineering choices about how much silicon area to devote to specific functions, based on their modeling of how the chip might be used: multithreading, different cores optimized for efficiency or performance, speculative execution, various specialized tasks like hardware-accelerated video, cryptography, or AI, etc., and then deciding how that fits into the broader chip design.

    Ultimately, I’d think that the main reason why something like x86 would die off is licensing reasons, not anything inherent to the instruction set architecture.


  • it’s kinda undeniable that this is where the market is going. It is far more energy efficient than an Intel or AMD x86 CPU and holds up just fine.

    Is that actually true, when comparing node for node?

    In the mobile and tablet space, Apple’s A-series chips have always been a generation ahead of Qualcomm’s Snapdragon chips in terms of performance per watt, and Samsung’s Exynos has lagged even further behind. That’s obviously not an instruction set issue, since all 3 lines are on ARM.

    Much of Apple’s advantage has been a willingness to pay for early runs on each new TSMC node, and a willingness to dedicate a lot of square millimeters of silicon to their gigantic chips.

    But when comparing node for node, last I checked, AMD’s lower-power chips designed for laptop TDPs have similar performance and power consumption to the Apple chips on the same TSMC node.



  • Honestly, this is an easy way to share files with non-technical people in the outside world, too. Just open up a port for that very specific purpose, send the link to your friend, watch the one file get downloaded, and then close the port and shut down the HTTP server.

    It’s technically not very secure, so it’s a bad idea to leave it unattended, but you can always encrypt the zip file you’re sending and let that file-level encryption kinda make up for the lack of network-level encryption. And since it’s a one-off thing, close up your firewall/port forwarding as soon as you’re done.
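    A minimal sketch of that one-off flow, assuming Python 3 on the sharing machine; the file name and port below are placeholders:

    ```python
    # One-off file share: serve a single file over HTTP, then shut down after
    # the first download. File name and port are placeholders for illustration.
    import http.server
    import threading

    FILE_TO_SHARE = "stuff.zip"  # placeholder; must exist in the current directory
    PORT = 8000                  # must match the port you open/forward

    class OneFileHandler(http.server.SimpleHTTPRequestHandler):
        def do_GET(self):
            # Serve only the one file, regardless of the requested path.
            self.path = "/" + FILE_TO_SHARE
            super().do_GET()
            # Stop the server once the file has been sent.
            threading.Thread(target=self.server.shutdown, daemon=True).start()

    with http.server.ThreadingHTTPServer(("0.0.0.0", PORT), OneFileHandler) as srv:
        print(f"Send your friend: http://<your-public-ip>:{PORT}/{FILE_TO_SHARE}")
        srv.serve_forever()  # returns once shutdown() is called above
    ```

    For an attended, throwaway share where you watch the download and then kill the process yourself, plain `python3 -m http.server` run in the file’s directory does the same job with zero code.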