  • “It’s not feasible for a mass market consumer product like Starlink.”

    Why not? That’s a service designed to serve millions of simultaneous users from nearly 10,000 satellites. These systems have to be designed to be at least somewhat resistant to unintentional interference, which means it is usually quite resistant to intentional jamming.

    Any modern RF protocol is going to use multiple frequencies, timing slots, and physical locations in three-dimensional space (a toy sketch of the hopping idea is below).

    And so the reports out of Iran are that Starlink service is degraded in places but not fully blocked. It’s a cat and mouse game out there.
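
    A toy sketch of that hopping idea, in Python (the shared seed, channel count, and hop rate are all made up for illustration; real systems like Starlink use far more sophisticated scheduling):

    ```python
    import random

    # Both ends derive the same pseudorandom hop sequence from a shared
    # seed, so a jammer that doesn't know the seed has to blanket every
    # channel at once instead of following the link around.
    SHARED_SEED = 0xC0FFEE      # hypothetical, agreed out of band
    CHANNELS = list(range(64))  # 64 notional sub-channels
    SLOTS = 100                 # timing slots per second

    def hop_schedule(seed: int, n_slots: int) -> list[int]:
        """Channel to use in each timing slot."""
        rng = random.Random(seed)
        return [rng.choice(CHANNELS) for _ in range(n_slots)]

    tx = hop_schedule(SHARED_SEED, SLOTS)
    rx = hop_schedule(SHARED_SEED, SLOTS)
    assert tx == rx  # both ends agree without ever broadcasting the schedule
    ```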


  • I’d think that there are practical limits to jamming. After all, jamming doesn’t make radio impossible; it just forces the transmitter and receiver to get closer together, so that the signal strength over the shorter distance is strong enough to overcome the jamming coming from farther away (rough numbers in the sketch at the end of this comment). Most receivers filter out the frequencies they’re not looking for, so any jammer needs to actually be hitting that receiver on that specific frequency. And many modern antenna arrays rely on beamforming techniques that are less susceptible to unintentional interference or intentional jamming coming from a different direction than where they’re pointed. Even less modern antennas can be heavily directional based on their physical design.

    If you’re trying to jam a city block, say a 100 m radius, across any and all frequencies that radios use, that’s gonna take some serious power. Which will require cooling equipment if you want to keep it on continuously.

    If you’re trying to jam an entire city, though, it just might not be practical to hit literally every frequency that a satellite might be using.

    I don’t know enough about the actual power and equipment requirements, but it seems like blocking satellite communications between satellites you don’t control and transceivers scattered throughout a large territory is more difficult than you’re making it sound.
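
    For a rough sense of the numbers, here’s a back-of-the-envelope link budget using the standard free-space path loss formula (every power, distance, and frequency below is invented for illustration, not a real Starlink or jammer figure):

    ```python
    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        """Free-space path loss in dB (standard formula)."""
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    def received_dbm(tx_dbm: float, distance_km: float, freq_mhz: float) -> float:
        """Received power, ignoring antenna gains for simplicity."""
        return tx_dbm - fspl_db(distance_km, freq_mhz)

    FREQ_MHZ = 12_000  # Ku-band-ish carrier, for illustration

    # Hypothetical 100 W (50 dBm) jammer 5 km away vs a 10 W (40 dBm) link.
    jam = received_dbm(50, 5.0, FREQ_MHZ)
    for link_km in (2.0, 1.0, 0.5):
        sig = received_dbm(40, link_km, FREQ_MHZ)
        print(f"link {link_km:3.1f} km: margin over jammer {sig - jam:+.1f} dB")
    ```

    Each halving of the link distance buys about 6 dB of margin against a fixed jammer, which is the “get closer together” effect in numbers; conversely, the jammer needs four times the power every time the distance to its target doubles.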


  • Specifically, desktop RAM is made of slabs of silicon, placed into little packages and soldered onto circuit boards in DIMM form or similar, to be plugged into a motherboard’s RAM slots.

    The AI demand is for the silicon itself, mounted with advanced packaging techniques onto the same package as the big GPUs for very high bandwidth (this is what high-bandwidth memory, HBM, is). So those same pieces of silicon never get put into DIMMs at all, and if they ever fall out of use they’ll be pretty much intertwined with chips in form factors that a consumer can’t easily make use of.

    There’s not really an easy way to bring that memory back into the consumer market, even after the AI bubble bursts.


  • Still a pretty limited palette, everyone wearing the same color shirts.

    PNG tends to fail hard with textures. For example, my preferred theme in my chess app, which has some wood grain textures, generates huge screenshot file sizes (2 MB), whereas the default might be less than 10% as large. Similarly, when I screenshot this image the file size jumps to 2 MB for a 0.8 megapixel image.

    Rendered textured scenes can easily overload the PNG compression algorithm to the point where the files are huge, and since Discord is historically associated with gaming, one can imagine certain video game screenshots blasting past that 40 MB limit.
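
    That 2 MB for 0.8 megapixels works out to roughly 2.5 bytes per pixel, barely under the 3 bytes per pixel of completely uncompressed RGB, so the compressor is doing almost nothing. A quick way to see the effect yourself, assuming Pillow and NumPy are installed (image size and colors are arbitrary):

    ```python
    import io
    import numpy as np
    from PIL import Image  # pip install pillow numpy

    def png_size(pixels: np.ndarray) -> int:
        """Encode an RGB array as PNG in memory and return the byte count."""
        buf = io.BytesIO()
        Image.fromarray(pixels, "RGB").save(buf, format="PNG")
        return buf.tell()

    flat = np.full((512, 512, 3), 128, dtype=np.uint8)  # solid gray
    textured = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # noise

    print("raw pixels:   ", 512 * 512 * 3)       # 786,432 bytes
    print("flat PNG:     ", png_size(flat))      # a few KB
    print("textured PNG: ", png_size(textured))  # around raw size, or larger
    ```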


  • I think HEIC plays nicely with how they store Live Photos: a container that holds both a still image and a video of the surrounding moment. HEIC for the still photo and HEVC for the video probably makes the best use of hardware acceleration for fast, low-power processing of both parts of the data, and it allows a higher-quality extraction of an alternative still from a different part of the video.

    And maybe they want more third-party support in place before they set JXL as a default. All the power and space savings in the world at capture time might not mean much if the phone has to export a JPEG or HEIC every time that file touches an app or the browser or whatever.



  • Google didn’t kill JPEG XL. It might have set browser support back some, but there’s still a place for JPEG XL to take over.

    All the modern video-derived formats (WebP, HEIF/HEIC, AVIF) tend to be optimized for screen resolutions. But for print photography (including plain old regular photography that wants to keep open the option of eventually printing some of the images), the higher resolutions and quality levels stretch the limits of where those codecs actually perform well, in terms of file size, perceived quality, and the computational cost of encoding and decoding.

    JPEG XL knocks the other modern formats out of the water at those print resolutions, color spaces, and quality levels. It’s not just for photography, either: medical imaging, archiving, printing, etc. all use much higher resolutions than what is supported on any screen.

    And perhaps most importantly for future support, the iPhone now supports taking images in JPEG XL. If that becomes a dominant format for photographic workflows, to replace stuff like DNG and other raw formats, browser support won’t hold back the format’s adoption.


  • “And if you already have compression artifacts, what use is lossless?”

    To further reduce file size without further reducing quality.

    There are probably billions of JPEG files out there in the world already lossily encoded, with no corresponding higher-quality version available (e.g., a camera that captured the image and immediately saved it as JPEG). We shouldn’t simply accept that those file sizes are stuck forever; we can design codecs that compress those files further, losslessly, from there.
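
    This is exactly what JPEG XL offers: it losslessly re-packs an existing JPEG’s coefficients (typically around 20% smaller) and can reconstruct the original file bit for bit. A minimal sketch, assuming libjxl’s cjxl/djxl command-line tools are installed and “photo.jpg” is a stand-in for any existing JPEG:

    ```python
    import filecmp
    import subprocess

    # Re-compress the JPEG losslessly into a smaller .jxl container.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

    # Later, regenerate the byte-identical original JPEG on demand.
    subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

    # The round trip is exact, so only the smaller .jxl needs to be kept.
    assert filecmp.cmp("photo.jpg", "roundtrip.jpg", shallow=False)
    ```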


  • “It was the Joint Photographic Experts Group that invented it, so Google had no ownership over it, unlike WebP.”

    No, JPEG called for submission of proposals to define the new standard, and Google submitted its own PIK format, which provided much of the basis for what would become the JXL standard (the other primary contribution being Cloudinary’s FUIF).

    Ultimately, I think most of the discussion around browser support thinks too small. Image formats are used for web display, sure, but they’re also used for so many other things. Digital imaging is used in medicine (where TIFF dominates), print, photography, video, etc.

    I’m excited about JPEG XL as a replacement for TIFF and for raw photography sensor data, including in printing and medical imaging. WebP, AVIF, HEIF, etc. really only aim to replace web-distributed images on a screen.


  • If you screenshot computer/phone interfaces (text, buttons, lots of flat areas where adjacent pixels are exactly the same color), the default PNG algorithm does a great job of keeping the file size small. If you screenshot a photograph, though, PNG makes the file size huge, because it’s just really poorly suited to re-encoding the noise-like pixel data of an image that was already a JPG (toy demo below).
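
    PNG’s last stage is DEFLATE (the zlib algorithm), after a per-row filtering pass. A toy model of just the DEFLATE part shows the gap, using only the standard library (sizes are arbitrary):

    ```python
    import random
    import zlib

    # Flat UI colors produce long runs of identical bytes, which DEFLATE
    # crushes; decoded-JPEG photo data looks like noise and barely shrinks.
    ui_like = bytes([200, 200, 200]) * 100_000  # flat color, 300 KB raw
    photo_like = bytes(random.randrange(256) for _ in range(300_000))  # noise

    print("flat colors:", len(zlib.compress(ui_like)))     # a few hundred bytes
    print("photo-like: ", len(zlib.compress(photo_like)))  # ~300 KB, no savings
    ```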


  • “AI drives 48% increase in Google emissions”

    That’s not even supported by the underlying study.

    Google’s emissions went up 48% between 2019 and 2023, but a lot of things changed in 2020, especially the explosion of video chat and cloud collaboration, which dramatically expanded demand for data center storage and processing. Even without AI, we could have expected data center electricity use to rise dramatically between 2019 and 2023.


  • That’s kinda always been how technology changes jobs, though, by slowly making the job one of supervising the technology. I’m no longer carving a piece of wood myself, but I’m running the CNC machine by making sure it’s doing things properly and has everything it needs to work properly. I’m not physically stabbing the needle through the fabric every time, myself, but I am guiding the sewing machine path on that fabric. I’m not feeding fuel into the oven to maintain a particular temperature, but I am relying on the thermocouple to turn the heating element on and off to maintain the assigned equilibrium that I’ll use to bake food.

    Many jobs are best done as a team effort between human and machine. Offloading the tedious tasks to the machine so that you can focus on the bigger picture is basically what technology is for. And as technology changes, we need to always be able to recalibrate which tasks are the tedious ones that machines do better, and which are the higher level decisions best left to humans.


  • It’s like the relationship between mathematics and accounting. Sure, almost everything accountants do involves math in some way, but it’s relatively simple math, a tiny subset of what all of mathematics is about, and the actual study of math doesn’t really touch on the principles of accounting.

    Computer science is a theoretical discipline that can be studied without computers. It’s about complexity theory and algorithms and data structures and the mathematical/logical foundations of computing. Actual practical programming work doesn’t really touch on that, although many people are aware of those concepts and might keep them in the back of their mind while coding.