• 4 Posts
  • 412 Comments
Joined 2 years ago
Cake day: June 23rd, 2024

  • Of course. You need about one hair per 2x2 pixels on a 1080p screen, or per 4x4 pixels on a 4K screen. That works out to roughly 10,000 hairs per icon in the simulation, which can be precomputed into animations (quick arithmetic below). Third-party icons will be 2D (or 2.5D if the FG and BG layers of the icon are handled separately, doubling the animation data). Now it’s “just” a matter of drawing 10,000-20,000 lines with precomputed shading and textures taken from the icon’s 100x100 bitmap render.

    Also, the GPU is only used by apps while they’re in the foreground, so the launcher might be able to use all of its power. And it could cache animations for existing icons (who cares if the system uses 32 GB of storage? Buy the higher option, peasant!)
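
    A rough sketch of that strand-count arithmetic. The ~200 px and ~400 px on-screen icon sizes are assumptions for illustration, not values from any real launcher:

        def hairs_per_icon(icon_px: int, cell_px: int) -> int:
            """One strand per cell_px x cell_px cell of a square icon icon_px wide."""
            per_side = icon_px // cell_px
            return per_side * per_side

        print(hairs_per_icon(200, 2))  # 10000 strands at 1080p (2x2-pixel cells)
        print(hairs_per_icon(400, 4))  # 10000 strands at 4K (4x4-pixel cells)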

  • Pixel purists say pixels have to be in a square or rectangular grid. Stitching is a good analog example. Yet others think that 2-subpixel “pixels” (RG and BG, alternating in a checkerboard pattern), as seen on some OLED screens, should be counted as half-pixels, just like on Bayer-filter cameras, where one RGGB period of the repeating pattern is one full element, not two (a back-of-the-envelope count follows at the end of this comment).

    Anyway, there are digital systems with other layouts:
    • Geascript-38 "Parklight-System": https://www.wikidata.org/wiki/Q137757955
    • Early pocket color LCD TVs, cameras and camcorders used hexagonal grids similar to shadow-mask CRTs’ phosphor dots.

    By the way, neither color CRT phosphor dots nor stripes are pixels because they’re not individually addressable. In fact, depending on the beam’s position, a single phosphor dot can represent a gradient, and on B/W CRTs the whole screen is a single phosphor-covered surface.
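
    Here is that back-of-the-envelope count, assuming a 1080p grid; the panel figures are illustrative, not from any datasheet:

        # "Full element" = one complete repeat of the subpixel pattern.
        def full_elements(addressable_pixels: int, subpixels_per_pixel: int,
                          subpixels_per_element: int) -> float:
            return addressable_pixels * subpixels_per_pixel / subpixels_per_element

        # RGB-stripe 1080p panel: 3 subpixels per pixel, 3 per element.
        print(full_elements(1920 * 1080, 3, 3))  # 2073600.0 full elements

        # RG/BG checkerboard OLED: 2 subpixels per addressable "pixel",
        # 4 per full element (RG + BG), so each "pixel" counts as a half.
        print(full_elements(1920 * 1080, 2, 4))  # 1036800.0 full elements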

  • ChaoticNeutralCzech@feddit.org to Programmer Humor@programming.dev · Lavalamp too hot

    Nah, too cold. It stopped moving, so the computer can’t generate any more random numbers to pick from the LLM’s weighted suggestions. Similarly, LLMs have a sampling setting called “temperature”: too cold and the output is repetitive, unimaginative and copies the input too closely (like sentences written by always taking the first autocomplete suggestion); too hot and it’s chaos: 98% nonsense, 1% repeated input, 1% something useful.
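
    A toy illustration of what that setting does, assuming plain softmax sampling over made-up scores; this is not any particular model’s actual sampling code:

        import math
        import random

        def sample(logits, temperature):
            """Pick one index from weighted suggestions; lower temperature = greedier."""
            scaled = [x / temperature for x in logits]
            peak = max(scaled)
            weights = [math.exp(s - peak) for s in scaled]  # unnormalized softmax
            r = random.uniform(0, sum(weights))
            for i, w in enumerate(weights):
                r -= w
                if r <= 0:
                    return i
            return len(weights) - 1

        logits = [2.0, 1.0, 0.2]    # made-up scores for three candidate tokens
        print(sample(logits, 0.1))  # "too cold": almost always picks index 0
        print(sample(logits, 5.0))  # "too hot": nearly uniform, mostly nonsense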