• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.

    We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.

    Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz, 32 PCIe 4.0 lanes, and is rated 117W.

    The 4584PX, ostensibly the top of this range (also 16 cores, but at double the clock speed, with 28 PCIe 5.0 lanes and a 120W rating), seems like it would be a perfectly fine drop-in replacement for that.

    (I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)

    As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
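    Those totals are easy to sanity-check. A minimal sketch, assuming roughly 1.97 GB/s of usable bandwidth per PCIe 4.0 lane and double that per 5.0 lane (before protocol overhead; exact figures depend on encoding and implementation):

    ```python
    # Approximate usable bandwidth per lane, in GB/s (assumed round numbers;
    # real-world throughput varies with encoding overhead and implementation).
    GBPS_PER_LANE = {"4.0": 1.969, "5.0": 3.938}

    def total_throughput(lanes: int, gen: str) -> float:
        """Total one-direction PCIe bandwidth for a given lane count and generation."""
        return lanes * GBPS_PER_LANE[gen]

    print(f"32x PCIe 4.0 ≈ {total_throughput(32, '4.0'):.0f} GB/s")
    print(f"28x PCIe 5.0 ≈ {total_throughput(28, '5.0'):.0f} GB/s")
    ```

    That lines up with the ~63 GB/s vs ~110 GB/s comparison above: fewer lanes, but substantially more aggregate bandwidth.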

    Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say, though, I fully expect it’ll mostly be smaller designs marketed as “edge” compute, like that Dell system.








  • I bought one of the really sturdy kind (weighs about 40kg / 88lb). Uprights are solid 100mm/4" rounds of pine, none of that hollow cardboard tube nonsense.

    My XL-sized void loves to haul arse into the room and leap right from the ground to the top level (around 1.6m / 5’3" from the floor), which makes the thing rock side to side fairly precariously. He hasn’t knocked it over yet, but some day I’m sure he’ll come in too hot.



  • To expand on @doeknius_gloek’s comment, those categories usually directly correlate to a range of DWPD (endurance) figures. I’m most familiar with buying servers from Dell, but other brands are pretty similar.

    Usually, the split is something like this:

    • Read-intensive (RI): 0.8 - 1.2 DWPD (commonly used for file servers and the like, where data is relatively static)
    • Mixed-use (MU): 3 - 5 DWPD (normal for databases or cache servers, where data changes relatively frequently)
    • Write-intensive (WI): ≥10 DWPD (for massive databases, heavily-used write cache devices like ZFS ZIL/SLOG devices, that sort of thing)

    (For comparison, consumer SSDs frequently have endurance ratings of only 0.1 - 0.3 DWPD, and I’ve seen as low as 0.05.)

    You’ll also find these tiers roughly line up with the SSDs that expose different capacities while having the same amount of flash inside; where a consumer drive would be 512GB, an enterprise RI would be 480GB, and a MU/WI only 400GB. Similarly 1TB/960GB/800GB, 2TB/1.92TB/1.6TB, etc.
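    As a rough illustration of what those advertised capacities imply, here’s a sketch assuming all three tiers ship the same nominal 512GB of raw flash (an assumption for illustration; actual raw capacity and spare-area accounting vary by vendor):

    ```python
    # Sketch: over-provisioning implied by advertised capacity, assuming each
    # tier uses the same nominal 512GB of flash (vendors' real figures vary).
    RAW_GB = 512

    for tier, advertised_gb in [("consumer", 512), ("RI", 480), ("MU/WI", 400)]:
        spare = RAW_GB - advertised_gb
        print(f"{tier}: {spare} GB spare ({spare / advertised_gb:.0%} over-provisioned)")
    ```

    The extra spare area is a big part of how the higher tiers sustain their endurance and steady-state write performance.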

    If you only get a TBW figure, just divide it by the capacity and the warranty length in days. For instance, a 1.92TB 1 DWPD drive with a 5y warranty might list 3.5PBW (1.92TB × 365 × 5 ≈ 3.5PB).
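    That conversion can be sketched as follows (the `dwpd_from_tbw` helper name and the 3504 TBW figure are illustrative, chosen to match the 1.92TB / 5-year example):

    ```python
    def dwpd_from_tbw(tbw: float, capacity_tb: float, warranty_years: float) -> float:
        """Drive Writes Per Day = total TB written over the warranty,
        divided by (capacity x warranty length in days)."""
        return tbw / (capacity_tb * warranty_years * 365)

    # 1.92 TB drive rated ~3.5 PBW (3504 TBW) over a 5-year warranty:
    print(dwpd_from_tbw(3504, 1.92, 5))  # → 1.0
    ```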









  • I’ve always been lambasted for this opinion, but I feel the same way about the charging cable and charger.

    I do not want yet another 1 metre (if they’re even that, most likely 3 foot) USB-C cable that barely reaches from the charger on the floor to the bedside table - and largely precludes actually using the phone while in bed - nor particularly the included charger. So many things need to be plugged in these days that single-output chargers are also basically e-waste.

    Of course, because some business genius figured that making the USB cable 0.9m instead of 1.8m saved them $0.06 per unit shipped, we all got lumped with those useless cables.

    Now, of course, there will always be people for whom it’s their first phone (or whatever situation), who do need those accessories. But all that requires is a retail bundle with the now-accessory charger and cable. Preferably, that bundle would cost the same as the phone with them included does today, with a token discount for the phone without them, although we all know it would never work that way :(



    Worse still, a lot of “modern” designs don’t even bother including that trivial amount of content in the page, so if you’ve got a bad connection you get a page with some of the style and layout loaded, but nothing actually in it.

    I’m not really sure how we arrived at this point; lazy-loading seems to universally make things worse, yet it’s becoming more and more common.

    I’ve always vaguely assumed it’s just a symptom of people having never tested in anything but their “perfect” local development environment; no low-throughput or high-latency connections, no packet loss, no nothing. When you’re out here in the real world, on a marginal 4G connection - or frankly even just connecting to a server in another country - things get pretty grim.

    Somewhere along the way, it feels like someone just decided that pages often not loading at all was more acceptable than looking at a loading progress bar for even a second or two longer (but being largely guaranteed to have the whole page once you get there).