  • Hey, I’m not sure where you got your factor of 5 years, but it was a number I pulled out of my ass. At the repair depot, I typically didn’t see drives that lived much longer than 17k hours (just under 2 years). That didn’t mean they always failed at that age, only that the systems coming through had about that much time on them, max.

    Regarding the 136 vs 150 million numbers, those numbers are pure bullshit. MTBF is a raw calculation: total operational runtime in the field divided by the number of failures experienced. They most likely took a small number of warranty failures over a massive number of manufacturing runs and projected that it would take that long for about half their drives to fail.
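    To make that concrete, here’s a toy sketch of how that kind of headline MTBF falls out of fleet math. Every number below is invented for illustration; none of it comes from a real vendor or from Backblaze:

    ```python
    # Hypothetical warranty-tracking window; all figures are made up.
    fleet_size = 100_000    # drives shipped and tracked
    hours_each = 2_000      # average powered-on hours per drive so far
    failures = 2            # warranty failures seen in that window

    device_hours = fleet_size * hours_each  # 200,000,000 pooled device-hours
    mtbf_hours = device_hours / failures    # 100,000,000-hour "MTBF"

    print(f"MTBF: {mtbf_hours:,.0f} hours")
    # No individual drive was observed anywhere near that long; the figure
    # is pooled fleet runtime divided by failure count, nothing more.
    ```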

    In reality, you will see failure spikes over the lifetime of a product. The initial failures spike and then drop off. I recall reading either the data behind this article or something similar, where they realized that the bathtub curve may not be the full picture. They just updated it again with numbers running through last year, and you can see that it would be difficult to project an average lifetime of 20 years, much less 150.
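    If you want to see why that shape breaks simple MTBF extrapolation, here’s a toy bathtub-curve model. The Weibull parameters are made up for illustration and are not fitted to Backblaze’s data:

    ```python
    # Toy bathtub hazard: an infant-mortality term (shape < 1, falling)
    # plus a wear-out term (shape > 1, rising). Parameters are invented.

    def weibull_hazard(t_years: float, shape: float, scale_years: float) -> float:
        """Instantaneous failure rate (failures/year) of a Weibull distribution."""
        return (shape / scale_years) * (t_years / scale_years) ** (shape - 1)

    def bathtub_hazard(t_years: float) -> float:
        infant = weibull_hazard(t_years, shape=0.5, scale_years=50.0)
        wearout = weibull_hazard(t_years, shape=4.0, scale_years=7.0)
        return infant + wearout

    for year in (0.1, 1, 2, 4, 6, 8):
        print(f"year {year:>4}: ~{bathtub_hazard(year) * 100:5.1f}% annualized failure rate")
    # High out of the box, bottoms out mid-life, then climbs steeply --
    # so a rate measured during the flat middle of the curve says nothing
    # about how long the average drive actually survives.
    ```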

    My last thought on this is that when Backblaze mentions consumer vs enterprise drives, they are possibly talking about SATA vs SAS. This comes from the realization that enterprise workstation drives are often just consumer drives with a different part number label on them (seen in Dell and HP Enterprise equipment). Now, they could be referring to more expensive SATA drives, but I can’t imagine they are using anything but SAS at this point in their lifecycle.

  • I just read that recently. Let me see if I can run that source back down.

    Edit: CompTIA Server+ Certification All-in-One Exam Guide, Second Edition (Exam SK0-005), Daniel Lachance, McGraw-Hill, 2021, p. 138. The table there says that SATA is not designed for constant use.

    Edit 2:

    https://www.hp.com/us-en/shop/tech-takes/sas-vs-sata

    Reliability:

    SAS: Designed for 24/7 operation with higher mean time between failures (MTBF), often 1.6 million hours or more
    SATA: Suitable for regular use but not as robust as SAS for constant, heavy workloads, with MTBF typically around 1.2 million hours

    They are saying that SAS is a better option with a longer MTBF, but I don’t expect my drives to last 5 years, much less 136.
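    For reference, converting those quoted MTBF figures into years of continuous runtime is just division, and it shows where the absurd-sounding lifetimes come from:

    ```python
    # MTBF hours -> years of 24/7 operation.
    HOURS_PER_YEAR = 24 * 365  # 8,760

    for label, mtbf_hours in (("SAS", 1_600_000), ("SATA", 1_200_000)):
        print(f"{label}: {mtbf_hours:,} h ≈ {mtbf_hours / HOURS_PER_YEAR:.0f} years")
    # SAS:  1,600,000 h ≈ 183 years
    # SATA: 1,200,000 h ≈ 137 years  (the ~136 figure from earlier)
    ```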

    My own two cents here is that you probably don’t want to use SATA ZFS JBOD in an enterprise environment, but that’s based more on enterprise lifecycle management than on utility.