

Very good. My only criticism is that some of the horns at the end didn’t seem to hold up.


OK, my 20 and your 20 are not the same.
I was saying the large numbers didn’t make sense if you don’t have a large fleet of drives. Say you have ten servers, each with ten drives, and the MTBF is 10 million hours (yay, easy math!). Read naively, that means half your drives will have failed after 100k hours (10 million ÷ 100 drives), or about 11 years of continuous use.
Some of the sites I have been looking at are saying that this number increases significantly if the drives aren’t running around the clock: at 8 hours of daily use, those 100k hours would take about 34 years to accumulate.
I think I like the annualized failure rate better, but I don’t think either really tells the whole story.
https://www.seagate.com/support/kb/hard-disk-drive-reliability-and-mtbf-afr-174791en/
https://ssdcentral.net/hddfail/
I would rather the annualized rate were recalculated annually.
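For what it’s worth, here is the textbook exponential-failure model I assume is behind these MTBF and AFR numbers. This is just a sketch, not anything from the vendors; the 10-million-hour MTBF and 100-drive fleet are the example figures from above:

```python
import math

HOURS_PER_YEAR = 8766  # 24 * 365.25

def afr(mtbf_hours: float) -> float:
    """Annualized failure rate: chance a single drive fails within one year."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def median_life_years(mtbf_hours: float) -> float:
    """Time by which half the drives would have failed under this model."""
    return mtbf_hours * math.log(2) / HOURS_PER_YEAR

mtbf = 10_000_000  # example MTBF from above, in hours
fleet = 100        # ten servers x ten drives

print(f"AFR per drive:        {afr(mtbf):.4%}")          # ~0.0876%
print(f"Failures/year, fleet: {fleet * afr(mtbf):.2f}")  # ~0.09
print(f"Median drive life:    {median_life_years(mtbf):,.0f} years")  # ~791
```

Under that model, dividing the MTBF by the fleet size doesn’t give you a half-life at all, which is probably why none of these numbers feel like they describe reality.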
Regarding the controllers, that has been nagging at me this whole conversation. Most SATA peripheral cards do not have heat sinks, but most SAS cards do. The SAS cards at least have a more rugged appearance.


Yeah, I started with three 1 TB drives in a RAIDZ1 configuration, and when I wanted to expand I had to build a second three-drive RAIDZ1 vdev and add it to the pool instead of just adding drives to the existing vdev and resizing the array.
Besides that, I have had no issues.
I have a bunch of working drives with 2+ years on them, and in my area almost everyone still has their system installed on an old hard drive.
Yeah. I was tempering that statement with the fact that I was getting computers for repair, often with bad drives, that had 2 years of use. Now that I really think about it, we were seeing them up to about 5 years. I recall that we were discussing whether to proactively replace the drives with that much time on there. At the time I wanted to ship them back out, and others were saying that 5 years was end of life. Our job was just to get them running again vs. performing full repairs.
I did not mean an average timeline of 20 years.
Then I was not sure what you meant by this:
I don’t actually know if this is the right way to calculate it, but if for each disk you count the runtime separately and add it all together for a combined figure, then that is 20 out of the 136 MTBF years.
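If I run your method as written (assuming the 136 comes from a roughly 1.2-million-hour MTBF converted to years), the bookkeeping looks like this:

```python
# Hypothetical figures from the posts above, not measured data.
accumulated_drive_years = 20   # every disk's runtime, added together
mtbf_years = 136               # ~1.2M hours / 8766 hours per year

# Under the usual model, expected failures = accumulated runtime / MTBF.
expected_failures = accumulated_drive_years / mtbf_years
print(f"Expected failures so far: {expected_failures:.2f}")  # ~0.15
```

By that math you’d only expect about 0.15 failures so far, so seeing none across 20 combined drive-years neither confirms nor refutes the spec.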
there are plenty of enterprise SATA drives
…
Those are workstation drives. Obviously if your work buys 2 TB WD Blue drives, they won’t become enterprise drives. Enterprise drives include the likes of the WD Red Pro, Ultrastars, etc., which do use the SATA interface.
Those weren’t really on my radar, TBH. I took a look at the Ultrastar spec sheet and have to concede that the interface itself doesn’t seem to affect the lifecycle of the drive. I do have to say that the spec sheet notes at the bottom: “MTBF and AFR specifications are based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions for this drive model,” which is what I was guessing before about those million-hour numbers.
All in all, I am at this point only trying to track down and relay what I’m seeing about SAS vs. SATA. From what I can tell, they are mostly the same, but SAS has more features (higher transfer rate, hot-swap capability, etc.). HP says that SAS is more reliable, but I don’t see anything backing that up other than the features I just mentioned. Lenovo seems to agree with my take, saying that the reliability of SAS and SATA is comparable.
Hey, I’m not sure where you got your figure of 5 years, but that was a number I pulled out of my ass. At the repair depot I typically didn’t see drives that lived much longer than 17k hours (just under 2 years). That didn’t mean that they always failed at that age, only that the systems that came through had at most about that much time on them.
Regarding the 136 vs. 150 million numbers, those numbers are pure bullshit. MTBF is a raw calculation of how long it will take these devices to fail: total operational runtime divided by how many failures were experienced in the field. They most likely took a small number of warranty failures against a massive number of manufacturing runs and projected that it would take that long for about half their drives to fail.
In reality, you will see failure spikes over the lifetime of a product. The initial failures will spike and drop off. I recall reading either the data surrounding this article or something similar when they realized that the bathtub curve may not be the full picture. They just updated it again with numbers through last year, and you can see that it would be difficult to project an average lifetime of 20 years, much less 150.
My last thought on this is that when Backblaze mentions consumer vs enterprise drives they are possibly discussing SATA vs SAS. This comes from the realization that enterprise workstation drives are still just consumer drives with a part number label on them (seen in Dell and HP Enterprise equipment). Now, they could be referring to more expensive SATA drives, but I can’t imagine that they are using anything but SAS at this point in their lifecycle.
It’s the future we should all aspire to.


Stupid BAM Broadcom BAM Legacy BAM Wireless BAM Drivers BAM


I just read that recently. Let me see if I can run that source back down.
Edit: CompTIA Server+ Certification All-in-One Exam Guide, Second Edition (Exam SK0-005), Daniel Lachance, McGraw-Hill, 2021, page 138. The table there says that SATA is not designed for constant use.
Edit 2:
https://www.hp.com/us-en/shop/tech-takes/sas-vs-sata
Reliability:
SAS: Designed for 24/7 operation with higher mean time between failures (MTBF), often 1.6 million hours or more.
SATA: Suitable for regular use but not as robust as SAS for constant, heavy workloads, with MTBF typically around 1.2 million hours.
They are saying that SAS is the better option with a longer MTBF, but I don’t expect my drives to last 5 years, much less 136 (the 136-year figure is just the 1.2-million-hour MTBF converted to years of continuous runtime).
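To put HP’s numbers in perspective, here is what they would predict for a single drive over 5 years of constant use, under the usual exponential assumption (my guess at how the vendors model it):

```python
import math

HOURS_PER_YEAR = 8766

def survival(mtbf_hours: float, years: float) -> float:
    """Probability one drive is still running after `years` of 24/7 use."""
    return math.exp(-years * HOURS_PER_YEAR / mtbf_hours)

for label, mtbf in (("SATA @ 1.2M hours", 1.2e6), ("SAS @ 1.6M hours", 1.6e6)):
    print(f"{label}: {survival(mtbf, 5):.1%} survive 5 years")
# SATA @ 1.2M hours: 96.4% survive 5 years
# SAS @ 1.6M hours: 97.3% survive 5 years
```

By the spec sheet’s own math, the gap is about one drive in a hundred over five years, which squares with the “mostly the same” reading.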
My own two cents here is that you probably don’t want to use SATA ZFS JBOD in an enterprise environment, but that’s more based on enterprise lifecycle management than utility.



Here is my combination lab and workbench. I have been so busy trying to buy/sell/trade computers that I have fallen significantly behind on cleaning as I go. I also just got the network rack:

Between work, hustling, and home maintenance, I haven’t had time to finish managing the cabling or to deal with the NAS:

The goal is to get the NAS in the rack, UPS to the items in the rack, the 3D printer under the bench, and the monitors on the wall and off the bench. Then I’ll start in on plastic organizers for the bits and parts that clutter my bench.


I only ask because I work in an HP environment and run off a G3 SFF myself.


Is that an EliteDesk 800 G3, G4, or G5?


I have been toying with the idea of using USB storage, but my concern is that the controllers are not meant to be used that heavily. Supposedly SATA controllers are also not built for the abuse I have been throwing at them in my machines, and I don’t want to push it.
You will get in that fucking robot, Shinji.


I hate plastic wrap so much. Often it comes down to wrapping the item and then just ripping the wrap off because the cutter is gone.
For the other items, I will use the box itself in a pinch when there is no cutter.
I knew a flat Earth believer about ten years ago. I’d estimate hundreds at least.


Nothing works the same any more. This is bizarro land and we now have arrest warrants for civil proceedings.


You are not responsible for your feelings, only your actions.


https://feddit.org/comment/12114813
That comment says it better than I can.


I revived my second WAP today by soldering on a serial header and reloading the firmware. Sounds badass, but I broke it myself and then did a crap job trying to improvise and revive it the first time. I had to buy the correct tools before I could try again.
On the other hand, my SMB shares were mysteriously down this morning. Easy fix, but weird.