Some HDDs always become unavailable after a reboot, help needed

What’s the power supply? It may be browning out trying to spin up all the disks.

Is it always the same disks?

How are they all powered? Are you using SATA power splitters? Or have you got enough actual SATA power plugs from your PSU?


I suggest reading the Proper Power Supply Sizing Guidance | TrueNAS Community.

You are using the PSU’s own power cables, aren’t you? Mixing cables from different PSUs is dangerous ground, since the pinouts usually differ between models.

Also, I trust your WD REDs are either PLUS or PRO: plain WD REDs are SMR, which is not compatible with ZFS.

I’d look into the PSU and the wiring. I have a modular PSU and I limit / balance the number of drives per power connection to ensure that every drive gets good power.

The fact that it’s the same drives failing to come up on every boot suggests a wiring issue, i.e. too many drives on a single power connection to the PSU. But it might also be the PSU going to its knees, with those two drives simply being the unlucky ones at the end of the power bus.

Either way, this is not a good state of affairs. I hope you can get it fixed.


EVGA G2 Supernova 650W. By my quick and dirty calculations it should be enough, but it doesn’t have thaaaat much overhead. So I’m thinking of hooking up my 850W of the same model and trying that for a couple of reboots.
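For what it’s worth, the quick and dirty spin-up math can be sketched like this (the ~2 A per drive peak on the 12 V rail is an assumption; check your drives’ datasheets for the real figure):

```shell
# Rough worst-case 12 V spin-up budget (assumption: ~2 A peak
# per 3.5" drive at spin-up; check the drives' datasheets).
DRIVES=12
PEAK_AMPS_PER_DRIVE=2
SPINUP_WATTS=$((DRIVES * PEAK_AMPS_PER_DRIVE * 12))
echo "Worst-case spin-up load on 12 V: ${SPINUP_WATTS} W"
# prints: Worst-case spin-up load on 12 V: 288 W
```

On paper that’s well within a quality 650 W unit’s 12 V rating even with CPU, HBA and fans on top, but a marginal cable or an aging PSU has far less headroom at that instant. Staggered spin-up, if the HBA/drives support it, softens the peak.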

I’ve been a bad troubleshooter and in the beginning I didn’t keep track of which disks failed.

But during the last three troublesome reboots one disk has failed all three times. And one disk has failed one time.

Not sure what you qualify as splitters, but yes. They are however the 1→3 or 1→4 ones that come with the PSU. (The 650W only came with 1→3 splitters, but after contacting EVGA they said it’s OK to use 1→4 as well, assuming they come from EVGA.)

The PSU has three SATA outputs and one peripheral output, which according to EVGA is exactly the same, power/connection-wise, as the SATA outputs.

I don’t know the inner workings of my PSU, but the disks that have failed since I started keeping track have been on a 1→3 splitter. In my uneducated mind that should be more reliable than the drives on a 1→4 splitter.


My initial batch of drives were all SMR, but most (not all) have since been replaced. I know it’s far from ideal, but I also know that at least back when the WD controversy happened, TrueNAS said that while not recommended, it shouldn’t be a problem unless you had specific drives/firmwares.

So unless new information has been revealed, I doubt that has anything to do with it.

Are the failing drives SMR?

If you’re only using the EVGA SATA power cables (three- or four-plug), that should be fine.

The issue is that a single SATA power plug is not really rated for powering two drives. So when people use a SATA Y-adapter to run two drives off one plug, they can run into issues (including fires).

Sometimes it can help to combine the rails (a configuration option on some units), but your PSU doesn’t seem to be multi-rail. Which is a good thing.

I think keeping to the ATX multirail spec went out of fashion.

Anyway, 12 disks is right on the cusp of being “a lot” where balancing power can become an issue.

I’m unsure about the drive that has failed all three times since I started to keep track of them. Could be to be honest.

The drive that has just failed once (again, since I’ve started to keep track) was not an SMR drive, however.
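If it helps, one way to check, at least for the WD Reds, is the model number. A heuristic sketch (the 2–6 TB EFAX parts are the DM-SMR ones; verify against WD’s published model lists, and note that other vendors need their own lookup):

```shell
# List each disk's model and flag likely WD Red SMR parts.
# Heuristic only: WD*EFAX (2-6 TB) are DM-SMR, EFRX/Plus/Pro
# are CMR. Requires smartmontools; run as root.
for dev in /dev/sd?; do
  model=$(smartctl -i "$dev" | awk -F': *' '/Device Model/ {print $2}')
  case "$model" in
    *EFAX*) echo "$dev  $model  <- likely SMR" ;;
    *)      echo "$dev  $model" ;;
  esac
done
```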

I’ve decided to rebuild the server when one vdev gets completely upgraded with bigger drives and then go with a ~10 disk setup instead of 12. But I’m still 2-3 disks away from being there. If only everything wasn’t that expensive. Maybe I should just bite the bullet and get it over with.

I’m not sure whether I look forward to or dread all the time that will go into figuring out which MB/CPU I’ll need :confused:

Can’t imagine what a physical issue might be, other than cabling.

Is the firmware on the HBA the latest?
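For reference, on the common LSI/Broadcom HBAs that’s a quick check (a sketch; the exact tool name depends on the card generation):

```shell
# Show installed firmware/BIOS versions on LSI/Broadcom HBAs.
# sas2flash covers SAS2008/2308 cards, sas3flash covers SAS3008.
sas2flash -listall          # or: sas3flash -listall
# Compare the reported "Firmware Version" against the latest
# IT-mode release for your card on Broadcom's support site.
```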

Actually, WD itself tells users not to use SMR drives with ZFS. On WD Red NAS Drives - Western Digital Corporate Blog

The increased amount of sustained random writes during ZFS resilvering (similar to a rebuild) causes a lack of idle time for DMSMR drives to execute internal data management tasks, resulting in significantly lower performance reported by users. […] we currently recommend our CMR-based WD Red drives, including WD Red Pro and the forthcoming WD Red Plus.

WD’s DMSMR (all SMR, actually) is not compatible with ZFS.


They should never have labelled an SMR drive as a “NAS drive”.

One of the all-time marketing fails, that one, killing their Red brand.


Not a tin-foil hatter here, but I maintain that the SMR debacle was a deliberate cash grab that the C-suite hoped no one would notice.

SMR and CMR drives are physically indistinguishable, other than a different firmware and ~20% more capacity enabled via the overlapping SMR tracks. So corporate could save themselves a platter, two heads, etc. per drive, yet sell the SMR drive at the same price as the CMR version.

If WD had been open about the changes they were making to the Red line by announcing the transition in advance, etc. then I would have called it a marketing mistake but whatever, it’s their brand. The fact that they did it underhandedly indicates it was a cash grab to polish quarterly results.

Sorry, they don’t get a pass. Their subsequent reaction as Blocks and Files, ServeTheHome, etc. exposed the scummy behavior was more about preventing successful class-action lawsuits than a sudden change in morals.

No one was fired, and the company went on to continue to sell hardware into a channel that their own employees had publicly declared fundamentally incompatible with CoW/heavy NAS use, and they forked the Red line into three separate sub-categories where the entry level is SMR and the rest CMR.


It might be economically savvy to buy a few bigger drives (i.e. 2 or 3 18TB ones), move everything there, and resell the old drives.

Maybe we just have different definitions of “not compatible”, but worsened performance is not the same as being incompatible.

Many of my SMR drives have been running in my TrueNAS server for 7-8 years without a problem. Yes, resilvering takes a while, but other than that it manages all my needs without any problem. The heaviest lifting it does, however, is torrenting and Plex, so I guess the demands are low.

So I’m going to keep using them until they need to be replaced.

In this case, “not compatible” means “may suddenly stop working”, and this is not a hypothetical situation. It does happen, particularly during a scrub/rebuild, which is a really, really bad time for it to happen, and it’s due to the SMR drives not being 100% compatible with ZFS.


Brand new WD Red SMR drives have a firmware bug (I don’t know if it has been fixed) that caused problems with ZFS. As in FATAL problems, and enough for ZFS to declare the WD Red SMR drive as failed.

Now to be fair, a simple drive test of writing to the entire drive before ZFS use will solve that firmware bug.

Basically, WD Red SMR drives return an error, IDNF (ID Not Found), when reading a sector that has not already been written. Stupid bug, breaking hard drive behavior going back 40 years… ZFS appears to bundle multiple reads into a single drive read request, even to the point of including extra, un-needed sectors in between those bundled reads. With that WD Red SMR bug, ZFS will trigger the IDNF error. Stupid, as I said.
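The pre-initialization pass mentioned above can be as simple as one full sequential write (a destructive sketch; /dev/sdX is a placeholder, so triple-check the device name before running anything like this):

```shell
# DESTRUCTIVE: writes the whole drive, erasing everything on it.
# Pre-initializes every sector so later reads never hit the
# IDNF-on-unwritten-sector firmware bug. Run BEFORE pool creation.
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# Alternative that also verifies what was written:
#   badblocks -wsv /dev/sdX   (equally destructive)
```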

If someone’s SMR drive is working fine for them, and they understand the risks of extended re-silvers and potentially longer scrubs, great. Note that longer ZFS scrubs can occur because the SMR drive over time can become extremely fragmented. Thus, potentially requiring excessive head seeks for reads that a CMR drive with sequential data stripes won’t have.

And yes, I have and use a Seagate Archive 8TB SMR drive with ZFS as one disk in my backup rotation. It was one of the few 8TB disks at the time I bought it. It does work, and has gotten slower over time. Eventually I will replace it. But, at present, I am not backup window time constrained, (as in I don’t really care how long my every month backup takes).


Back to why some disks can have trouble with server usage:

  • Too short idle time, causing excessively quick head parking
  • Too long a Time Limited Error Recovery

The first is a useful feature for laptops and some desktops. But for server usage, it can cause delays in access.

For the second, TLER (other vendors have a different name or acronym for it) involves limiting the time spent recovering a bad sector. For desktop & laptop drives, not limiting recovery time is useful because they likely don’t have any RAID involved. Recovery can take more than 1 minute to either succeed or fail. In the meantime, ZFS will consider the drive failed because the drive is no longer responding to ANY request.

NAS-specific drives are expected to be used with RAID, thus have time-limited error recovery, perhaps the minimum of 7 seconds. And if the recovery fails, they return the error to the host software. For a NAS & ZFS that is good, as ZFS will simply re-create the data from mirror or parity information and tell the drive to spare out the failing sector(s).

Some non-NAS or non-server drives can have those parameters adjusted.

SMART output should show if either feature is enabled. I don’t have the details handy.
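For anyone who wants to check, both show up via smartctl (a sketch; /dev/sdX is a placeholder, and not every drive supports SCT ERC):

```shell
# TLER / SCT Error Recovery Control, in tenths of a second:
smartctl -l scterc /dev/sdX            # show current read/write limits
smartctl -l scterc,70,70 /dev/sdX      # cap recovery at 7.0 s, if supported
# Aggressive head parking shows up as a rapidly growing
# Load_Cycle_Count (SMART attribute 193):
smartctl -A /dev/sdX | grep -i load_cycle
```

Note that on many drives the scterc setting does not survive a power cycle, so people typically reapply it from a boot script.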


I am doing replication to an Archive 8TB pool in RAIDZ2. Replication is really a pain while replicating my iocage jails’ root, due to the large amount of data under the “clientmqueue” folder, which contains 100k’s of files.

However, I do not experience any crash or drive disconnection. It just takes it sweet time.


WD itself recommends not using their SMR drives with ZFS… they are known to not be compatible: search the old forum for examples.

I still think we have different definitions of “not compatible”.

As WD says in your link, they can’t recommend SMR drives due to worsened performance in certain tasks. But they will still work.

Yes, there are specific firmwares in specific drives that straight out did/do not work with ZFS, but I don’t have those drives.

So unless someone can give a reason other than bad performance and longer scrub/resilver times, I’m going to stick with my drives that have worked fine for almost eight years.

Data loss. As I wrote, look in the old forum.


Many small files are a big problem for HDD pools. My rsyncs would crawl along… it got better with a metadata-only, persistent L2ARC.

But then I got smart and made two major changes:

  1. The small files were mostly OS backups. So I archived them into larger sparsebundle archival files. That eliminated millions of small files.

  2. I implemented an sVDEV, and with the right small-file size thresholds, record sizes, etc. it almost doubled the speed of large sequential writes.

Now, an sVDEV is not for everyone, and it carries risks, but it’s been simply amazing how fast directory reads go when all metadata is on an SSD.
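For completeness, the two setups described above look roughly like this (a sketch; the pool name “tank” and the device names are placeholders, and the 64K threshold only makes sense if it is below the dataset recordsize):

```shell
# 1) Metadata-only L2ARC (persistent across reboots by default
#    since OpenZFS 2.0):
zpool add tank cache sdc
zfs set secondarycache=metadata tank

# 2) Special vdev for metadata and small blocks. Mirror it:
#    losing the sVDEV loses the entire pool.
zpool add tank special mirror sda sdb
zfs set special_small_blocks=64K tank   # blocks <= 64K land on the SSDs
```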