Have you been bitten by SMR HDDs?

Years after SMR HDDs were stealthily introduced into NAS HDD lines, we are still seeing users with them. Most don't seem to know they have them.

To be clear, when Seagate introduced its first SMR HDDs in 2013, the Archive line, some people knew about the various issues and still bought & used them. Including me, as Seagate's Archive 8TB SMR was a reasonably priced 8TB drive. Back then, there were few 8TB options, and all were noticeably more expensive.

One person purposely wanted to use Seagate Archives in a ZFS pool to reduce cost. That user ran into some problems, like the expectedly long disk replacement times, and even thought the ZFS developers should write new code to support SMR HDDs.

There are three types of SMR HDDs:

  • DM-SMR - Drive Managed, which is what most consumers will have either bought or heard of. Examples are the newer WD Red, Seagate Archive & Barracuda lines.
  • HA-SMR - Host Aware, meaning the drive exposes shingle information, allowing the host to know whether a re-write of a track is needed.
  • HM-SMR - Host Managed, where the host deals with the re-writing of shingles.
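
On Linux you can check which of these a given drive actually exposes; a quick sketch (device name hypothetical):

    cat /sys/block/sda/queue/zoned
    # "none"         -> CMR, or a DM-SMR drive hiding its shingles entirely
    # "host-aware"   -> HA-SMR
    # "host-managed" -> HM-SMR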

Here is a more complete explanation:

Here is a recent thread that seems to have used Seagate Barracudas in a ZFS pool:

So, do any of you want to express your love, hate or indifference to SMR hard disks?


My Seagate Archive 8TB SMR HDD, used for backup purposes, was a good purchase at the time, FOR ME. It does its job, even today, more than 8 years later.

Yes, some sectors have gone bad, detected by an initial ZFS scrub. And since it's a single disk without "copies=2", they are unrecoverable. But I have multiple single-disk backup hard drives and use rsync. So if a block that has gone bad on the 8TB Archive disk is still active in my sources (desktop, laptop, etc…), both ZFS and rsync will write a new copy of the block / file, which IS NOW GOOD.
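
To make that concrete, here is a minimal sketch of the cycle described above, assuming a hypothetical single-disk pool named backup8tb and hypothetical paths:

    # Scrub: ZFS reads and verifies the checksum of every used block
    zpool scrub backup8tb
    zpool status -v backup8tb   # lists any files with unrecoverable errors

    # Re-run the backup with content comparison, so rsync rewrites any
    # destination file whose data no longer matches the source
    rsync -a --checksum /home/ /mnt/backup8tb/home/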

That means I may hate on WD Red SMR disks, because of both the firmware IDNF bug and the stealthy replacement of normal CMR disks in the WD Red line. But I don't hate the concept, if it truly delivers more storage at a lower cost, like the Seagate Archive line did.


A few notes.

I found that the Seagate Archive 8TB disk runs hot because of all the work it has to perform during backups, including the initial scrub. Basically, a ZFS scrub reads every used block, and the actual backup is more or less continuous writes. So I went to great effort to find an external chassis that had active (aka fan) cooling.

Next, using an SMR disk for repeated write-mostly use (backups…) makes the shingles highly fragmented compared to a simple linear file layout. This makes writing the files for the backups slower, and any later restores slower as well.

Last, ZFS, being a copy-on-write file system, causes a lot of smaller writes even for single large file writes. The metadata and critical metadata are constantly updated in the background to maintain file system integrity. This causes even more shingle fragmentation.

Conclusion - Neither ZFS nor backups with updates are the intended use of Seagate Archive disks. But, using ZFS does allow data integrity checks, which is always useful for disks with complex firmware, like DM-SMR HDDs.


I was tempted to use SMR disks for ZFS backup purposes; I had 8-10 small disks lying around. But in the end I realized that a backup I can't trust is like not having one at all. And then I also realized that a couple of those non-NAS disks (WD Blue and Green) were in reality CMR.
So I've been using those with ZFS for some months now, flawlessly (obviously not operating 24/7), and the others just as flat copies of files. A couple of those disks are literally 15-20 years old and don't have a single pending sector.

@Arwen Nice writeup on the SMR issues people experience and why.

I only tend to use SMR drives as an external USB archive drive: I copy everything I need to retain to this drive before I destroy my data, such as when I have to restore an earlier image of my main computer because I installed a piece of software that does not remove completely and reverse its changes. I have actually done that twice in the past 2 weeks. I'm testing out some trial versions of software before I buy, if I buy. I will buy one of them, and I know it will be on sale this week.

That is the only time I will use SMR drives, and not as a drive I would read/write often. Additionally, I will delete (format) the data on this USB drive before I start; it seems to make writes a little faster. If there is a specific write pattern that would be faster, I'm all ears.
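
(On the "faster" question: if the drive, and the USB bridge in front of it, pass TRIM through, a full discard may be what the format step is approximating, since it tells the firmware every zone is free. A sketch, with a hypothetical device path; note it destroys all data on the drive:)

    lsblk --discard /dev/sdX   # non-zero DISC-GRAN/DISC-MAX means TRIM is supported
    blkdiscard /dev/sdX        # mark the entire drive as unused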

So I feel they have a use in the world and are a good value for the capacity and price.

SMR? Those are the self-contained tape drives with the SATA compatibility layer, right? :wink:


Trust me, tape is a whole 'nother level of pain.

Seagate, list of HDD lines and their technology (CMR/SMR): CMR and SMR Hard Drives | Seagate US


From the SMR drives I've used over the years, most of the issues seem to come down to firmware. As the OP suggests, there are in theory 'host managed' drives, but I've yet to actually see an example of one (probably an idea that never got to production, perhaps because vendors knew just how bad "drive-managed" was).

Most of the modern implementations at least expose TRIM and let the host tell the drive what is unused, so the drive can perform garbage collection much more efficiently. The issue, however, is that AFAIK Seagate's SMR disks (mostly 2.5-inch) still don't, and a lot of the earlier WD SMR disks didn't. I just wish both of them would retroactively ship a firmware update that adds TRIM, because it would greatly increase the usability of these drives. But they like money, so they won't; as a consumer you are stuck with flawed firmware.
It should also be mentioned that ZFS is uniquely suited for this, as its autotrim feature works in real time, not on a schedule like basically every other filesystem's implementation.
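
For anyone wanting to try this, the moving parts look roughly like the following; a sketch with hypothetical device and pool names, where whether a given SMR drive honors TRIM is exactly the open question above:

    # Does the drive advertise TRIM at all?
    lsblk --discard /dev/sdX                 # zeros in DISC-GRAN/DISC-MAX = no TRIM
    sudo hdparm -I /dev/sdX | grep -i trim   # the SATA view of the same thing

    # Enable ZFS real-time TRIM on an existing pool, or run a one-off pass
    zpool set autotrim=on tank
    zpool trim tank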

The non-TRIM disks were truly miserable, with all the fun stalled I/O and slow write speeds we know SMR disks for. The newer ones, however, seem to be much better in this regard, though they are still nothing special. And now that the cost delta between SMR and CMR has stopped being a thing, most don't even have a reason to consider SMR.


Does anyone have experience with HA-SMR or HM-SMR? Do they work with TrueNAS Enterprise, or are they avoided there too?

As far as I know, HA and HM are not even usable as-is: special drivers are required.


Wife has some in a workstation used mostly for Photoshop. The drives work fine until there is a heavy load on them, such as copying thousands of large files off memory cards, or accessing libraries during culling, which requires reading/writing/deleting files; then she runs into a slow system, system lockups, application crashes and hot drives. Since we are thinking of replacing the system in a year or so, she now works off the fast pool on the main SCALE server, then transfers to the slow pool (spinning rust) for storage.

Same issue with SMR on the two (secondary/field work) laptops I have with 2.5″ drives: they work fine until something write-intensive comes along, then they fall flat on their face, so we now use my laptop with M.2 drives for camera tethering and culling at photo shoots.

I got rid of all the SMR drives I had in a couple of old ReadyNAS systems. I initially thought the systems were bad, until I found out about SMR drives. The systems are ancient but are still working with CMR drives today.


I would assume HA, AKA Host-Aware, drives are just SMR disks with TRIM support (in other words, the way they should've been designed in the first place), which should work in TrueNAS, though I (and everyone else) would love to hear someone's experience with this.

Host-Managed might require special support, but I doubt these even exist (yet). This may change with WD's OptiNAND nonsense, though.

Overall, it's a bit hard to find concrete information on SMR disks, as 90% of what is out there is press-release nonsense.

HM-SMR exists, but is sold only to hyperscalers. There’s no need for public-facing documentation on these drives.
I even suspect that HM-SMR is the reason why manufacturers still bother with SMR.


Patrick from STH made a couple of videos about SMR being bad for ZFS RAID,

but in particular he posted one on the WD Red Plus.

Why does that matter? Because WD was the brand most guilty of blatantly scamming people: selling WD Reds for NAS use (certain models; I had an older WD Red that was not SMR, though its specs were outdated, e.g. a small cache) when they knew SMR should not be used in a NAS, especially in a ZFS RAID.

So then they had to come up with the WD Red PLUS :unamused:

Seagate might also have been guilty of adding SMR, but not to their IronWolf, IronWolf Pro and Exos lines, so they weren't as blatant as WD.

There were no saints here, but one was way worse than the other.

Heck, WD even tried to sell WDDA (fake SMART): if your HDD was old, they would just write it off as unreliable, even if it passed conventional SMART tests. Basically, they wanted to trick you into replacing your drive even if it was perfectly fine.

So yeah, I went all Seagate IronWolf and Exos after WD did too many bad things :sweat:

Well, that is unfortunate :cry: Maybe some of those users bought a branded NAS, like a Synology. Wouldn't those have an HDD compatibility list? I didn't check, but I would hope SMR drives were never added to the compatibility list :face_with_raised_eyebrow: Newbie users who bought those kinds of NAS could then refer to the list and only use the drives on it. If they don't, well, that's on them :grimacing: Even if they missed the videos on why SMR is bad, they should at least refer to the compatibility list.


The challenge is that HM-SMR drives require zoned block commands. libzbc exists, but you'd need to get your hands on a drive or firmware that supports it, as well as make sure you don't break any NDAs in the process.
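
For the curious, the zoned plumbing is visible from stock Linux these days without NDA material; a minimal sketch using util-linux's blkzone, with a hypothetical device path, on a kernel built with zoned block device support:

    cat /sys/block/sdX/queue/zoned   # "host-managed", "host-aware" or "none"
    blkzone report /dev/sdX | head   # per-zone start, length and write pointer
    blkzone reset --offset 0 --count 1 /dev/sdX   # rewind one zone's write pointer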

But pointing back to a couple old posts of mine that aren’t NDA-specific material:

Rewriting ZFS to handle HM-SMR would be a massive undertaking, and the improvements in areal density brought on by energy-assisted recording have mitigated the need somewhat.


FTFY
WD sneaked SMR into its NAS line to sell cheaper drives, despite HGST engineers protesting that SMR was not suitable for ZFS, and initially reacted by denying any issue when issues did break out…
SMR can have its uses, but ZFS is definitely NOT one of them, and arguably SMR should not be used in any form of NAS. WD put SMR in a NAS line; Seagate and Toshiba did not. If someone takes an Archive drive that is not designed for NAS use and puts it in a ZFS pool, that's a user issue, not a manufacturer issue.

There are three hard drive manufacturers out there: WD, Seagate and Toshiba. Make that three-and-a-half if you count HGST, whose superior engineering team apparently still operates as an independent entity within WD. (Some drop Toshiba because of its small market share and make it two…)
Seagate had more than its fair share of reliability issues some years ago. Between sneaking SMR into the Red line and inventing the technical-specification-as-a-perception ("rpm-class"), WD did its best to prove untrustworthy. But I find it hard to categorically exclude one supplier when there are so few to choose from.

Yeah, the situation is awful. Only three picks…

I've used all three before:

HGST Ultrastar (this was before they were sold to WD?)

WD Red (this was before they added SMR; it's an old model, so the cache was small)

Seagate (I've used a non-Pro IronWolf and an Exos X12 so far)

And way before all these, I used a Samsung, the Spinpoint F3. It wasn't for NAS, but it was rated for 24/7. They no longer make HDDs, do they?

Basically, we're screwed for choice on HDDs :cry:

No. It is my understanding that Host Aware SMR exposes the zone layout & usage via SAS or SATA vendor-specific commands. This allows the host to choose where to store data.

For example, if a host is saving a huge file and it knows of a zone that is unused, the data can be written sequentially without re-writing the following tracks, and thus at more or less full speed.

The drive would still manage track re-writes as needed, as well as garbage collection. The difference is that the host software driver and file system would be aware of where to write, and how much space was available.

Not sure how all that would work in an actual implementation, and certainly not in the context of using ZFS. But the above is my understanding.
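
As a purely hypothetical sketch of that idea on Linux (made-up device path and sector number): find an empty zone in the zone report, then stream the large file into it sequentially, so no following tracks need rewriting:

    blkzone report /dev/sdX   # zone condition "em" marks empty zones
    # Suppose an empty zone starts at sector 4194304 (a made-up number):
    dd if=bigfile.bin of=/dev/sdX bs=1M oflag=direct,seek_bytes seek=$((4194304 * 512))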

I think you meant:


My first TrueNAS build was 8x Seagate Archive 8TB SMR drives in RAIDZ2,
shucked from external 3.5″ HDDs.
I had mostly big files and the pool was only 75% full.
They worked great for 1.5 years.
After that, three disks, one after another, experienced checksum errors.
I sent them in for warranty and got new ones.
Even the rebuild onto SMR drives was OK. Sure, it was slow, but it worked.

After the warranty ran out, more disks started to give me checksum errors and got replaced with non-SMR drives.

That is why my conclusion on SMR drives is:
If you only have huge files and don't need that much write performance, they are fine.
But since there are no longer noticeably cheaper SMR drives (are there even still large SMR drives?), the discussion is moot.

It seems like there’s been a fair bit more open-sourcing of information lately on zoned storage. Some good reading here; I saw how deep the rabbit-hole went and have rappelled back up to avoid losing my entire day. :wink:
