ZFS and SMR drives

I know that for a long time the advice was to never use SMR drives with ZFS. However, there are now different types of SMR drives (mine are DM-SMR Seagate drives), and I am wondering whether ZFS has adapted to them yet, or whether there are ZFS profiles/configurations that work better with them these days?

Thank you

Stuart

No.

And no.

6 Likes

Yes.

1 Like

Some time ago I based my plans on this article. It’s not new, but it’s still relevant.

1 Like

This is not really specific to ZFS. Generally, any RAID system’s documentation will tell you not to use SMR drives. There’s a reason these things aren’t used in enterprise systems.

1 Like

Cheating a bit by cribbing a response of mine from the old forums:

SMR mixed with a copy-on-write (COW) filesystem like ZFS is a bad combination regardless of vdev geometry, until the filesystem is rewritten to take advantage of it.

It isn’t “writes” that cause the SMR stalls, it’s “rewrites” or “overwrites.” New writes into empty areas, especially done at larger recordsizes, can be handled fine because the drive doesn’t need to reshuffle any existing data on those sectors. So if your intention is to fill a pool with multi-GB “Linux ISOs” or other archival workloads, your drives should be able to happily lay those onto a series of overlapping 256MB SMR zones. Even if you later delete those large chunks of files, you should hopefully be freeing entire zones at a time, and only have to reshingle the small number of zones where “File A” ends and “File B” begins. Give the drives enough time between the deletes and the new data ingest to fix themselves, and you won’t really notice.
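For the “big sequential writes” case, here’s a minimal sketch of how you might tune a dataset for that kind of archival workload. The pool/dataset names are placeholders, and none of this makes DM-SMR a good fit for ZFS; it just keeps records large and rewrites rare:

    # Hypothetical pool/dataset; 1M records keep ingest mostly sequential
    zfs create -o recordsize=1M -o compression=lz4 tank/archive
    # Cut down on small metadata rewrites for a bulk-ingest dataset
    zfs set atime=off tank/archive
    zfs set logbias=throughput tank/archive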

The problem comes when you start introducing small records or overwriting chunks of larger files frequently. Basically, the more often you “poke holes” in the SMR zones, the more frequently they have to reshingle, and that’s what causes the drive to stall out doing internal housekeeping.
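If you want to see that stall for yourself, a fio run along these lines (device name is a placeholder, and it will destroy whatever is on that disk) hammers a DM-SMR drive with small random overwrites until the CMR cache fills and the reshingling backlog shows up as multi-second latency spikes:

    # WARNING: writes to the raw device - scratch disks only
    fio --name=smr-rewrite --filename=/dev/sdX --rw=randwrite \
        --bs=8k --direct=1 --ioengine=libaio --iodepth=4 \
        --runtime=600 --time_based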

Once HA (Host Aware) and HM (Host Managed) SMR drives are commonplace and ZFS has code to support them, ZFS may be able to handle some of that housekeeping itself - controlling data placement on the platters to say “okay, this is a larger record, I can drop it on an empty SMR zone nice and fast, which buys me X ms of free time; I also know how big the CMR cache area is and how full it is, so let’s do a little housekeeping now to preserve my ability to keep dumping data into an empty SMR zone.”
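For reference, on an HA or HM drive the kernel already exposes the zone layout and write pointers that such ZFS code would have to track. A quick way to look at it (device name is a placeholder) is util-linux’s blkzone:

    # Start, length, write pointer and type (conventional/sequential) per zone
    blkzone report /dev/sdX | head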

DM-SMR doesn’t communicate anything back to the filesystem; it’s more of a compatibility layer pretending to be a regular CMR drive. Works okay as a single drive for many filesystems, but ZFS needs to know what’s going on under the hood to use it efficiently.
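You can see that pretending from the host side: a DM-SMR drive reports itself as an ordinary, unzoned block device (device name is a placeholder):

    # DM-SMR reports “none”; HA/HM drives report “host-aware”/“host-managed”
    cat /sys/block/sdX/queue/zoned
    lsblk -o NAME,SIZE,ZONED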

As of 2024, there’s been some work done with ext4 [1] and f2fs, but nothing integrating something like libzbc [2] into OpenZFS yet. I don’t know that there’s sufficient incentive: SMR provided a small bump in density to eke out more TB/$, but EAMR (Energy-Assisted Magnetic Recording) technology has packed the bits in tighter as well, without the associated downsides of zoned block devices.
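If anyone wants to poke at [2] without writing C, my recollection is that building libzbc also installs a handful of small CLI tools; something along these lines (tool names from memory, device is a placeholder) shows the zone state an OpenZFS integration would have to consume:

    # Zone model and device info, then a dump of all zones and write pointers
    zbc_info /dev/sdX
    zbc_report_zones /dev/sdX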

[1] https://github.com/Seagate/SMR_FS-EXT4
[2] https://github.com/westerndigitalcorporation/libzbc

1 Like

Don’t use them :grinning:
But to be fair, I used SMR drives for years without a problem.
As long as you don’t write much to them and keep them under 80% full, it kinda works.

But sooner or later, all of the drives gave me checksum errors. I don’t know if that was because they were shitty Seagate Archive drives or because SMR was simply too slow to respond.

Just never, ever replace a failed drive with an SMR drive. That resilver will take weeks.
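On the checksum-error and resilver points: both show up in plain zpool status, so it’s easy to keep an eye on an SMR-backed pool (pool name is a placeholder):

    # Per-device READ/WRITE/CKSUM counters plus scan/resilver progress
    zpool status -v tank
    # Kick off a scrub to surface latent checksum errors early
    zpool scrub tank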

2 Likes