Mixing 512e and "native" 4k in a single pool?

My apologies as I’m sure it’s been asked 1000 times already. I’m seeing conflicting advice regarding this:

I have an existing ashift=12 pool where every drive is 512e. One drive is failing. I believe the [inbound] replacement drive is native 4k. Will I have issues with this? Safe to ignore?

Second question on a different topic: Is there a standard procedure for undersizing the new disk’s ZFS partition by a few %? I’m hoping to avoid potential trouble in the future if/when I replace this new drive and its new replacement is a few sectors smaller because reasons…

Here is a post I replied to that relates to 512e vs 4K compatibility:

In summary, I didn’t experience any issues with the mix.


Someone on that thread commented they can’t be mixed on the same vdev. Several others said there’s no issue. It’s like a metaphor for how my research is going…

Then you have to decide who to trust and who’s telling bullsh!t…
Hint: ashift=12 means that all drives will receive requests in multiples of 4k blocks anyway.
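
If you want to see it for yourself, something like this should do the trick (assuming a pool named “tank”; substitute your own pool and device names):

zdb -C tank | grep ashift        # ashift recorded in the cached config for each vdev
lsblk -o NAME,LOG-SEC,PHY-SEC    # logical/physical sector size reported by each drive

As long as every vdev shows ashift: 12, ZFS only issues 4k-aligned I/O, so a native 4k drive and a 512e drive behave identically from the pool’s point of view.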


This is a very good video on the topic: https://m.youtube.com/watch?v=_-QAnKtIbGc&pp=ygUKWmZzIGFzaGlmdA%3D%3D

This is tricky. Any replacement disk MUST have AT LEAST as many sectors as the disk it is replacing.

Now, if you are worried about the last of the slightly smaller disks being replaced with a slightly larger one, then you want this pool property set to “off”:
autoexpand
This prevents any growth of the vDev or pool when a replacement disk is slightly (or much) larger. That preserves the ability to later replace the larger disk with a smaller, but still suitable, disk.

For example, a pool made up of 4 x 4TB, 2-way Mirrors. You have to replace 1 disk with a 6TB because that is all you have handy. Later, the second 4TB disk of that same Mirror fails, and you have to use another 6TB, (or larger), disk. However, you don’t want to expand the pool since the larger disks are temporary. You are just awaiting vendor warranty replacements. Thus, that autoexpand property exists.
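
Checking or changing it is just a pool property (the pool name below is only an example):

zpool get autoexpand tank
zpool set autoexpand=off tank

And if you later decide you do want the extra space after all, zpool online -e <pool> <disk> will expand the vDev manually, even with autoexpand off.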

Makes sense?
Answers your question?

Or is your question about new usage?
For a new vDev or Pool?


Agreed.

For whatever reason autoexpand is already disabled – I probably did it years ago and forgot about it.

So here’s how my current drives appear:

Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F316F9AB-A821-594D-AA4B-881567477416

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 7814019071 7814017024  3.6T Solaris /usr & Apple ZFS
/dev/sdb9  7814019072 7814035455      16384    8M Solaris reserved 1

After one is replaced with a larger drive (and assuming autoexpand remains OFF), will the new drive also show a 3.6T partition of type Solaris plus a few TB of unpartitioned space?

Or will the partition fill the new drive and ZFS simply won’t use all of it?

I’m curious about what would happen if the new [larger] drive were manually “padded” with a placeholder partition at the end, and THEN autoexpand were enabled. The end goal is to avoid the horror stories I’ve read where people replace a disk and hit a brick wall because the new drive is just a few sectors short for some reason (perhaps firmware revisions).

In other words, deny ZFS a couple thousand sectors today to leave wiggle room in the future for drives that could be [physically] a few sectors short of the one they’re replacing. Am I making sense?
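
Something like this is what I have in mind, purely as a sketch (the device name, sizes, and placeholders below are made up, not taken from my pool):

sgdisk -n1:2048:+3700G -t1:BF01 /dev/sdX    # ZFS partition, deliberately a bit under the drive’s raw capacity
sgdisk -n9:0:+8M -t9:BF07 /dev/sdX          # small Solaris reserved partition, like my existing disks have
zpool replace <poolname> <old-device> /dev/disk/by-partuuid/<new-partuuid>

i.e. hand ZFS a partition instead of the whole disk, and keep that partition comfortably smaller than any replacement drive is ever likely to be.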

I personally don’t mind bumping every replacement drive up to the next size category and wasting some space for a while – I’m no stranger to “short-stroking” drives in the SAN world – but I’m more worried about warranty replacements where I have no control.

There is an option in SATA disks that allows them to appear smaller than they really are. It’s called the Host Protected Area (HPA) and can be changed with the Linux hdparm program. Naturally this should only be done on a disk that is not in use and does not hold valid data.
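
Roughly like this (the device and sector count below are only an example, not values for your drives):

hdparm -N /dev/sdX                  # show the current visible max sectors vs. the native max
hdparm -N p7814037168 /dev/sdX      # set a smaller visible max; the leading 'p' makes it permanent

Newer hdparm builds may also insist on the --yes-i-know-what-i-am-doing flag for this, and some drives want a power cycle before the new size shows up, so re-check with hdparm -N afterwards.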

SAS disks likely have a similar option as well; I KNOW the old Sun Microsystems used to shrink all of its customer disks slightly. That was to make sure disks from any vendor could be used, since the vendors had not standardized on specific sizes.


/dev/sdf is my new drive. grumble, grumble… Consumer desktop drives.

Edit: I did the replacement manually… I used fdisk to create a 5TB partition of type 157 (Solaris /usr & Apple ZFS) starting at sector 2048, left a 65,536-sector gap, then filled the rest of the drive with an unformatted, ordinary Linux partition.

I see my original disks have a 16,384-sector Solaris reserved partition, which I presume holds some kind of metadata? I left the gap in case TrueNAS needs to create one on the new disk. Then zpool replace <poolname> /dev/disk/by-partuuid/<olduuid> /dev/disk/by-partuuid/<newuuid>, and now it’s resilvering.
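
For anyone following along, this is roughly how I matched things up (the partition path is just an example from my system):

blkid -s PARTUUID /dev/sdf1      # PARTUUID of the new ZFS partition
ls -l /dev/disk/by-partuuid/     # the same mapping from the other direction
zpool status -v <poolname>       # watch the resilver progress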

It’s interesting to watch this process with iostat. The specs suggest the new disk is half again as fast as its mirror partner, and it’s plainly obvious: iostat shows the old drive pretty much pegged at 100 %util while the new drive loafs along at 50-60%.
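
If anyone wants to watch the same thing, something like this in a second terminal works (five-second interval):

iostat -dx 5    # extended per-device statistics, including %util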

Edit 2: The new drive is 512e (512-byte logical / 4K physical), contrary to the specs I’ve read. Okay…