Is it possible to create a mirror vdev with uneven drive sizes?

I have 2x4tb (A,B) and 1x8tb (C) drives. Is it possible with ZFS to create an 8tb mirror, where one of the 8tb drives is actually two 4tb drives?

To me this makes logical sense; keep half the data from C on A, and the other half on B. If any drive is lost, the remaining drive(s) still have the data, but I haven’t been able to find any talk about such a setup (though that may be down to my lack of search skills).

I know you could also run raidz1 with 3x4tb, only using half of C, but I’m still curious about how the setup above would work.

No.

Click this if you hate your data

Yes, it’s technically possible from the command line, but it would not offer any additional benefit, while adding needless risk of losing your data. If the “split” disk fails, it’s the equivalent of losing a drive in two different mirrors, putting both vdevs in a non-redundant state rather than just one. Such a situation would require you to resilver both vdevs with replacement disks.
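For the morbidly curious, the “technically possible” version looks roughly like this. Every pool name, device path, and partition layout below is made up purely for illustration:

```
# Carve the 8 TB disk (here /dev/sdc) into two halves; each mirror sizes
# itself to the smaller of its two devices anyway.
parted -s /dev/sdc mklabel gpt mkpart half1 0% 50% mkpart half2 50% 100%

# One pool, two mirror vdevs, each pairing a 4 TB disk with one partition:
zpool create tank \
  mirror /dev/sda /dev/sdc1 \
  mirror /dev/sdb /dev/sdc2
```

Note that /dev/sdc is now a single point of failure for the redundancy of both vdevs, which is exactly the problem described above.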


So what you’re saying is that if one of the 4TB drives is lost and replaced, both 4TB drives would have to be resilvered, whereas with regular raidz1 you would only need to resilver the new drive?

No. More specifically, if you lose the 8 TiB drive, you need to resilver both (now degraded) mirror vdevs.
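Concretely, with hypothetical device names, losing that one physical disk means two separate replacements and two resilvers:

```
# Both mirror vdevs are degraded at once; each needs its own replacement:
zpool replace tank /dev/sdc1 /dev/sdd
zpool replace tank /dev/sdc2 /dev/sde
```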

Never mind the performance penalty for a disk being part of two different vdevs.

Please don’t do this. It’s not worth it. Unless the data and optimal performance aren’t that important.

You’re better off purchasing an additional 8 TiB and constructing two mirrors, for a total of 12 TiB usable capacity:

two-way mirror of 4 TiB + two-way mirror of 8 TiB

```mermaid
flowchart TD

pool(("Pool\n12 TiB"))
mirror1[["Mirror VDEV 1\n4 TiB"]]
mirror2[["Mirror VDEV 2\n8 TiB"]]
driveA[("Drive\n4 TiB")]
driveB[("Drive\n4 TiB")]
driveC[("Drive\n8 TiB")]
driveD[("Drive\n8 TiB")]

pool --- mirror1
pool --- mirror2
mirror1 --- driveA
mirror1 --- driveB
mirror2 --- driveC
mirror2 --- driveD
```
  • Losing a drive in either mirror only requires replacing that one drive
  • You’ll gain optimal performance
  • You’ll have 12 TiB usable capacity (rather than 8 TiB in your proposed setup)
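In command form, that layout is just one pool built from two mirror vdevs (device paths below are placeholders):

```
# 4 TiB mirror + 8 TiB mirror = 12 TiB usable
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
```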

Oh okay, so in this scenario there are two mirror vdevs, each of 4tb usable storage. I was curious about whether you could have something of a hierarchical vdev or equivalent:

```
8 TiB Mirror vdev
|- 8 TiB drive
|- 8 TiB Mirror vdev
   |- 4 TiB drive
   |- 4 TiB drive
```

I’ll probably go for a raidz1 for now, then I’ll get another 8 TiB drive a bit later and use your proposed setup.

You’ll have to destroy the pool and start all over again if you decide to go this route and then later change the topology.

Unrelated, where did you create this lovely looking diagram?

I already have a pool with the two 4 TiB drives, so I’ll have to destroy it regardless, but thanks for the warning and the explanations, they’ve been very helpful!

Are these 4TiB drives mirrored? If you purchase an additional 8TiB as mentioned, you can just add another mirrored VDEV (2x8TiB) and the pool will expand.

You can add another vdev of two 8 TiB drives for a total of 12 TiB capacity without destroying anything.
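Assuming the existing pool is called tank (a placeholder name), it’s a single command once the second 8 TiB drive is installed:

```
# Add a second mirror vdev (the two 8 TiB drives) to the existing pool:
zpool add tank mirror /dev/sdc /dev/sdd
```

The extra capacity is available immediately; existing data stays where it is, and new writes are spread across both vdevs.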

How to create simple flowcharts

It’s much harder to do than you realize, because Discourse’s “preview panel” (seen on the right side when you are writing a post) causes your text cursor to act buggy, and it will even apply an “auto-resize” which crops out the flowchart. :frowning_face:

In order to make a flowchart, you need to do it “blind”, without a real-time preview. (You have to memorize the syntax and rules.)


My suggestions:

OPTION 1

  1. Start with a 2x4TB mirror vDev with the 8TB as a hot-spare (rough commands sketched after this list).
  2. If you want to, buy a 2nd 8TB drive now and create a second 2x 8TB vDev in the same pool.
  3. Alternatively, wait until one of the 4TB drives fails, then replace it with an 8TB drive, and later replace the remaining 4TB drive with an 8TB one as well.
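Rough commands for steps 1 and 2, with placeholder pool and device names:

```
# Step 1: 2x4TB mirror with the 8TB drive attached as a hot spare
zpool create tank mirror /dev/sda /dev/sdb spare /dev/sdc

# Step 2: later, release the spare and add a 2x8TB mirror vDev instead
zpool remove tank /dev/sdc
zpool add tank mirror /dev/sdc /dev/sdd
```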

OPTION 2

  1. Create a 3-wide RAIDZ1 vDev for 8TB useable space (commands sketched after this list).
  2. Replace drives as they fail with 8TB drives.
  3. In a few years you will probably have 3x8TB drives and have 16TB useable space.
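And roughly for Option 2 (placeholder names again):

```
# 3-wide RAIDZ1; usable space stays limited by the smallest drive until all are upgraded
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
zpool set autoexpand=on tank

# Replace drives one at a time as they fail or get upgraded:
zpool replace tank /dev/sda /dev/sdd
```

With autoexpand on, the vDev grows on its own once all three drives are the larger size.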

My biggest limiting factor right now is cost. I want to eventually get up to a single raidz2 vdev of 16TB drives (I have 6 bays). I started with a 4TB drive and my plan was to slowly increase the size of the drives over time as I needed more space, but it seems I’ve made some misinformed decisions along the way.

Based on everyone’s advice, I think I’ll get a 16TB drive now, keep my 2x4TB mirror, create a 2x8TB mirror, eventually get more 16TB drives to create a 16TB raidz2 vdev, and start getting rid of the smaller capacity drives.

Just to double-check: you’re going to use the 16TiB drive that you purchase to create a 2x8TiB mirror with the 8TiB drive you already have (mirrors fit to the size of the smallest disk, so you’ll temporarily lose 8TiB)? There’s nothing wrong with doing it this way, just want to make sure that is your intention.

One thing of note is that if the 8TiB disk goes, the minimum size you’ll be able to attach is a 16TiB disk as you can only attach a drive the same size or larger.

Yep, exactly.

I don’t think that will happen as the 8TB is also new, but worst-case scenario I’m forced to get more 16TB drives earlier than expected.


When you put in the final 16TB drive you will have 2x vdevs each of which is a mirror.

It will not be possible to convert this to a RAIDZ2 without copying ALL the data off, destroying the existing pool, and creating a new RAIDZ2 pool. So you may be stuck with 2x 16TB mirrors instead.

Your proposed plan has you starting with 12TB of useable space, moving to 20TB of useable space when you replace the 8TB drive with a 16TB drive, and ending with 32TB of useable space when all drives are 16TB, but you will be stuck with 2x 16TB mirrored vdevs.

If you start now with a RAIDZ2 across the existing 4 drives you will have 8TB useable space. When you insert the last 16TB drive you will have 32TB space, but no growth in space in between.

You can’t nest zfs vdevs.

I believe the technical issue is that the mutex code is not thread-safe in this case.

BUT you could use Linux md to make a RAID0 of the two 4TB drives, then use that as one of the devices in a ZFS mirror with the 8TB drive.
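Purely as an illustration, with placeholder device names (this is not a recommendation):

```
# Stripe the two 4TB drives into a single ~8TB md device:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# Then mirror the stripe against the real 8TB drive:
zpool create tank mirror /dev/md0 /dev/sdc
```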

This is pretty nasty though, and is really just a curiosity.

Instead, do what @essinghigh suggested and make two mirrors with the 4s and a pair of 8s.

