Create RAIDZ with HDDs of different sizes on console

Hi there,
I’m sorry if there is already a thread on this topic, but I didn’t find a solution to this one:
What is the command line to create a RAIDZ like this:
vdev1: sdd 1.8 TB
vdev2: sda 0.15 TB, sdb 0.15 TB, sdc 0.5 TB, sde 0.5 TB

Yes, I know that different sizes of discs can lead to problems but I’d like to do that out of experimental thinking.
I tried with this one:
sudo zpool create -f -m /mnt/data data ??? sda sdb sdc sde ??? sdd. But what can I insert for the “???”? I thought it would be “stripe”, but that doesn’t work.
Thanks for all help and, if needed, patience
Wolf
PS: Yeah, I’m new to this… :slight_smile:
PPS: I drove “Perplexity” nuts with this question. It turned and turned and didn’t find a correct answer. Was quite funny to watch.

You can do this with Unraid.

TrueNAS is not a good candidate for these types of setups.

2 Likes

You can create a raidz from the GUI even with unmatched drives.
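For completeness, a hedged sketch of the CLI equivalent (pool name and mount point taken from the original post; the sdX names are examples, and /dev/disk/by-id paths are safer in practice). With a single raidz1 vdev, every member contributes only as much space as the smallest disk:

```shell
# Sketch: one raidz1 vdev across all five disks. Each member counts
# only as much as the smallest disk (~0.15 TB here), so most of the
# capacity of sdd/sdc/sde is wasted. -f forces past the size-mismatch
# warning that zpool would otherwise raise.
sudo zpool create -f -m /mnt/data data raidz1 sda sdb sdc sde sdd
```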

But it seems that what you want to do is a mirror between a drive and a stripe.
Short and definitive answer: You can NOT do that with ZFS.

Here Be Data-Eating Dragons

You may be able to create an mdadm RAID0 from the small drives and then hand the resulting device to ZFS as one side of a mirror. Good luck maintaining that thing when a drive fails.
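A sketch of that idea, assuming the device names from the original post (not recommended: mdadm hides individual disk failures from ZFS, and a single failed small disk takes out the whole striped side of the mirror):

```shell
# Stripe the four small disks into one ~1.3 TB md device.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sde
# Mirror the md device against the 1.8 TB disk. The mirror is limited
# to its smaller side, so roughly 1.3 TB is usable.
sudo zpool create -f -m /mnt/data data mirror /dev/md0 /dev/sdd
```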

1 Like

Hear me out on this.

Create two pools (A = the single 1.8 TB drive, B = a stripe of the smaller drives) and replicate from pool A to pool B on a nightly or so basis?

It has all the downsides possible but this could work…
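A minimal sketch of that layout, with hypothetical pool names (pool A on the 1.8 TB disk, pool B striped across the four small disks, and a snapshot-and-send from A to B that could run from cron):

```shell
# Pool A: the single 1.8 TB disk. Pool B: a plain stripe, no redundancy.
sudo zpool create poolA sdd
sudo zpool create poolB sda sdb sdc sde
# Nightly: snapshot pool A recursively and replicate it into pool B.
# After the first full send, incremental sends (zfs send -i old new)
# are much cheaper.
sudo zfs snapshot -r poolA@nightly
sudo zfs send -R poolA@nightly | sudo zfs recv -F poolB/backup
```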

Why though?

Other than academic exercises and curiosity, why would you put your data at such risk?

If you really want to put together a random assortment of drives, Unraid is designed to do just that.

With Unraid, you can create a storage pool that offers 3.1 TB capacity with no redundancy, or 1.3 TB capacity with full redundancy (total capacity minus the largest drive, which serves as parity). With the second choice, you can later add another 1.8 TB drive to the JBOD to bring yourself to 3.1 TB capacity with redundancy.

Are these just drives you have lying around and you don’t want to spend money for newer high capacity drives?

To be honest, yes. I’m looking for the right server/NAS system to replace my old Synology 218+. DSM is quite good and easy to use, but after the news that Synology will only accept its own HDDs I thought I’d look elsewhere. I had Unraid in mind, but it’s closed source and costly. First I tried OMV, but it’s too much of a kludge. Then I tried TrueNAS and it felt right so far. Good GUI. But I’ve never really “worked” with RAID. With DSM it simply runs and you have nothing to decide.

Oh, curiosity was the only reason.

1 Like

There is a recent event brewing in the ZFS world related to this:

The makers of HexOS are sponsoring development with the goal of making ZFS handle mixed drive sizes better.

We’ll see how it pans out.

Edit:
Relevant commentary from a Klara dev:
https://www.reddit.com/r/zfs/comments/1ktm9zv/comment/mty9x4j/

2 Likes

I can’t imagine anything going wrong…

How? Magic?

No “50% loss” in capacity that a mirror vdev offers, yet with the same redundancy as a mirror vdev? Sounds more like RAIDZ than a mirror.


I read the entire page and I still don’t know what they’re talking about.

They don’t show you how, they just tell you it is so. How are those four disks configured exactly?

I added a reddit link with commentary, it goes into some depth at least.

1 Like

Yeah, real “super-simple”.

A two-way mirror using 3 drives… which can only survive the loss of one drive from the vdev, with 64 GB “chunks” shuffled across the devices.

Why wouldn’t you just use RAIDZ at that point?

All the best of luck to them. Seeing as how in 8 years they never implemented multiple keyslots for encryption, the “oopsies” with block-cloning and then later indirect vdevs, and how they shipped a “good enough but not complete” feature in “corrective receives”, I’m not holding my breath for this to be safe enough for serious data preservation.

Call me negative.

1 Like

That I can answer: no extra space is used for parity.
It’s all for HexOS users who want to throw in any odd disk they have lying around, like they would with Synology or Unraid.

“Mirror” first, then “raidz1”. Further parity levels may come later. Or not, since the target users wouldn’t bother anyway. Or because it gets too complex to implement.
With some effort, let’s not despair: OpenZFS may yet be brought level with btrfs, the great Linux CoW filesystem which cannot manage parity arrays. :roll_eyes:

1 Like

In all fairness, wouldn’t that system have a potentially higher read speed than RAIDZ?

It does make it easier and cheaper (read: needs fewer disks so less space, smaller PSU) to expand mirrors made up of smaller drives. And that is one measure of efficiency especially in a SOHO environment.

Sounds similar to how Drobo’s “BeyondRAID” used to work.

I’ve started a thread on the “ZFS AnyRAID” feature:

1 Like