Raidz1 possible with 2 drives?

Hi,

I’m a beginner TrueNAS SCALE user. I’ve had TrueNAS up and running for a while now, and it’s been working nearly perfectly. I use it as a home lab for Home Assistant, Plex, AdGuard, etc.

For financial reasons, I started with just one 8TB drive. Now the time has come to upgrade capacity and redundancy. I want to upgrade to 3x8TB in RAIDZ1.

Can I create a 2-drive RAIDZ1, move all the data to the new pool, wipe the original drive, and use it to expand the new pool to the final 3x8TB RAIDZ1? (The expansion should be possible with the new Electric Eel release.) The information I’m finding about creating a 2-drive RAIDZ1 pool is conflicting. Is this possible, or am I making a mistake? Maybe there are better ways?

Thanks for the help in advance
Mathias

A slightly better way is to create a degraded 3-wide raidz1 with two drives and a sparse file, and then remove the file.
In any case you’ll need to use the CLI.
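A minimal sketch of what that looks like, assuming “tank” as the pool name and placeholder /dev/disk/by-id paths (check your real disk IDs with lsblk before running anything):

```
# Make an 8 TB sparse file to stand in for the missing third disk;
# sparse means it takes almost no real space on the drive holding it
truncate -s 8T /root/fake-disk.img

# Build the 3-wide raidz1 from the two real disks plus the file
# (-f because zpool warns about mixing files and real devices)
zpool create -f tank raidz1 /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /root/fake-disk.img

# Offline the file right away so nothing is ever really written to it;
# the pool is now degraded but has the final 3-wide geometry
zpool offline tank /root/fake-disk.img
```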


I don’t know about that. OP’s proposal gets the data onto redundant storage sooner, but it does rely on two new features (RAIDZ expansion and two-disk RAIDZ1). And though I’m meh on the value of rebalancing generally, in this case it would probably be a good idea (see the sketch below).

Your suggestion leaves the data on non-redundant storage longer, but involves fewer steps and well-tested ZFS features. Both, as you say, are going to involve a bit of work at the CLI.
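For what it’s worth, “rebalancing” here just means rewriting the existing data so its blocks get re-striped at the new 3-wide geometry. A crude sketch for a single file, using a hypothetical path and assuming no snapshots are pinning the old blocks:

```
# Copy the file and move the copy back over the original, forcing
# ZFS to allocate fresh blocks at the current raidz width
# (snapshots, if any, will keep the old blocks referenced)
cp --preserve=all /mnt/tank/media/movie.mkv /mnt/tank/media/movie.mkv.tmp
mv /mnt/tank/media/movie.mkv.tmp /mnt/tank/media/movie.mkv
```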

It’s possible in EE, but must be done at the CLI.
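For reference, that route would look roughly like this at the CLI. A sketch only: “tank” and the /dev/disk/by-id names are placeholders for your actual pool and disks:

```
# Create a two-disk raidz1 (allowed at the CLI; the GUI won't offer it)
zpool create tank raidz1 /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2

# ...copy the data over, wipe the old 8TB drive, then widen the vdev.
# As of Electric Eel (OpenZFS 2.3), attaching a disk to a raidz vdev
# triggers raidz expansion:
zpool attach tank raidz1-0 /dev/disk/by-id/DISK3
```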

Not expanding the pool will give proper space reporting; that’s the benefit of the degraded-pool approach.


While it would be a pain to lose the data, there is nothing crucial on it. Getting the data onto redundant storage faster is not a real concern for me.

Working the CLI without a serious guide is beyond my skill level for the moment.
The best option will probably be to find/borrow/rent external drives and make a backup on them, create the new pool the proper way, and then restore the backup.

We do discourage the use of the Unix Shell to perform pool or dataset (re-)configurations, mostly because the GUI won’t recognize some of them without either a reboot or an export/import of the affected pool.

I personally discourage people from using ZFS pool or dataset commands from the Unix Shell when they don’t know much about ZFS. There are some safeguards built into the ZFS commands themselves, but the GUI adds many more.

While ZFS stands pretty much alone as the best file system, volume manager & RAID system for data integrity, it’s not trivial, especially for people with little or no Unix Shell experience.


All that said, I do highly recommend that all TrueNAS users, Core or SCALE, learn the ZFS command line. It is easy enough to start with non-destructive data-gathering commands & options, then learn more & more over time. The goal, in my opinion, is that if something goes wrong, the TrueNAS user will have some knowledge to fall back on.
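A few examples that are read-only and safe to run anywhere (“tank” being whatever your pool is called):

```
zpool status -v    # pool health, vdev layout, any read/write/checksum errors
zpool list         # raw capacity, fragmentation, and health per pool
zfs list -o name,used,avail,mountpoint    # space usage per dataset
zpool history tank # every pool-level command ever run against the pool
```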

One reason I say to learn ZFS is that lots of home and small office users don’t have iX support contracts, use the forum or Google for support, and generally don’t have ideal or even good server-grade hardware. So hopefully, by the time such a user needs help, they will have learned at least enough ZFS to describe the problem well.

We sort of need a step-by-step guide to learning ZFS as used with TrueNAS. But that is beyond my capability…

@dan’s Uncle Fester’s Basic TrueNAS wiki might be a good place to host this.

Maybe, but I tried to create an account and it complained about a mail relay. Trying again, it said the account already exists. Logging in says I have not verified my e-mail address, but I never received an e-mail (mail relay…).

So @dan would have to either let specific people “in”, or perform the work himself.

@dan can fix your account so you can do it. Or you can send me the stuff you want to see there and I can add it.

@dan took care of it.

This.

The space reporting will be stuck at the 50% parity ratio forever (until a possibly mythical ZFS feature is implemented): after a raidz expansion, capacity is still reported using the data-to-parity ratio the vdev had before it was widened.

I.e., each 8TB drive will only appear to add 4TB of space.

I’d suggest creating the degraded raidz1, then copying the data to that, then replacing the sparse file with the original drive.
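In sketch form, reusing the placeholder names from the earlier posts:

```
# Once the data is copied onto the degraded pool, wipe the original
# drive and swap it into the sparse file's slot; ZFS resilvers onto it
zpool replace tank /root/fake-disk.img /dev/disk/by-id/OLD_8TB_DISK

# Watch the resilver finish, then delete the now-unused placeholder file
zpool status tank
rm /root/fake-disk.img
```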