ZFS expansion-a-palooza

I’m running a 24-bay 4U server chassis with all slots full. I’ve been running UnRAID for years but recently moved to TrueNAS Scale on Proxmox (yep, I’m blacklisting and passing through the HBA only :slight_smile: )

Because all my slots are full and, being an ex-UnRAID user, I have a mix of 4TB, 8TB and 10TB drives (mostly enterprise SAS), I’m shuffling data around to progressively free up drives: adding them to new pools, mounting drives temporarily and copying data onto the pools. As I empty the drives I add them to the pool using single-VDEV expansion.

Luckily, across the 180TB of total drive capacity I had about 35TB free, which has given me some breathing room for the data shuffle.

I’ve been getting excited about the ZFS expansion performance improvements in OpenZFS 2.3 (and in Fangtooth). It’s a very slow process at Electric Eel speeds currently.

No questions here, but I thought I’d post my experiences as a TrueNAS noob doing lots of VDEV expansion. FYI @Captain_Morgan

I’m a little confused about the state of RAIDZ expansion performance improvements. I’ve seen this:

and this:

but I’m sure I’ve seen notes of further perf improvements, though for the life of me I can’t find them right now.

Feel free to chime in if you know what expansion performance improvements I can make to my ElectricEel-24.10.1 and what is coming down the pipe.

It’s a good idea to measure the speed vs the disk sizes, RAID-Z width and how full the vdev is.

We can then estimate what Fangtooth might do and you can verify when it’s available.
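
If you want actual numbers, a rough sketch (“tank” is just a placeholder pool name): sample the amount the expansion reports as copied in zpool status twice, and divide the difference by the elapsed time.

```
# Rough expansion-throughput check ("tank" is a placeholder pool name).
# zpool status reports how much data the expansion has copied so far;
# sample it twice and divide the difference by the elapsed time.
zpool status -v tank        # note the amount copied by the expansion
sleep 600                   # wait ten minutes
zpool status -v tank        # note it again; delta bytes / 600 s = copy rate
```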

If the data is critical… you should wait for release. If it’s a second copy, the BETA might work for you.

Luckily it’s all Linux ISOs :wink: so nothing I’d mind losing - all my important stuff is elsewhere.

Will wait for Beta though. Cheers!

If you have less experience with ZFS, you might not know that there is a suggested maximum width for RAID-Zx vDevs. It is about 12 disks, though depending on the number of slots, people have gone for more or less.

You’ve listed a chassis that has 24 disk bays, so I hope you were not using (or planning to use) all 24 bays in a single RAID-Zx vDev.

Yeah I’ve done my research - thanks for checking!

Of course, since I’ve come from UnRAID I’ve only ever had to give up 2 drives for parity (with 22 drives for data), so I’ve spent lots of time playing with ZFS calculators. Sounds scary, I know, but remember these drives are mostly for media archival purposes, and in 10 years of having this setup I’ve never lost any data (except to human error).

My drives:
3 x 4TB
17 x 8TB
6 x 10TB

Probably becomes:
1x RAIDZ1 @ 3 x 4TB
2x RAIDZ1 @ 8 x 8TB & 9 x 8TB
1x RAIDZ1 @ 6 x 10TB
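
Rough parity math on that layout, if it helps anyone sanity-check (raw TB, one drive per vdev lost to RAIDZ1 parity, before ZFS overhead and TB/TiB conversion):

3 x 4TB RAIDZ1 → ~8TB usable
8 x 8TB RAIDZ1 → ~56TB usable
9 x 8TB RAIDZ1 → ~64TB usable
6 x 10TB RAIDZ1 → ~50TB usable
Total → ~178TB usable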

Quick question: are ZFS pools available for write access during RAIDZ expansion? (I assume that if they are, writes will seriously impact the speed of the expansion.)

Also, if anyone knows the best place to look for this type of info in the future, please let me know.

Yes, ZFS pools are available, including for writes, during RAID-Zx expansion. And yes, there will be a performance impact; how much is very setup-specific, depending on drive model, connection method and CPU speed.

ZFS was designed to do as much as possible online. This is because in the bad old days of late last century (before 2000), terabyte file systems could take hours to perform file system checks at boot, or prior to mounting. Sun Microsystems was trying to solve that problem, plus others, and mostly succeeded.

Thanks @Arwen, and nice Sun shoutout - I was a systems engineer there from 1994 to 2001 :slight_smile:

Adding a 10TB SAS drive to an existing RAIDZ1 pool of 4x 10TB SAS drives via RAIDZ expansion:
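
For anyone curious about the mechanics, the expansion itself boils down to attaching the new disk to the raidz vdev. A sketch with placeholder names (pool “tank”, vdev “raidz1-0”, disk /dev/sdX); the TrueNAS UI’s extend-vdev action does the equivalent, as far as I can tell:

```
# Expand a RAIDZ vdev by one disk (placeholder names: pool "tank",
# existing vdev "raidz1-0", new disk /dev/sdX).
zpool attach tank raidz1-0 /dev/sdX

# Progress then shows up in the pool status alongside scrub/resilver info.
zpool status tank
```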

If you expand a pool, you should look up the raidz_expand_max_copy_bytes parameter. It roughly tripled my expansion speed (14 days for my first expansion, 4 days for the third). You can modify it even while the expansion is running.
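
On SCALE it’s a live ZFS module parameter (value is in bytes). A rough sketch, assuming the stock sysfs path - double-check on your own system:

```
# Read the current raidz expansion copy buffer (value is in bytes).
cat /sys/module/zfs/parameters/raidz_expand_max_copy_bytes

# Raise it at runtime, e.g. to 8 GiB (8 * 1024^3 bytes). This does not
# persist across reboots on its own.
echo 8589934592 > /sys/module/zfs/parameters/raidz_expand_max_copy_bytes
```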

With a good raidz_expand_max_copy_bytes :slight_smile:

Thanks @FredHardy, I’ve seen that in other threads - I increased mine during the last expansion (which just finished today) and didn’t see any improvement. The screenshot above was with max_copy_bytes at 1.6GB.

Am I missing something?

:wink: “Linux ISOs” :wink:

I think it depends on the size of your files (the smaller they are, the slower it goes). As I have 256GB of RAM, I set it to 25GB.

OK cool - waiting for my RAM upgrade to turn up, will give it a big boost hopefully.

Since it’s all media, the file sizes are pretty large. Will report back once I have more RAM installed.

allocated more RAM to TrueNAS and set raidz_expand_max_copy_bytes to 8GB

she’s flying!

Interestingly it looks like this 8GB is accounted for under Services not ZFS Cache on the dashboard:

[screenshot: TrueNAS dashboard memory breakdown]

Good!
TrueNAS is really memory hungry.
I was a bit upset that the default value for raidz_expand_max_copy_bytes made me wait almost 20 days more than needed for my pool expansion…

Yeah, that’s rough - I reckon balancing the needs of their enterprise customers (who have a different approach to VDEVs and pools) and their homelab customers is gonna require a bit of effort.

The neat thing about homelab customers - they often have jobs where they are recommenders (or buyers) of storage appliances at work :wink:

I expect the addition of RAIDZ single VDEV expansion is gonna see a huge influx of homelabbers - like me coming from UnRAID.

Same here - I’m in the process of migrating from Synology DSM to TrueNAS Scale. Without the prospect of raidz single-VDEV expansion I wouldn’t have considered the jump.
