I’m running a 24-bay 4U server chassis with all slots full. I’ve been running UnRAID for years but recently moved to TrueNAS Scale on Proxmox (yep, I’m blacklisting the HBA driver on the host and passing through the HBA only).
All my slots are full and, being an ex-UnRAID user, I have a mix of 4TB, 8TB and 10TB drives (mostly enterprise SAS). I’m shuffling data around to progressively free up drives: mounting drives temporarily, copying their data onto the new pools, and, as each drive empties, adding it to a pool using single-VDEV RAIDZ expansion (a rough sketch of the attach step is below).
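For anyone following along, this is roughly what the expansion step looks like from the command line. A minimal sketch assuming OpenZFS 2.3+ with the RAIDZ expansion feature enabled; the pool name, vdev name and device path are placeholders, not my actual layout:

```python
import subprocess

POOL = "tank"                                     # placeholder pool name
VDEV = "raidz1-0"                                 # existing RAIDZ vdev, as shown by `zpool status`
NEW_DISK = "/dev/disk/by-id/example-freed-drive"  # placeholder for the freshly emptied drive

# RAIDZ expansion attaches a single new disk to an existing RAIDZ vdev.
cmd = ["zpool", "attach", POOL, VDEV, NEW_DISK]
print("Would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)   # uncomment to actually kick off the expansion
```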
Luckily, across the 180TB of total drive capacity I had about 35TB free, which has given me some breathing room for the data shuffle.
I’ve been getting excited about the ZFS expansion performance improvements in 2.3 (and in Fangtooth); it’s a very slow process at Electric Eel speeds currently.
No questions here, but I thought I’d post my experiences as a TrueNAS noob doing lots of VDEV expansion. FYI @Captain_Morgan
If you have less experience with ZFS, you might not know that there is a suggested maximum width for RAID-Zx vDevs. It is about 12 disks, though depending on the number of slots, people have gone wider or narrower.
You’ve listed a chassis that has 24 disk bays, so I hope you were not using, (or planning to use), all 24 disk bays in a single RAID-Zx vDev.
Of course, coming from UnRAID I’ve only ever had to give up 2 drives for parity (with 22 drives for data), so I’ve spent lots of time playing with ZFS calculators. Sounds scary, I know, but remember these drives are mostly for media archival purposes, and in 10 years of running this setup I’ve never lost any data (except to human error).
My drives:
3 x 4TB
17 x 8TB
6 x 10TB
Probably becomes (rough usable capacities sketched below):
1x RAIDZ1 @ 3 x 4TB
2x RAIDZ1 @ 8 x 8TB & 9 x 8TB
1x RAIDZ1 @ 6 x 10TB
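Back-of-the-envelope usable capacity for that layout; this ignores TiB vs TB, metadata overhead and the usual keep-some-free-space guidance, so treat it as rough only:

```python
# Rough usable capacity per proposed vdev: RAIDZ1 loses one disk's worth of
# space to parity per vdev. Sizes are TB as marketed.
layouts = [
    ("RAIDZ1 3 x 4TB",  3, 4),
    ("RAIDZ1 8 x 8TB",  8, 8),
    ("RAIDZ1 9 x 8TB",  9, 8),
    ("RAIDZ1 6 x 10TB", 6, 10),
]

total_usable = 0
for name, disks, size_tb in layouts:
    usable = (disks - 1) * size_tb
    total_usable += usable
    print(f"{name}: ~{usable} TB usable")

print(f"Total: ~{total_usable} TB usable")
```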
Quick question: are ZFS pools available for write access during RAIDZ expansion? (I assume that if they are, writing to the pool will seriously impact the speed of the expansion.)
Also, if anyone knows the best place to look for this type of info in the future, please let me know.
Yes, ZFS pools are available, including for writes, during RAID-Zx expansion. And yes, there will be a performance impact; how much is very setup specific, depending on drive model, connection method and CPU speed.
ZFS was designed to do as much as possible online. This is because in the bad old days of late last century (before 2000), terabyte file systems could take hours to perform file system checks at boot, or prior to mounting. Sun Microsystems was trying to solve that problem, among others, and mostly succeeded.
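If you want to keep an eye on the expansion without sitting in the web UI, something like this works from the host. A sketch only: it assumes the zpool CLI is on PATH, "tank" is a placeholder pool name, and the exact wording of the progress line varies between OpenZFS versions:

```python
import subprocess
import time

POOL = "tank"  # placeholder pool name

while True:
    status = subprocess.run(
        ["zpool", "status", POOL],
        capture_output=True, text=True, check=True,
    ).stdout
    # Only show the lines that mention the expansion; the exact wording
    # differs between OpenZFS versions, so match loosely.
    for line in status.splitlines():
        if "expand" in line.lower():
            print(line.strip())
    time.sleep(600)  # re-check every 10 minutes
```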
If you expand a pool, it is worth searching for information about the raidz_expand_max_copy_bytes parameter. Increasing it roughly tripled my expansion speed (14 days for my first expansion, 4 days for the third). You can modify it even while an expansion is running, for example:
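A minimal sketch of how you could read and bump it on a Linux-based install, assuming the usual OpenZFS-on-Linux module parameter location under /sys/module/zfs/parameters/. Run as root; the multiplier is just an illustration, and values set this way do not survive a reboot:

```python
from pathlib import Path

# Typical OpenZFS-on-Linux location for module parameters; needs root.
param = Path("/sys/module/zfs/parameters/raidz_expand_max_copy_bytes")

current = int(param.read_text().strip())
print(f"current raidz_expand_max_copy_bytes: {current} bytes")

new_value = current * 4           # illustrative bump only; tune for your own hardware
param.write_text(str(new_value))  # takes effect immediately, does not persist across reboots
print(f"raidz_expand_max_copy_bytes set to {new_value}")
```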
Thanks @FredHardy, I’ve seen that in other threads. I increased mine during the last expansion (which just finished today) and didn’t see any improvement; the screenshot above was with max_copy_bytes at 1.6GB.
Good!
TrueNAS is really memory hungry.
I was a bit upset that the default value for raidz_expand_max_copy_bytes made me wait almost 20 days more than needed for my pool expansion…
Yeah, that’s rough. I reckon balancing the needs of their enterprise customers (who have a different approach to VDEVs and pools) and their homelab customers is gonna take a bit of effort.
The neat thing about homelab customers: they often have jobs where they’re the recommenders (or buyers) of storage appliances at work.
I expect the arrival of single-VDEV RAIDZ expansion is gonna bring a huge influx of homelabbers like me coming from UnRAID.
Same here: I’m in the process of migrating from Synology DSM to TrueNAS Scale. Without the option of future single-VDEV RAIDZ expansion, I wouldn’t have considered the jump.