Yes - the original RAIDZ expansion code had either a poor design or a bug (depending on your viewpoint) that caused slow expansion. This is what happens with beta software - iX reasonably decided that the beta version was stable enough to be used in production, but clearly the full ZFS 2.3 release in Fangtooth will have fewer bugs or bad “features”.
It was iX that submitted a PR to remove the bottleneck.
This was a result of iX adopting and shipping the code.
Well someone has to adopt it first, and when TrueNAS adopts it, then the TrueNAS Community tests it and gives feedback.
So a joint effort between iX and the community IMO.
I’d like to piggyback here - I’ve managed to add a drive to my Z1 vdev using this method. Since my drives are relatively small, it took ~26 hours to complete. After finishing, it also ran a scrub.
What I’m interested in is the capacity I’m left with. I started with 2x2TB and 1x4TB and added another 2TB drive. My Usable Capacity is now 4.71 TiB, where I thought it would be around 5.5-5.6 TiB.
Also, there was a rebalancing script I heard about - do I need to run this or am I OK to keep using my system as is?
There are two separate things here re. free space:

- Yes, you can use your system as is, but keep in mind that your old data still uses the old data-to-parity ratio. If you’d like to improve that, you would need to run a rebalancing script like this one. However, please keep in mind that this rewrites all data and parity blocks, so if you use snapshots, those will balloon in size.
- Apparently the free-space reporting that TrueNAS uses is messed up after a RAIDZ expansion. There are some CLI commands that will report the proper free space - see the commands below.
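For example (a quick sketch - substitute your own pool name for the placeholder `tank`):

```
# Raw pool capacity per vdev, including the newly expanded RAIDZ vdev:
zpool list -v tank

# Usable space as the dataset layer sees it:
zfs list -o name,used,avail -r tank
```

Comparing the two is a decent sanity check after an expansion, since `zpool list` counts raw space including parity while `zfs list` shows what datasets can actually use.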
Perfect, thank you. I don’t mind the parity at this moment, and will run the rebalancing once I’m mostly done with expansion.
As for the reporting, I shall find a script to fix that. Thanks!
Do mind that, as others mentioned, it copies and then deletes everything, so snapshots should be deleted first, and it’s best to have at least 30% free space. As for the space: RAIDZ1 capacity is limited by the smallest disk, so with 3x2TB and 1x4TB you get 3 data drives x 2 TB = 6 TB, i.e. about 5.4 TiB usable. Here is the script: GitHub - markusressel/zfs-inplace-rebalancing: Simple bash script to rebalance pool data between all mirrors when adding vdevs to a pool.
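For anyone curious what the script actually does: the core idea is just copy-then-replace, which forces ZFS to rewrite each file’s blocks with the pool’s new data-to-parity geometry. A stripped-down sketch of that idea (the real script adds checksum verification, attribute preservation and progress tracking, so use it rather than this; `/mnt/tank/data` is a placeholder path):

```bash
#!/usr/bin/env bash
# Simplified sketch of in-place rebalancing: rewrite every file so its
# blocks are reallocated under the expanded vdev's new stripe width.
set -euo pipefail

find /mnt/tank/data -type f -print0 | while IFS= read -r -d '' f; do
  cp -a -- "$f" "$f.balance"   # the copy writes new blocks at the new ratio
  mv -f -- "$f.balance" "$f"   # swap it in, freeing the old-ratio blocks
done
```

This is also why snapshots balloon: the old blocks stay referenced by the snapshot while the rewritten copies take up new space.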
Thanks for the link. As I read in the docs, the space will be reclaimed over time anyway - so I might skip the script for now, as I plan to add a disk or two soon and then run the script to ensure everything is proper.