Missing space after adding a drive to raidz1

I seem to be missing some available space after I expanded my RAIDZ1 vdev from 3×6 TB to 4×6 TB. My calculations say I should have roughly 17 TB of usable space after the expansion, but the TrueNAS GUI only reports 14.41 TB. zpool list -v reports the following:

    NAME                                      SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP    DEDUP  HEALTH
    Stash2                                    21.8T  10.3T  11.5T  -         0%    47%    1.00x  ONLINE
      raidz1-0                                21.8T  10.3T  11.5T  -         0%    47.3%  -      ONLINE
        1039e82e-a379-4d6b-a8ae-f94ae081672e  5.46T  -      -      -         -     -      -      ONLINE
        a72769e8-0333-4174-b4f6-a6e429159793  5.46T  -      -      -         -     -      -      ONLINE
        37a4afec-8ad6-44a2-a7af-c32c0f5f1503  5.46T  -      -      -         -     -      -      ONLINE
        08bc23b3-c4e8-49b4-9a20-54f40e4a9e60  5.46T  -      -      -         -     -      -      ONLINE

which leads me to believe that it might be the GUI giving me misleading information? At the same time, the allocated space doesn't add up the way it should. For comparison, my 3×4 TB pool has a total size of 10.9 TB and an Alloc of 7.8 TB, which, taking overhead into account, sounds about right, since its available space is 7.14 TB. Please help me make sense of this :smiley:
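For reference, here's the back-of-the-envelope math behind my ~17 TB expectation (a rough sketch that ignores ZFS metadata and slop overhead, and assumes the GUI's "TB" figure is actually TiB):

    # 4 x 6 TB drives = 24 TB raw, which lines up with the 21.8 TiB SIZE above
    # RAIDZ1 keeps (n-1)/n of raw for data: 3/4 * 24 TB = 18 TB usable
    # Converting 18 TB to TiB:
    echo "scale=2; 3 * 6 * 10^12 / 2^40" | bc    # ~16.37 TiB expected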

I did try a zfs rewrite after the expansion completed, to see if that was the problem, but it doesn't seem to have changed anything.

Has anyone more experienced got a clue about what's going on, and whether there is a fix to get it to show the right available space?

Regards

There’s a bug with space reporting after raidz expansion…

Thanks, I've seen some old posts about it, but they seemed related to expanding after swapping to bigger drives, not adding new ones to the pool. Seems I was wrong.

You wouldn’t happen to have a solution for said bug?

There currently is no solution.
For more information see


Aight, that sucks, only getting 14.41 TB instead of the ~17 TB I expected. Do you know if this only affects the reporting? And in that case, will it cause issues when my data grows beyond the reported space?

Or am I better off moving the data off and recreating the pool if I want to be able to use the full space?

It only affects reporting; the space is available.
The issue is that the old data was written with a different parity ratio than new data will be written with. You can somewhat mitigate it by rewriting the data, so the old records get rewritten at the newer parity ratio.
That can be done with the now built-in zfs rewrite (only available via the CLI) or with third-party rebalancing scripts for ZFS (also CLI-only).
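Roughly like this (a sketch, using the pool path from this thread; zfs rewrite needs a recent OpenZFS release, so check the zfs-rewrite man page on your version first):

    # Recursively rewrite all files under the pool's mountpoint so old
    # records get re-written at the post-expansion data:parity ratio:
    zfs rewrite -r /mnt/Stash2

    # Sanity check afterwards: zpool list shows raw (pre-parity) space and is
    # accurate; zfs list shows the usable space that is currently misreported.
    zpool list Stash2
    zfs list -o name,used,avail Stash2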

I did run a zfs rewrite -rv /mnt/Stash2 after the initial expansion process was done, but it didn't have any effect that I can see.

Guess I'll live with it for now and hope there will be a fix in the future. If not, I'll cross that bridge when I come to it :slight_smile:

Thanks for the info mate :ok_hand: