ZFS Pool Extend Weird Behavior

Okay, so I bought a new NAS. This is my first time using ZFS and TrueNAS, so excuse my ignorance. I bought three 8 TB hard drives to go along with one I already had. The one I already had needed to be emptied into the pool before I could add it, so I made the RAIDZ1 array with 3 disks and used the new “Extend” feature to add the fourth disk. The extend job was stuck at 25% (“waiting for expansion to start”) almost all night. I woke up this morning to check on it and it was still at 25% when I logged in; then I refreshed and the job disappeared. My capacity did extend, but I think some of my available space is missing. If I have RAIDZ1 with 4x 7.28 TB drives (so really 3 drives because of parity), that should be 21.84 TB. After the extension, I have 19.21 TB available. That is more than before, but just shy of 3 TB is missing. Did I do a calculation wrong, or did something go wrong with the extension?

It’s a known GUI issue, but reporting is correct when using the CLI (command line interface).

All of my connected devices also read the 19 TB number though. How can I check the capacity with the CLI?

sudo zpool list should tell you. :slight_smile:

You can find the CLI through SSH or at a web shell under the System menu.
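
If you’d rather script the check than eyeball the output, here is a rough Python sketch (run it with sudo if needed; the pool name tank is a placeholder for your own). It uses zpool list -Hp, which prints exact byte counts, and converts them to TiB:

```python
# A minimal sketch: query pool sizes via the zpool CLI and print them in TiB.
# "tank" is a placeholder pool name -- substitute your own.
import subprocess

TIB = 2**40  # bytes per tebibyte

out = subprocess.run(
    ["zpool", "list", "-Hp", "-o", "name,size,alloc,free", "tank"],
    capture_output=True, text=True, check=True,
).stdout.strip()

name, size, alloc, free = out.split("\t")
print(f"{name}: size={int(size)/TIB:.2f} TiB  "
      f"alloc={int(alloc)/TIB:.2f} TiB  free={int(free)/TIB:.2f} TiB")
```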

sudo zpool list

should show you the info

Ok, so I ran the command, and you are correct: the CLI shows the 21 TB number, but all of my connected devices show only 19 TB available through File Explorer/Finder. Will this ever change, or will I only be able to write the 19 TB?

You want to try to keep your pool below 80% capacity. Above 90-95% it can and will cause problems with ZFS; it needs free space to work the way it does. If it is block storage, like iSCSI, the recommendation is below 50%.
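
For example, with the roughly 21.8 TiB usable in this thread, the 80% guideline works out to keeping stored data under about 17.5 TiB (or under roughly 10.9 TiB if the pool were serving iSCSI).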

You have up to the CLI-reported numbers, and you can keep an eye on your pools that way. The GUI will still report and warn based on the other numbers, I think. Is your % used the same before and after your expansion?

The % went from like 31% to 27%. Something along those lines.

8 TB (TB = 10^12 bytes) is 7.28 TiB (TiB = 2^40 bytes). So, as you say, a 4x 8 TB RAIDZ1 should report about 21.8 TiB.
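
To make the arithmetic concrete, here is a quick Python sketch using the figures from this thread (the drive size is the only input; everything else is derived):

```python
# Decimal (TB = 10^12 bytes) vs. binary (TiB = 2^40 bytes) units.
TB, TIB = 10**12, 2**40

per_drive_tib = 8 * TB / TIB     # an "8 TB" drive expressed in TiB
usable_tib = 3 * per_drive_tib   # 4-wide RAIDZ1 = 3 data drives' worth

print(f"{per_drive_tib:.2f} TiB per drive")  # 7.28 TiB per drive
print(f"{usable_tib:.2f} TiB usable")        # 21.83 TiB usable
```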

The “available” statistic is normally the free space that is STILL available to be used. Assuming that your original 4 TB drive was 75% full, that would account for the “missing” 3 TiB.

31% of 14.54 TiB is c. 4.5 TiB.
27% of 21.8 TiB is c. 5.89 TiB.

Blocks written on a 3x RAIDZ1 (2 data + 1 parity) carry 50% parity overhead, versus 33% for new blocks written on a 4x RAIDZ1 (3 data + 1 parity); small blocks on a 3x RAIDZ1 may actually be stored as only (1 data + 1 parity). The space used is reported as usable space by multiplying the actual space consumed, including parity, by the current data fraction: 2/3 for a 3x RAIDZ1 and 3/4 for a 4x. So when you add an extra drive, the space used by blocks written while the pool was a 3x may be reported as if they were written on a 4x, and so appear bigger by 1/12 of their raw size (a factor of 9/8 on the usable figure).
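
Here is a tiny numerical sketch of that accounting argument (illustrative only; this is not OpenZFS’s actual code):

```python
# Data D written while the pool was a 3-wide RAIDZ1 (2 data + 1 parity).
D = 1.0
raw = D * 3 / 2                   # raw space consumed, including parity

# Usable space is derived by scaling raw usage by the data fraction
# of the pool's *current* width.
reported_3wide = raw * 2 / 3      # 1.0   -> correct
reported_4wide = raw * 3 / 4      # 1.125 -> 9/8 of D after expansion

# The inflation is 1/8 of D, which equals 1/12 of the raw space consumed.
print(reported_3wide, reported_4wide, reported_4wide - reported_3wide)
```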

This makes total sense in explaining the number in TrueNAS. However, I’m still having a hard time understanding why my file explorer shows 19.8 TB (18.008 TiB) available. It would make sense if Windows said 19.8 TiB, which would be the 21 TB number. Could you possibly elaborate on that?

It requires an upstream fix in OpenZFS space accounting, so once that’s created, tested, and merged, it can make its way into TrueNAS. We don’t have a timeline on that unfortunately.

The actual free space as reported by zpool list is correct - you’ll be able to write to the full ~21 TiB the pool now has (well, not ALL of it; leave some space for overhead).

Ok, so I sort of answered my own question just now. I went into the drive properties in Windows and looked at the Bytes number. Windows is showing 19.1 TB when it should say TiB: the 19.1 is in fact the TiB number, because the byte count in the properties works out to 21 TB. I’m not sure why Windows reads it as TB instead of TiB, but it’s probably Microsoft being stupid. I hope that makes sense.

Decimal vs. binary could be a factor, but there is certainly also a difference in space accounting with RAIDZ expansion that we’re still working to improve.

Note that Windows also doesn’t understand the transparent compression used on TrueNAS, so if you write 1 TB of very compressible data to your system, it might still show more than 18 TB of “free space” remaining.

Firstly, TrueNAS is good at being precise about TiB vs. TB, which is not the case with Windows, which uses TB when it should use TiB. You would have to look at the SMB spec to see how free space is reported over the network, but I suspect that this is simply sloppy nomenclature on Microsoft’s part rather than a unit-of-measure conversion error (i.e. rather than getting a number over SMB, thinking it is in TB, converting it to TiB, and then displaying it as TB, I suspect they get it in bytes and convert it to TiB but display it as TB).

It may well be the case that ZFS is reporting available space on an expanded pool incorrectly, but see also NAS-132559, where I suggest an alternative and possibly better way to calculate available free space than relying on the ZFS dataset stats.