Hello! I’ve been running a 4-wide RAIDZ2 vdev (4 x 8 TB). As expected, my storage dashboard reported approximately 14 TiB of usable capacity.
A few days ago, with Electric Eel now available, I decided to pick up another matching drive whilst it was on sale and went through with the process of extending my vdev to 5 wide (5 x 8 TB), which seemingly went smoothly and without a hitch. However, post-expansion I’m seeing my usable capacity at just 17.49 TiB, rather than the ~20–21 TiB I had expected. I figured this might be related to the old parity data, so I went ahead and did an in-place rebalance by manually rewriting the data: using rsync -a to make a new copy of each file and then deleting the old copies after manually verifying the new ones.
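For reference, the rebalance went roughly along these lines; the paths, dataset names, and the diff -r verification step below are placeholders standing in for what I actually ran:

rsync -a /mnt/tank/data/ /mnt/tank/data_rebal/   # full copy, so everything is rewritten under the new 5-wide layout
diff -r /mnt/tank/data/ /mnt/tank/data_rebal/    # manual verification pass before touching the originals
rm -rf /mnt/tank/data/                           # old copies removed only once the new ones checked out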
However, doing so did not change the reported capacity; all it did was, weirdly, reduce the amount of used capacity to well below the total size of the actual files. A dataset containing 200 GB worth of files, for example, now lists only approx. 162 GB of used capacity in the UI. The same applies when reviewing the shares in Windows: checking the properties of any file or folder shows the correct file size under “Size” but a far lower figure (consistently around 81% of it) under “Size on disk”. The same thing shows up in the CLI: running du -hs on any given directory reports the smaller size. The behavior is almost as if ~20% compression were being applied to every new file I add to the pool (including anything added after the rebalance), yet zfs get compressratio reports a ratio of 1.00x (which is expected, as most of this data is already compressed or otherwise largely incompressible).
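To put concrete commands to the above, these are the kinds of checks I’m running (dataset name is a placeholder; --apparent-size is the GNU du flag available on SCALE):

du -hs /mnt/tank/data                             # reports the smaller, “size on disk”-style figure
du -hs --apparent-size /mnt/tank/data             # logical file sizes, matching the “Size” column in Windows
zfs get compressratio,logicalused,used tank/data  # compressratio comes back as 1.00x; the other two compare logical vs allocated accounting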
Thankfully, this appears to be purely visual, but I’m wondering if this is maybe a bug or if I’m perhaps just missing something?