Lost with ZFS, usable and free space etc

Hello,

To start with, I googled a lot before posting; I found all sorts of things, but nothing 100% clear.

I set up a TrueNAS 25.04 system last week, initially with 4 x 8 TB in RAIDZ1. Since then I’ve added 2 x 8 TB disks; each time the expansion and the scrub that followed went fine, so I now have 6 x 8 TB. TrueNAS reports each of them as 7.24 TiB, OK.

NAME STATE READ WRITE CKSUM
RAID ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
4eef6dee-2dd0-4ab9-a4c8-a9790ebd5796 ONLINE 0 0 0
7429b2d7-00db-4e39-a735-589c4978b432 ONLINE 0 0 0
28f9cf8e-bec5-4b8f-ad15-84745dafdfc9 ONLINE 0 0 0
bd16b780-edad-45d1-9245-4b15251b3872 ONLINE 0 0 0
cc13d711-b122-444d-b994-83d3bb279305 ONLINE 0 0 0
2c3402cf-79c1-4040-aed5-fe1bac03c7e5 ONLINE 0 0 0

I have one RAIDZ1 pool called RAID and several SMB shares/datasets on it. I mirrored what I had on my previous NAS (QNAP): one RAID-5 array with several shares on it.

Now if I do a zpool list it seems fine; OK, these are raw capacities:

NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
RAID 43.6T 22.7T 20.9T - - 0% 52% 1.00x ONLINE /mnt

But then things start to degrade… The UI tells me that the usable capacity is 31.58 TiB. I expected 5 x 7.24, i.e. around 36 TiB. Why such a difference? I have no snapshots, no dedup, nothing, just plain datasets. I have just one app installed (Plex), which probably doesn’t use much space there.

And then, looking at the SMB shares, it becomes a bit weird… At least SMB as seen from a Windows client mirrors what df -h shows, which is, for the shares:

Filesystem Size Used Avail Use% Mounted on
RAID 16T 256K 16T 1% /mnt/RAID
RAID/1 16T 412G 16T 3% /mnt/RAID/1
RAID/2 16T 278G 16T 2% /mnt/RAID/2
RAID/3 26T 11T 16T 41% /mnt/RAID/3
RAID/4 17T 1.8T 16T 11% /mnt/RAID/4
RAID/5 16T 242G 16T 2% /mnt/RAID/5
RAID/6 16T 466G 16T 3% /mnt/RAID/6
RAID/7 18T 2.6T 16T 15% /mnt/RAID/7
RAID/Backup 16T 634G 16T 4% /mnt/RAID/Backup

The filesystem sizes seem to be equal to “available + used” and evolve with usage. This is quite unusual, maybe specific to ZFS?! I’ve never seen this on classic Linux with ext3 or ext4. Anyway, there’s probably no choice there, even if it is a bit confusing.

But where did the roughly 5 TiB of usable space go?

Thanks.

Looks like you found good information then! :laughing:

Space reporting is borked after raidz expansion because the old data:parity ratio is still used.
Good news: Space is there. Bad news: The only definitive fix is “backup-destroy-restore”.
You’ve already found that zfs and zpool give different indications…
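Back-of-the-envelope, assuming the UI figure is still derived from the original 4-wide ratio (a rough estimate on my part, not something ZFS prints as such):

4-wide raidz1: usable ≈ 43.6 TiB x 3/4 ≈ 32.7 TiB
6-wide raidz1: usable ≈ 43.6 TiB x 5/6 ≈ 36.3 TiB

Subtract the small reservation ZFS keeps for itself from the first figure and you land roughly at the 31.58 TiB the UI shows; the second figure is the ~36 TiB you were expecting. The gap is a reporting artifact, not lost blocks.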

ZFS reports what’s actually used on disk, taking compression into account.
df -h on the client always sees the decompressed data.
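If you want to see how much compression is changing the numbers, the dataset properties should show it (logicalused is the size before compression, used is what actually lands on disk):

zfs get -r used,logicalused,compressratio RAID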

No dedup is best for you. But you really should leverage periodic snapshots, for some safety against ransomware and user errors, and then replication, backups, 3-2-1, etc.
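Snapshots cost almost nothing until the data they reference changes. In TrueNAS you would normally set this up as a periodic snapshot task in the UI, but as a minimal manual example (the snapshot name is just a placeholder):

zfs snapshot -r RAID@before-cleanup
zfs list -t snapshot -r RAID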

Please use formatted text </> when pasting terminal output, for the sake of readability.
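And to sketch the backup-destroy-restore route mentioned above, assuming a hypothetical second pool named backup with enough room (pool and snapshot names are placeholders, double-check everything before doing anything destructive):

zfs snapshot -r RAID@migrate
zfs send -R RAID@migrate | zfs recv -F backup/RAID
(verify the copy, destroy and recreate the RAID pool at its new 6-wide width, then)
zfs send -R backup/RAID@migrate | zfs recv -F RAID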


Thanks. The df -h was done on the NAS itself.
Backup/destroy/restore, ugh, I’d need to buy additional USB HDDs and it would take another two weeks :frowning: So this will never be fixed?