Storage Dashboard - Usage @ 97%!

Total newb to TrueNAS and ZFS, so please be gentle.
My storage dashboard is showing 93.7% usage with only 3.39 TB left. I have two iSCSI datasets. I checked them both (connected to a Windows server) and they show maybe 10 TB used in total out of a 54 TB pool.

I don’t understand how or where it is getting the 93.7% usage warning.

I ran this command:
```text
root@truenas[/home/admin]# zfs list -o space
NAME                                                     AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
boot-pool                                                 881G  2.45G        0B     96K             0B      2.45G
boot-pool/ROOT                                            881G  2.43G        0B     96K             0B      2.43G
boot-pool/ROOT/24.04.2                                    881G  2.43G        8K    164M             0B      2.27G
boot-pool/ROOT/24.04.2/audit                              881G   172K        0B    172K             0B         0B
boot-pool/ROOT/24.04.2/conf                               881G   140K        0B    140K             0B         0B
boot-pool/ROOT/24.04.2/data                               881G   280K        0B    280K             0B         0B
boot-pool/ROOT/24.04.2/etc                                881G  6.75M     1.06M   5.68M             0B         0B
boot-pool/ROOT/24.04.2/home                               881G   128K        0B    128K             0B         0B
boot-pool/ROOT/24.04.2/mnt                                881G   104K        0B    104K             0B         0B
boot-pool/ROOT/24.04.2/opt                                881G  74.1M        0B   74.1M             0B         0B
boot-pool/ROOT/24.04.2/root                               881G   172K        0B    172K             0B         0B
boot-pool/ROOT/24.04.2/usr                                881G  2.12G        0B   2.12G             0B         0B
boot-pool/ROOT/24.04.2/var                                881G  66.1M      888K   33.1M             0B      32.2M
boot-pool/ROOT/24.04.2/var/ca-certificates                881G    96K        0B     96K             0B         0B
boot-pool/ROOT/24.04.2/var/log                            881G  32.1M        0B   32.1M             0B         0B
boot-pool/grub                                            881G  8.20M        0B   8.20M             0B         0B
sxPool                                                   3.39T  50.8T        0B     96K             0B      50.8T
sxPool/.system                                           3.39T  2.03G        0B   1.23G             0B       817M
sxPool/.system/configs-ae32c386e13840b2bf9c0083275e7941  3.39T  2.55M        0B   2.55M             0B         0B
sxPool/.system/cores                                     1024M   120K        0B    120K             0B         0B
sxPool/.system/netdata-ae32c386e13840b2bf9c0083275e7941  3.39T   814M        0B    814M             0B         0B
sxPool/.system/samba4                                    3.39T   220K        0B    220K             0B         0B
sxPool/sxData                                            3.39T    96K        0B     96K             0B         0B
sxPool/sxSQL                                             13.5T  10.2T        0B   16.9M          10.2T         0B
sxPool/sxtruenas                                         37.1T  40.6T        0B   6.90T          33.7T         0B
```

The first thing I would check in your situation is snapshot space usage.
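Something along these lines should list every snapshot in the pool, sorted by how much space each one is holding onto (substitute your own pool name):

```text
zfs list -t snapshot -r -o name,used,referenced -s used sxPool
```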

I don’t think I have snapshots, as I don’t even know how to create them yet in TrueNAS.

The dataset/zvol sxPool/sxtruenas appears to be the culprit. Notice the value under USEDREFRESERV.
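If it helps, you can query the relevant properties directly. A quick check, using the zvol name from your output, would be something like:

```text
zfs get volsize,used,referenced,refreservation,usedbyrefreservation sxPool/sxtruenas
```

If refreservation is roughly equal to volsize, the zvol was most likely created thick (non-sparse).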

Yes… I saw that, but I don’t understand what it’s trying to tell me.
When I mount the iSCSI volume from my Windows machine it shows about 7 TB used and lots of free disk space.

So I’m kind of lost as to what is chewing up all the other space.

You set a large amount of “reserved space” at one point in time.

You can check/change this property of the zvol/dataset from the “Storage” or “Pools” page. In Core it’s under the “Pools” page → edit the zvol/dataset → “Reserve space for this zvol/dataset”. It should be in a similar place in SCALE’s GUI.
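If you’d rather use the CLI, I believe the equivalent is something along these lines (untested on your system; dropping the refreservation frees the reserved space but effectively makes the zvol thin-provisioned, so read up on the trade-offs first):

```text
# Check the current reservation before touching anything
zfs get refreservation sxPool/sxtruenas

# Remove it; the zvol then only consumes what is actually written
zfs set refreservation=none sxPool/sxtruenas
```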

EDIT: For a zvol it might be different. I believe you choose whether to create a “sparse volume” at creation time. With a dataset, the “reserved space” property can be changed at any time.

EDIT 2: I’m not sure how feasible (or “safe”) it is to shrink a non-sparse zvol. If you created a massive zvol without marking it “sparse” at creation time, shrinking it might still be possible, so long as it is not currently in use and you don’t shrink it below its currently used capacity?

I have no experience with zvols, sorry. I can’t really say how safe such an action is.

EDIT 3: I would stop/disconnect any clients currently using the NAS and create a checkpoint before committing to any risky actions, if you decide to do something drastic to reclaim some space.
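Assuming a reasonably recent OpenZFS (SCALE 24.04 should qualify), taking a checkpoint looks like this:

```text
# Take a checkpoint of the whole pool before making changes
zpool checkpoint sxPool

# If everything worked out, discard it so it stops pinning old blocks
zpool checkpoint -d sxPool
```

To roll back instead, you would export the pool and re-import it with zpool import --rewind-to-checkpoint.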

ZFS is copy-on-write: a non-sparse zvol reserves its full size up front so that overwrites of existing blocks can never fail for lack of space. Your volume may only hold about 7 TB of data right now, but ZFS is keeping the other 33.7 TB set aside for it, which is exactly the USEDREFRESERV value in your output.

I have a feeling a “very large” zvol was created, without checking the “Sparse” checkbox in the GUI.
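For reference, on the CLI the difference is a single flag at creation time (the zvol names below are made up for illustration):

```text
# Thick zvol: the full size is reserved immediately (refreservation = volsize)
zfs create -V 40T sxPool/thickvol

# Sparse zvol: -s skips the reservation; space is only consumed as data is written
zfs create -s -V 40T sxPool/sparsevol
```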