Full server - cannot get space back

My TrueNAS server filled up without me noticing.

I have a 20 TiB iSCSI drive that’s taking up 48 TiB total with snapshots.
So I went to clear those snapshots, since it’s backed up a different way anyway, and got the “Channel number out of range” error.
So I went to delete an old SMB share that hasn’t been used in months and ran into the same error.
So I went to delete the files through the CLI:

  • cd /mnt/zpool0/plex-media
  • rm -rf * -v
  • confirmed that this folder is now empty

But the space has not been reclaimed; my TrueNAS is still full, and I’m still unable to do anything. The data written on this old SMB share has dropped to 192 KiB, but the used space has stayed at 10.66 TiB.
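
From what I’ve read, the snapshots keep referencing the deleted blocks, so a space breakdown along these lines should show whether that’s where it’s all sitting (dataset path taken from above):

sudo zfs list -o space -r zpool0/plex-media

The USEDSNAP column is the part that only comes back once the snapshots themselves are destroyed.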

Does anyone have any ideas? I have a 12x 12 TB disk array that’s 2x 6-wide RAIDz2, so replacing the disks is not a very viable option at the moment.

How much space should have been cleared from deleting said files?

It can take a little while for the system to clean itself out and show the change. You could do a reboot and see what it comes back with.

Is plex-media a dataset? If so, can you destroy it? From the GUI would be better, but the CLI would work if the GUI doesn’t: zfs destroy zpool0/plex-media.
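
If you’d rather see what’s in there before destroying anything, something like this should list the snapshots and how much space each one holds (dataset name assumed from your post):

sudo zfs list -t snapshot -r -o name,used,refer -s creation zpool0/plex-media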

It should have cleared out 4 TiB. These were the original numbers:
[screenshot of the original usage numbers]

Which is what it’s still sitting at. I will give a reboot a try.

I did try to delete it from the GUI again after clearing out the folder; I get the same error about destroying snapshots: “Channel number out of range”.

Sorry for the delay. I did a reboot, but now it seems I’m having a time issue. I can’t get back into the GUI: “Invalid or expired OTP. Please try again.”

Time was off by 10 seconds. For the most part I have that fixed, but I’m still getting the same error. Not sure what to do from here.

Ended up just resetting the password from iDRAC so I could get back in. Storage is still at 100%, and the plex-media dataset is still at 11 TiB allocated.
[screenshot of pool and dataset usage]

If you are trying to delete ALL of the snapshots but leave the data itself alone, the command you’re looking for is

sudo zfs destroy -r zpool0/plex-media@auto-*

After which it may take some time for the space to become available again. You can do

sudo zpool get freeing zpool0

and it will tell you how much space it still has left to free up.
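
If you’d like a preview first, zfs destroy has a dry-run mode; something along these lines (substituting one of your actual auto snapshot names) lists what would be removed without destroying anything:

sudo zfs destroy -rnv zpool0/plex-media@auto-SNAPSHOT-NAME

Here -n is the dry run and -v lists the snapshots it would destroy. Once the real destroy is running, repeating the zpool get freeing command shows the space being reclaimed over time.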


I gave this a try but ran into the same error I get when I try to delete it from the GUI. You’ll see I ran it as root; that was only attempted after sudo zfs didn’t work.


[screenshot of the failed zfs destroy attempt]

I just want to say I appreciate any help anyone gives. I know some basic stuff but I’ve never had to go this deep before so it’s unfamiliar territory.

It looks like it core dumped, which isn’t a good sign. Can you try deleting them one at a time?

I ran it one more time and got a slightly different result:

One thing I noticed while looking around (I’m not sure if I’m just interpreting this wrong, but it feels incorrect to me):

admin@valk-truenas02[~]$ sudo zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool   102G  5.08G  96.9G        -         -     3%     4%  1.00x    ONLINE  -
zpool0      131T  80.3T  50.7T        -         -     1%    61%  1.00x    ONLINE  /mnt

That, compared to what the GUI shows.
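
Is it maybe that zpool list counts raw capacity including the RAIDZ2 parity, while the GUI shows usable space? For comparison, I assume something like this would be the usable-space view:

sudo zfs list -o name,used,avail,refer zpool0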

You just attempted to delete the snapshots and the original dataset here, so it’s probably good that it failed.

Can you try and delete them one at a time?

 zfs destroy -r zpool0/plex-media@auto-2024-08-23_00-00

This looks like the most recent one.

This was the result from that:
admin@valk-truenas02[~]$ sudo zfs destroy -r zpool0/plex-media@auto-2024-08-23_00-00
internal error: cannot destroy snapshots: Channel number out of range
zsh: IOT instruction sudo zfs destroy -r zpool0/plex-media@auto-2024-08-23_00-0

Can you show me

zpool status zpool0

Absolutely:

admin@valk-truenas02[~]$ sudo zpool status zpool0
[sudo] password for admin: 
  pool: zpool0
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 11:23:00 with 0 errors on Mon Jun  3 11:23:02 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        zpool0                                    ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            f356a61e-00af-48b8-921c-1303dbd6621a  ONLINE       0     0     0
            a4e2b9d4-cadc-43f4-8059-f10524d5f0e7  ONLINE       0     0     0
            852a402a-a314-4e2f-8866-ee363d7e75d8  ONLINE       0     0     0
            5570c12b-a337-481e-852b-d691d0c0f550  ONLINE       0     0     0
            d6d0bfcd-6259-45e4-9f95-8928f3d3f566  ONLINE       0     0     0
            51c79940-6f13-4346-bf57-c35a70d8a617  ONLINE       0     0     0
          raidz2-1                                ONLINE       0     0     0
            85f026e3-7a82-40a3-9198-a413999a2bd7  ONLINE       0     0     0
            6934841d-56f7-42a7-bfce-707454ca1837  ONLINE       0     0     0
            7a1a5137-1098-4fa4-a7b9-5c12e2356c46  ONLINE       0     0     0
            e700d3cf-c6f2-4024-8b3a-87bf18407f70  ONLINE       0     0     0
            0e62b071-bba2-4e1d-88da-4a1372e9ebd7  ONLINE       0     0     0
            42ad5b84-01d7-4062-92fa-390793cc5b49  ONLINE       0     0     0

errors: No known data errors

Hmm…

How about this command:

less /var/log/messages | grep -i -e zfs -e kernel

You can PM me the output if you want. It will be long.
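
If /var/log/messages has already rotated, the kernel messages should also be available through journald; something like this would be an alternative way to pull the ZFS-related lines:

sudo journalctl -k --no-pager | grep -i zfs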

Sent you a message with the output

edit: it was too long to message lol. I have it here: output from less /var/log/messages | grep -i -e zfs -e kernel - JustPaste.it

Hmm. I’m not sure; I don’t see anything obvious that stands out. Can you open a bug report?

I’ve created one here in case you were interested in watching it - [NAS-130865] - iXsystems TrueNAS Jira

From your perspective, do you think my only current option would be to replace all the drives with larger ones?