Dataset out of space

Let's preface this with: I taught myself enough to set this up and then forgot everything while it worked for years. I've now hit a problem, and either what I want can't be done or I'm not searching for the right things when googling it.

This is a TrueNAS CORE setup running the latest available version. I'm going to be moving to SCALE at some point soon, but at present I still use OpenVPN. I know that's deprecated now, so I need to look at options.

The problem I have is that my main pool and dataset are full. When I first created the pools, my idea was to have a main pool and an archive. Over the years, the archive fell out of use, so it holds hardly anything yet has 6TB allocated to it. My main pool is full.

I'm trying to figure out how to re-allocate some of the 6TB sitting in my other pool, but I can't work out how, or even if it's possible. If not, my main pool also contains the iocage dataset, which has 800GB free. If I could allocate some of that dataset's space within the same pool, that should alleviate my problems for a little while.

If there is some guide out there that I’m just not finding, I apologise.

Post the output of zpool list -v.

root@HJC1[~]# zpool list -v
NAME                                            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
archive                                        9.06T  69.4M  9.06T        -         -     0%     0%  1.00x  ONLINE  /mnt
  mirror                                       9.06T  69.4M  9.06T        -         -     0%     0%
    gptid/4464f458-6038-11e9-a09f-5480284fde0c     -      -      -        -         -      -      -
    gptid/47de1afb-6038-11e9-a09f-5480284fde0c     -      -      -        -         -      -      -
freenas-boot                                    118G   767M   117G        -         -      -     0%  1.00x  ONLINE  -
  ada4p2                                        118G   767M   117G        -         -      -     0%
top                                            1.81T  1.01T   822G        -         -    21%    55%  1.00x  ONLINE  /mnt
  mirror                                       1.81T  1.01T   822G        -         -    21%    55%
    gptid/cab1b7bb-5f8d-11e9-bfb3-5480284fde0c     -      -      -        -         -      -      -
    gptid/cdcef639-5f8d-11e9-bfb3-5480284fde0c     -      -      -        -         -      -      -

Thanks.

Your main pool (top) doesn’t seem to be full. You still have 822G free.

Also, is your main pool SSD or HDD?
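
It would also be useful to see how the datasets themselves divide that space up, for example with (top being your main pool, as in the output above):

# per-dataset view of available space vs. space used by snapshots, children and reservations
zfs list -o space -r top

That would show whether the 822G free is actually usable by your main dataset or pinned somewhere else.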

The main pool is where the main dataset is, but it also holds the iocage dataset. The iocage dataset has the 800G left; the main dataset has about 100MB.

All drives are HDD. I don’t need speed for my purposes. I don’t want to delete the iocage dataset but if I could grab 700G from its allocation, that would work for now.

I don't have any experience with CORE or FreeBSD, so I can't say anything about iocage specifically.
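
That said, if the 800GB is only being held back by a ZFS quota or reservation rather than by actual data, it might be adjustable without touching the jails at all. A rough sketch, guessing at the dataset names (top/main and top/iocage are placeholders, and I can't tell from the zpool output whether any quota or reservation is actually set):

# see whether a quota or reservation is carving up the pool's free space
zfs get quota,refquota,reservation,refreservation -r top

# if the main dataset is capped by a quota, raise or clear it
zfs set quota=none top/main

# or, if space is being reserved for the iocage dataset, shrink or clear that reservation
zfs set refreservation=none top/iocage

If none of those properties turn out to be set, the space split comes from somewhere else and the approach below is more relevant.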

One approach I could suggest is:

  1. Back up the archive pool data (70MB).
  2. Destroy the archive pool.
  3. Add a new mirror VDEV made from the (now free) 10TB drives to the main pool.

A 2TB mirror and a 10TB mirror combined in one pool should work OK, but it's a mismatched setup you probably don't want to keep long term. You could then remove the 2TB mirror VDEV (from the GUI), and your main pool would end up as a single 10TB mirror.
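
At the command line, the whole sequence would look roughly like this; the backup target and device names are placeholders, and in TrueNAS the add and remove steps are normally done through the GUI, which also takes care of partitioning:

# 1. back up the ~70MB still on archive (target dataset name is just an example)
zfs snapshot -r archive@final
zfs send -R archive@final | zfs receive top/archive-backup

# 2. destroy the archive pool -- this erases it, so verify the backup first
zpool destroy archive

# 3. add the two freed ~10TB disks to the main pool as a second mirror vdev
zpool add top mirror da0 da1      # da0/da1 stand in for the real devices

# 4. remove the old 2TB mirror vdev; ZFS migrates its data onto the new vdev
zpool remove top mirror-0         # vdev name as shown by zpool status top

Note that top-level vdev removal needs a reasonably recent OpenZFS and, as far as I know, requires the vdevs to have matching sector-size (ashift) settings, so check that before relying on step 4.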

:warning: DISCLAIMER: I have never done this myself; you should at least get other opinions before committing to this approach.

I did something similar on a test system several years ago. I had two pools (pool1, pool2), destroyed pool2, added the now-free drives to pool1, then replaced the smaller drives of pool1 with larger ones after everything had synced up. The pools were not mirrors (they were raidz1), but I don't see any obvious reason why what @swc-phil proposed would not work. Just make sure you have backed up your data and saved the config.
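
For what it's worth, the drive-swap part was just the usual replace-and-grow sequence, roughly like this (disk names are placeholders; the GUI "Replace" action does the same thing):

# let the pool grow automatically once every disk in the vdev is larger
zpool set autoexpand=on pool1

# swap one small disk for a larger one, wait for the resilver to finish,
# then repeat for each remaining small disk in the vdev
zpool replace pool1 ada1 ada5     # old disk, new disk (placeholder names)
zpool status pool1                # watch the resilver progress

# if autoexpand was off during the swaps, expand the replaced disks manually
zpool online -e pool1 ada5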