Pool is missing from GUI but visible in shell

I have had a torrent of HDD trouble lately, ever since a power failure and a UPS failure hit at the same time. I have everything MOSTLY back online, with a couple of drives still resilvering, but…

  • I can’t see my main pool in the GUI “storage dashboard”.
  • No pools are available to import.
  • One unused disk is waiting to replace my last faulted drive.
  • No exported disks are shown.

System hardware is in my signature; output of zpool list -v and zpool status -v is below.

colin2000@freenas:~$ sudo zpool list -v
[sudo] password for colin2000: 
NAME                                                          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                                     448G  14.4G   434G        -       16G     0%     3%  1.00x    ONLINE  -
  nvme-Samsung_SSD_970_EVO_Plus_500GB_S58SNM0T906063B-part2   450G  14.4G   434G        -       16G     0%  3.20%      -    ONLINE
earth                                                        43.6T  5.58T  38.1T        -         -     4%    12%  1.00x    ONLINE  /mnt
  raidz2-0                                                   43.6T  5.58T  38.1T        -         -     4%  12.8%      -    ONLINE
    sdr2                                                     7.28T      -      -        -         -      -      -      -    ONLINE
    sdp2                                                     7.28T      -      -        -         -      -      -      -    ONLINE
    sde2                                                     7.28T      -      -        -         -      -      -      -    ONLINE
    sdf2                                                     7.28T      -      -        -         -      -      -      -    ONLINE
    sdl2                                                     7.28T      -      -        -         -      -      -      -    ONLINE
    sdd2                                                     7.28T      -      -        -         -      -      -      -    ONLINE
kobol                                                         140T  64.6T  75.4T        -         -     6%    46%  1.00x  DEGRADED  -
  raidz3-0                                                    140T  64.6T  75.4T        -         -     6%  46.1%      -  DEGRADED
    sdg2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    sda2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    sdc2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    3994024730740366974                                      12.7T      -      -        -         -      -      -      -   FAULTED
    sdq2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    replacing-5                                                  -      -      -        -         -      -      -      -  DEGRADED
      sds2                                                       -      -      -        -         -      -      -      -   OFFLINE
      ee3c5456-d81b-4494-80d9-985c164207a7                   12.7T      -      -        -         -      -      -      -    ONLINE
    replacing-6                                                  -      -      -        -         -      -      -      -  DEGRADED
      16083278097396954957                                   7.28T      -      -        -         -      -      -      -   UNAVAIL
      79557534-d3e8-474f-8748-e0a8ec5a2606                   12.7T      -      -        -         -      -      -      -    ONLINE
    sdh2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    sdj2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    sdi2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
    sdn2                                                     12.7T      -      -        -         -      -      -      -    ONLINE
colin2000@freenas:~$ sudo zpool status -v
  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:07 with 0 errors on Sat Jun 21 03:45:09 2025
config:

        NAME                                                         STATE     READ WRITE CKSUM
        boot-pool                                                    ONLINE       0     0     0
          nvme-Samsung_SSD_970_EVO_Plus_500GB_S58SNM0T906063B-part2  ONLINE       0     0     0

errors: No known data errors

  pool: earth
 state: ONLINE
  scan: scrub repaired 0B in 01:56:28 with 0 errors on Sun May 25 01:56:30 2025
config:

        NAME        STATE     READ WRITE CKSUM
        earth       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdr2    ONLINE       0     0     0
            sdp2    ONLINE       0     0     0
            sde2    ONLINE       0     0     0
            sdf2    ONLINE       0     0     0
            sdl2    ONLINE       0     0     0
            sdd2    ONLINE       0     0     0

errors: No known data errors

  pool: kobol
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jul 10 09:44:55 2025
        10.3T / 61.5T scanned at 14.9G/s, 6.22T / 61.5T issued at 806M/s
        1.13T resilvered, 10.12% done, 19:57:41 to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        kobol                                       DEGRADED     0     0     0
          raidz3-0                                  DEGRADED     0     0     0
            sdg2                                    ONLINE       0     0     4
            sda2                                    ONLINE       0     0     4
            sdc2                                    ONLINE       0     0     4
            3994024730740366974                     FAULTED      0     0     0  was /dev/sdg2
            sdq2                                    ONLINE       0     0     4
            replacing-5                             DEGRADED     0     0     4
              sds2                                  OFFLINE      0     0     0
              ee3c5456-d81b-4494-80d9-985c164207a7  ONLINE       0     0     0  (resilvering)
            replacing-6                             DEGRADED     0     0     4
              16083278097396954957                  UNAVAIL      0     0     0  was /dev/sde2
              79557534-d3e8-474f-8748-e0a8ec5a2606  ONLINE       0     0     0  (resilvering)
            sdh2                                    ONLINE       0     0     4
            sdj2                                    ONLINE       0     0     4
            sdi2                                    ONLINE       0     0     4
            sdn2                                    ONLINE       0     0     4

I just noticed in the zpool list output that there is a 7.28 TB disk marked UNAVAIL in pool “kobol”. That should not be there; there should not be a disk of that size in that pool.

The drive that shows as 7.28 TB and UNAVAIL is likely just an artifact of a reboot re-ordering the drive names.

This can be seen in the zpool status output for kobol, which shows that the missing device used to be “sde2”; that device name now actually belongs to a drive in pool “earth”.
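
If you want to confirm which physical drive each of those names points at after the re-order, something like this from the shell should show the current mapping (standard lsblk columns, nothing TrueNAS-specific):

# show every disk/partition with its size, serial number and partition UUID
sudo lsblk -o NAME,SIZE,SERIAL,PARTUUID

# or resolve one of the partuuids from zpool status to its current device node
ls -l /dev/disk/by-partuuid/ | grep ee3c5456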

Basically, wait until all the disk resilvers are complete. Then either reboot or manually export the pool that is missing from the GUI. Once that is done, try to import the pool from the GUI.
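
Once zpool status shows the resilver has finished, the sequence would look roughly like this (a sketch of the export-then-reimport route; the import option lives under Storage in the GUI, if I remember the menu right):

# confirm the resilver is done (no "resilver in progress" line remains)
sudo zpool status kobol

# only then, export the pool the GUI can't see
sudo zpool export kobol

# then go to the GUI (Storage -> Import Pool) and import "kobol" there,
# so the middleware registers the pool properly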

Please note that certain troubleshooting steps taken from the Unix shell are not recognized by the middleware / GUI, specifically importing a pool from the command line.

One of the hardest lessons to learn about TrueNAS and ZFS is figuring out which tasks can safely be done through the command line without causing any difficulty for the middleware / GUI.
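
If you want to see what the middleware itself thinks exists (as opposed to what ZFS sees), something along these lines should work on SCALE; midclt is the middleware client and pool.query is, as far as I know, the relevant call:

# ask the middleware which pools it has registered
sudo midclt call pool.query | jq '.[].name'

# if jq isn't installed, the raw JSON output still shows the pool names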


Thank you, I knew there had to be some reason why it was important to plug drives back into the same ports after completing troubleshooting…

So the pool, kobol, that should be resilvering doesn’t show up in the GUI… but it is still resilvering, right? The icon at the top right of my dashboard with the spinning arrows does not show up, so I was not 100% confident… although the zpool status command in the shell does seem to indicate that resilvering is taking place.

It’s probably going to be at least a couple more days before that resilvering is done lol so I’ll just have to try to be patient

Yes, if the GUI won’t show the status, use zpool status kobol from the Unix shell command line. As long as it is progressing, let it do its thing.
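
If you want to keep an eye on it without retyping the command, something like this works from the shell (watch is just the standard Linux utility, re-running the command every two minutes):

# refresh the resilver progress every 120 seconds
sudo watch -n 120 zpool status kobol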