Can't import pool via web GUI or CLI

Inexperienced user here. Much hand-holding required.

Recently migrated from CORE to SCALE (24.10.2) in order to extend my vdev with another drive. Once that was done, I started having issues with SMB hanging, so I decided on a fresh install. When I try to import the pool from the web GUI, it gets stuck at 0% for hours. Some google-fu led me to try importing via the CLI, still with no success.

root@truenas[/home/truenas_admin]# zpool status -v
  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors
root@truenas[/home/truenas_admin]# zpool list          
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool   464G  2.63G   461G        -         -     0%     0%  1.00x    ONLINE  -
root@truenas[/home/truenas_admin]# zpool import   
  pool: tank
    id: 9307007750265967810
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:

        tank                                      ONLINE
          raidz1-0                                ONLINE
            45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8  ONLINE
            45ab5b24-fc78-11ec-a7a0-bc5ff48ba3c8  ONLINE
            cd7e41f9-4c94-441d-89b8-d45250d3150a  ONLINE
            d435051f-34bd-4606-8e44-1e2c58249fc1  ONLINE

Running zpool import -f tank freezes my shell, and the pool still doesn’t show up in the GUI. Running zpool import again shows:

root@truenas[/home/truenas_admin]# zpool import
no pools available to import

What should I try next?
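EDIT: I’ve since learned that zpool import with no arguments only lists pools that are not currently imported, so “no pools available to import” can also mean the forced import actually went through. A quick sanity check at that point would have been:

zpool list tank
zpool status tank

If either of those reports the pool, it is already imported and the problem lies elsewhere.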

Was your pool GELI encrypted?

It would also be interesting to know what hardware is being used here, and whether you’re using virtualisation, as a misconfigured VM can sometimes cause the error message you’re seeing.

Did you make a backup of your TrueNAS configuration files and then restore those after your fresh install?

I too am curious if GELI is involved.

EDIT: Don’t force the import, wait for advice. You may need to roll back to CORE if you don’t have a backup of your data.

@Protopia it wasn’t. I remember following a tutorial to migrate, and it said to back up the dataset keys, but that option wasn’t there for me.

@neofusion
no virtualization
i5-3570K
ASRock Z77 Extreme4
12 GB RAM (2x4 GB & 2x2 GB)
500 GB SSD
4x 8 TB HGST drives

@joeschmuck I did back up the config and restored it after the fresh install. Is it even possible to go back? I remember doing an upgrade to my pool in SCALE, and there was a warning saying it’s not backwards compatible.

You did the one thing you should not have done. I was pretty sure there was a clear warning, and it also said that you did not have to do the upgrade. But yes, if you upgraded the ZFS feature set, you cannot go back. I wish the warning were in big red flashing lights. If you don’t need a new feature (99.9999% of us home users do not), do not upgrade the ZFS feature set. I have not upgraded mine in years. That upgrade has nothing to do with SCALE in particular; it is just a ZFS update.
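If you just want to see where a pool stands without changing anything: zpool upgrade with no arguments lists pools whose feature sets are not fully enabled, and the individual flags show up as pool properties:

zpool upgrade
zpool get all tank | grep feature@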

Feel free to sit and wait for other feedback (from people more educated in this matter than me). I don’t like to tag HoneyBadger, as I know he’s a busy man, but he is particularly skilled in this area.

Perhaps search the forum as he has helped many others in a similar situation.

But if nothing else comes over the coming days/weeks, you could perhaps try:

zpool import -f 9307007750265967810

Perhaps open a tmux session first and give it time.
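Something along these lines (the session name is just an example):

tmux new -s zfs-import
zpool import -f 9307007750265967810

Detach with Ctrl-b then d, and reattach later with tmux attach -t zfs-import to see how it went.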

I also am not the expert, however… I’d give what @Johnny_Fartpants suggested a try first. If that fails, try this:
lsblk -o +PARTUUID,NAME,LABEL | grep "tank" | egrep -v "swap" | grep "part" | awk '{print $7}' | tail -n1

This should output the gptid for this pool. You may then be able to import the pool this way: zpool import -d /dev/gptid/YOUR_GPTID

The examples I saw on the internet had zpool import -d /dev/gptid/gptid/YOUR_GPTID (note the double gptid), but if yours does not start with gptid once you list it, I’d drop it, and add it back if the first attempt fails. You may need to add -f as well; play it by ear.
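One caveat I’m not sure about: /dev/gptid is a FreeBSD/CORE path, and SCALE is Linux, where the equivalent links live under /dev/disk/by-partuuid. Since -d points zpool import at a directory (or device) to scan, the Linux form would presumably be:

zpool import -d /dev/disk/by-partuuid
zpool import -d /dev/disk/by-partuuid -f tank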

Again, I’m not the expert.

You never said whether you already have a backup available. I hope you do, for safety’s sake.


Came back home today to find TrueNAS had restarted and was stuck on

Job zfs-mount.service/start running (2h 19min 48s / no limit)

A separate message above read:

[FAILED] Failed to start ix-zfs.service - Import ZFS pools.
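I didn’t capture the logs at the time, but the service journal presumably would have said why the import failed; something like:

journalctl -u ix-zfs.service -b
journalctl -u zfs-mount.service -b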

@Johnny_Fartpants Had to reinstall TrueNAS SCALE to boot up successfully. Trying your method now; waiting to see if anything happens after a while.

@joeschmuck Yeah, my thought process was that since it’s in a stable release, it’s safe to upgrade the ZFS feature set with the big button there. Lesson learned. Also, I do not have a backup. The family pictures are saved elsewhere, but everything else was not.

Thankfully the family photos were saved. Hopefully the other stuff you can live without; well, you may have to. As for the ZFS feature set upgrade, I fell into that trap myself, so I do speak from experience. Mine did not leave my data inaccessible, but I did have to deal with an unstable system for a while until I could fix it.

I did learn one important thing: have some sort of backup for your important data. I have a 5TB WD USB external archive drive. Periodically I will delete everything on it and then rewrite all the data. I say delete the data because it is an SMR drive, which means a lot of slow rewriting if data already exists on it. If I only have a few GB of changes, I will just let the drive do the rewriting work; if it is more like 200GB, it is time to erase the drive first. It is faster that way. That is the price you pay for an SMR drive, but 5TB in a small USB drive makes it very portable and easy to physically manage. I also have a 4TB USB NVMe drive that I am about to start speed testing. I only need to back up less than 3TB of data (the important data is less than 500GB); most of my NAS storage (10TB) is backups for my computers and laptops.
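A minimal sketch of that refresh cycle, assuming the USB drive is mounted at /mnt/usb-backup (the paths are just examples):

# small delta: let rsync rewrite in place on the SMR drive
rsync -a --delete /mnt/tank/important/ /mnt/usb-backup/important/
# large delta: clear the drive first, then do a clean full copy
rm -rf /mnt/usb-backup/important/
rsync -a /mnt/tank/important/ /mnt/usb-backup/important/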

I wish you luck and hope you can gain access to your data soon. There are other folks much better at this part of TrueNAS/ZFS and hopefully someone else will chime in to provide the exact advice you need.

Looks like the machine restarted overnight (presumably when zpool import -f failed) and was stuck again during boot, with the message

Job zfs-mount.service/start running (2h 19min 48s / no limit)

Did another fresh install and tried the following, still with no success:

truenas_admin@truenas[~]$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   7.3T  0 disk 
└─sda1   8:1    0   7.3T  0 part 
sdb      8:16   0   7.3T  0 disk 
├─sdb1   8:17   0     2G  0 part 
└─sdb2   8:18   0   7.3T  0 part 
sdc      8:32   0   7.3T  0 disk 
└─sdc1   8:33   0   7.3T  0 part 
sdd      8:48   0   7.3T  0 disk 
├─sdd1   8:49   0     2G  0 part 
└─sdd2   8:50   0   7.3T  0 part 
sde      8:64   0 465.8G  0 disk 
├─sde1   8:65   0     1M  0 part 
├─sde2   8:66   0   512M  0 part 
└─sde3   8:67   0 465.3G  0 part 
truenas_admin@truenas[~]$ lsblk -o +PARTUUID,NAME,LABEL | grep "tank" | egrep -v "swap" | grep "part" | awk '{print $7}' | tail -n1
45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8
truenas_admin@truenas[~]$ zpool import -d /dev/gptid/45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8
zsh: command not found: zpool
truenas_admin@truenas[~]$ sudo su                                                        
[sudo] password for truenas_admin: 
root@truenas[/home/truenas_admin]# zpool import -d /dev/gptid/45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8 
no pools available to import
root@truenas[/home/truenas_admin]# zpool import -d /dev/gptid/gptid/45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8
no pools available to import
root@truenas[/home/truenas_admin]# zpool import -d -f /dev/gptid/45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8
cannot import '/dev/gptid/45c27aff-fc78-11ec-a7a0-bc5ff48ba3c8': no such pool available
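Looking at that last one again, I think two things were wrong with it: -d takes the directory or device as its own argument, so in -d -f the -f gets read as the directory and the path gets read as a pool name; and /dev/gptid does not exist on Linux anyway. The SCALE equivalent would presumably be:

zpool import -d /dev/disk/by-partuuid -f tank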

Looking back at some of the screenshots I took of what I tried a few days ago, I originally had this error:


Googling told me to find my mount point via
zfs get mountpoint tank
and then to change it via
zfs set mountpoint=/mnt/tank tank

Should I revert to the old mountpoint? Maybe someone can tell me where/how I screwed up by trying to fix the wrong problem.

Making baby steps so far.

I did a zpool export tank, which allowed me to successfully import the pool via the GUI (the export cleanly releases the pool, clearing the “last accessed by another system” state so the middleware can import it itself).

However, the mountpoint has now somehow changed to

root@truenas[/mnt/tank]# zfs get mountpoint tank
NAME  PROPERTY    VALUE          SOURCE
tank  mountpoint  /mnt/mnt/tank  local
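As it turned out later, TrueNAS imports pools with an altroot of /mnt, and the altroot gets prefixed onto the mountpoint both in zfs get output and in the actual mount path. That can be checked directly:

zpool get altroot tank

On TrueNAS this should report /mnt, which explains the doubled prefix: a mountpoint property of /mnt/tank plus an altroot of /mnt comes out as /mnt/mnt/tank.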

Additionally, the dataset is not showing up inside secretdata in the GUI,

even though the data is still viewable via the shell:

root@truenas[/mnt/mnt/tank/bigdata/secretdata]# ls
 3D          'Audiobooks old'   Magazines   Pictures           Sports             'Videos Misc'

Any help on how to proceed next is greatly appreciated.

Circling back to yesterday: I have been reading up on mountpoints, and my current understanding is that the mountpoint might be incorrect after the GUI import.

When I go to Shares, it shows the old (correct?) mountpoint, which is /mnt/tank/bigdata/secretdata, not /mnt/mnt/tank…, and I cannot expand the triangle circled in red.

When I go to Dataset -> Permissions -> Edit I receive the following error:

Error details

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 211, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1529, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1471, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 55, in nf
    res = f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/filesystem_/acl.py", line 515, in getacl
    raise CallError('Path not found.', errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Path not found.

I attempted to change the mountpoint via the CLI but was unable to.

root@truenas[/home/truenas_admin]# zfs get mountpoint tank
NAME  PROPERTY    VALUE          SOURCE
tank  mountpoint  /mnt/mnt/tank  local
root@truenas[/home/truenas_admin]# zfs set mountpoint=/mnt/tank tank 
root@truenas[/home/truenas_admin]# zfs get mountpoint tank           
NAME  PROPERTY    VALUE          SOURCE
tank  mountpoint  /mnt/mnt/tank  local

I also tried to export the pool (in order to specify the mountpoint upon re-import), but that did not work either.

root@truenas[/home/truenas_admin]# zpool export -f tank
cannot unmount '/var/db/system/samba4': pool or dataset is busy
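That mount turned out to be the TrueNAS system dataset, which lives on the data pool by default; while it is there, the pool cannot be exported. You can see what is holding it with something like:

findmnt /var/db/system
zfs list -o name,mountpoint | grep -i system

Moving the system dataset to the boot pool (the System Dataset Pool setting in the web UI) before exporting should release it.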

Edit:

Found the issue :tada:… with zfs set mountpoint=, the /mnt/ prefix is implied, because TrueNAS mounts the pool under an altroot of /mnt. All I had to do was zfs set mountpoint=/tank tank. Not sure where in the process the mountpoint got changed, but so far everything is good-ish.
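For anyone hitting the same thing, the mechanics as I now understand them: TrueNAS imports pools with altroot=/mnt, and the altroot is prefixed onto every mountpoint, both for the actual mount and in zfs get output. So:

zpool get altroot tank           # reports /mnt on TrueNAS
zfs set mountpoint=/tank tank    # property /tank + altroot /mnt = mounted at /mnt/tank
zfs get mountpoint tank          # now shows /mnt/tank

My earlier zfs set mountpoint=/mnt/tank is what produced /mnt/mnt/tank.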

After days of stress, finally a sigh of relief.