Renaming, importing, and specifying a mount point for pool replacement

Hi,

As I need to replace a legacy encrypted pool with an OpenZFS-encrypted one, I need to rename the newly created pool. Additionally, the mount point needs to be changed.

Searching this forum, I came across this post, which lists the necessary steps: Problem with pool renaming and /mnt - #4 by Farout
Within the thread it says:

From what I understand, the manual page says the following regarding the altroot property:

altroot is not a persistent property.

Therefore, if the change should persist even after a reboot, I should use

zpool import -R /mnt newpoolname

which will use the previously created new pool name (from zpool export poolname) and mount it so that it becomes available at /mnt/poolname.

Is that correct?
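
For reference, I assume the result could be checked after such an import with something like the following (newpoolname again being just a placeholder):

zpool get altroot newpoolname
zfs get mountpoint newpoolname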

zpool import -o altroot=/mnt <oldname> <newname>
zpool export <newname>

Then import from the UI, which will take care of everything.


Note that many configuration fields (such as export/share paths) will need to be changed to point to the new pool.
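
If you want to check from the shell which shares still point at the old pool, something along these lines may work (a sketch only; it assumes the TrueNAS middleware client midclt and its sharing.smb.query / sharing.nfs.query methods are available on your version, and that /mnt/core is the old path):

midclt call sharing.smb.query | grep -o '/mnt/core[^"]*'
midclt call sharing.nfs.query | grep -o '/mnt/core[^"]*'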


Hi pmh,

I seem to have an issue here.

Via SSH I tried:

  1. Exporting the pool core via the UI
  2. zpool import -o altroot=/mnt core dyingcore, which fails with cannot import 'core': no such pool available

I also tried step 2 without first exporting the pool as described in step 1, which fails with the same message.

What am I doing wrong here?

What does “zpool status” say?

Hi @NugentS

saturn% zpool status
  pool: boot-pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Thu Apr 24 03:45:04 2025
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            da0p2   UNAVAIL      0     0     0  cannot open
            nvd0p2  ONLINE       0     0     0

errors: No known data errors

  pool: core
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 03:31:43 with 0 errors on Tue Apr 15 10:31:43 2025
config:

        NAME              STATE     READ WRITE CKSUM
        core              ONLINE       0     0     0
          raidz2-0        ONLINE       0     0     0
            gptid/1.eli   ONLINE       0     0     0
            gptid/8.eli   ONLINE       0     0     0
            gptid/13.eli  ONLINE       0     0     0
            gptid/b.eli   ONLINE       0     0     0

errors: No known data errors

  pool: core2
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        core2          ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            gptid/id1  ONLINE       0     0     0
            gptid/id2  ONLINE       0     0     0
            gptid/id3  ONLINE       0     0     0
            gptid/id4  ONLINE       0     0     0

errors: No known data errors

  pool: jail
 state: ONLINE
  scan: scrub repaired 0B in 00:00:21 with 0 errors on Tue Apr  1 00:00:21 2025
config:

        NAME        STATE     READ WRITE CKSUM
        jail        ONLINE       0     0     0
          nvd0p3    ONLINE       0     0     0

errors: No known data errors

It’s already imported. You need to export it from the UI first. If that fails for some reason, there should be an error message that you can post here.

Here is what I do:

saturn% zpool status core
  pool: core
 state: ONLINE

(core exported via the UI at this point)

saturn% zpool status core
cannot open 'core': no such pool
saturn% zpool import -o altroot=/mnt core dyingcore
cannot import 'core': no such pool available

What does zpool import without any arguments show?

saturn% sudo zpool import
Password:
   pool: jail
     id: 16759351975472055591
  state: UNAVAIL
status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
 config:

        jail        UNAVAIL  insufficient replicas
          nvd0      UNAVAIL  cannot open

Does it matter that core is GELI-encrypted? Importing it via the UI works perfectly fine, FYI.

Of course it does! You need to activate/unlock the GELI devices before you can see the pool. How that is done on the command line is left as an exercise to the reader :wink:

Sorry, no time to try and find out right now. If nobody else steps in or you find it out yourself, I can possibly help tomorrow.

Hey pmh :slight_smile:

No problem, I appreciate you being so helpful already. Thank you for encouraging me. The following has been achieved so far:

  1. I successfully exported pool core via the UI, as discussed above.

  2. A sudo zpool import did not list pool core.

  3. I read a bit about GELI encryption and found out that with sudo geli attach -p -k pool_core_encryption.key /dev/gptid/<gptid> I can attach every device that is part of the pool (note for anyone who might need this: without the -p argument you will be prompted for a passphrase. I never set one, and leaving the prompt empty still did not give me access to the drive; therefore, use -p to skip the passphrase component of the key). See the command sketch after the zpool import output below.

  4. Now I have active drives:

saturn% geli status
            Name  Status  Components
mirror/swap0.eli  ACTIVE  mirror/swap0

   gptid/id1.eli  ACTIVE  gptid/id1
   gptid/id2.eli  ACTIVE  gptid/id2
   gptid/id3.eli  ACTIVE  gptid/id3
   gptid/id4.eli  ACTIVE  gptid/id4
saturn% sudo zpool import
   pool: jail
     id: 16759351975472055591
  state: UNAVAIL
status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
 config:

        jail        UNAVAIL  insufficient replicas
          nvd0      UNAVAIL  cannot open

   pool: core
     id: 8982781163759361897
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        core               ONLINE
          raidz2-0         ONLINE
            gptid/id1.eli  ONLINE
            gptid/id2.eli  ONLINE
            gptid/id3.eli  ONLINE
            gptid/id4.eli  ONLINE
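
In other words, the unlock step boils down to one geli attach per member device of the pool (a sketch using the key file name and the redacted gptids from above as placeholders):

sudo geli attach -p -k pool_core_encryption.key /dev/gptid/id1
sudo geli attach -p -k pool_core_encryption.key /dev/gptid/id2
sudo geli attach -p -k pool_core_encryption.key /dev/gptid/id3
sudo geli attach -p -k pool_core_encryption.key /dev/gptid/id4

After that, geli status shows the .eli devices as ACTIVE and zpool import can see core, as in the output above.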

  5. Now following @pmh's revised zpool import commands, which work flawlessly:
saturn% sudo zpool import -o altroot=/mnt core dyingcore
saturn% sudo zpool export dyingcore

  6. Going back to the UI, I choose to import a pool but do not select the option to decrypt the disks, as I already did that in step 3.

  7. At step 3 of the import wizard I select dyingcore from the drop-down menu.

  8. The pool dyingcore is now fully imported and operational:

saturn% zpool list dyingcore
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
dyingcore  14.5T  5.95T  8.55T        -         -     1%    41%  1.00x    ONLINE  /mnt

saturn% zpool status dyingcore
  pool: dyingcore
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 03:31:43 with 0 errors on Tue Apr 15 10:31:43 2025
config:

        NAME               STATE     READ WRITE CKSUM
        dyingcore          ONLINE       0     0     0
          raidz2-0         ONLINE       0     0     0
            gptid/id1.eli  ONLINE       0     0     0
            gptid/id2.eli  ONLINE       0     0     0
            gptid/id3.eli  ONLINE       0     0     0
            gptid/id4.eli  ONLINE       0     0     0

errors: No known data errors
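
As a final check (dataset names will of course vary), the datasets are now mounted under /mnt/dyingcore, matching the ALTROOT shown in zpool list above:

zfs list -o name,mountpoint -r dyingcore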

Great success!! Thank you @pmh :smiley: :love_you_gesture: