GELI Pool Migration Advice & SCALE

Hi, I'm after a bit of advice to check I am on the right track with migrating my existing pool from GELI and moving to SCALE.

I currently have one GELI-encrypted pool with 4 disks, using Active Directory. My plan is:

  1. Create a new pool in TrueNAS CORE with ZFS, then use rsync to copy the data from the old pool (a rough sketch of this step is below the list). Not too worried about the ACLs, I will just strip them.

  2. Check the data is there OK after a reboot.

  3. Physically disconnect the original GELI pool and the TrueNAS CORE boot disk.

  4. On a new boot disk, install TrueNAS SCALE, import the unencrypted pool, and check everything.

  5. Eventually reconnect the old encrypted pool and wipe the disks before using them to create a new spare pool in SCALE.
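
A minimal sketch of the rsync step, assuming hypothetical mountpoints /mnt/OldPool/dataset1 and /mnt/NewPool/dataset1 (adjust to your own layout):

```
# Plain -a preserves ownership and timestamps but not NFSv4 ACLs,
# which matches the intent of stripping the ACLs during the copy.
rsync -a --info=progress2 /mnt/OldPool/dataset1/ /mnt/NewPool/dataset1/
```

The trailing slashes matter: they copy the contents of the source dataset into the destination rather than nesting an extra directory inside it.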

Is there anything I am missing? Any reason this would not work, or is there a better way to do this?

Thanks
Mike

Are you hoping to use native ZFS encryption on SCALE?

What is your dataset layout/hierarchy right now? Did you ever save files directly in the root dataset?

Why not use ZFS replication to transfer everything to the new pool? This will retain everything, including filesystems, snapshots, and ACLs.


Hi

Maybe I have misunderstood something. I thought the replication wizard needed two different servers, or have I completely misunderstood?

Or do you mean use zfs send and receive? Thanks

You can run a one-time manual Replication Task from one local pool to another. Just choose “Local” in the options.

It’s also possible to use the command-line, but it requires some more knowledge.
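
For example, a rough command-line equivalent of a one-time local replication, assuming a hypothetical dataset OldPool/dataset1 and a snapshot named @migrate:

```
# Take a recursive snapshot, then send it (with child datasets, snapshots,
# and properties) to the new pool; the target dataset is created on receive.
zfs snapshot -r OldPool/dataset1@migrate
zfs send -R OldPool/dataset1@migrate | zfs recv -v NewPool/dataset1
```

The GUI Replication Task handles the snapshot creation and naming for you, so it is the easier route.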

What about the other questions?


Hi

Thanks for that. Nothing is saved in the root dataset.

Structure is

Pool (system dataset)
→ dataset1
→ iocage
→ dataset2
→ dataset3
→ dataset4

Thanks

Mike

You can do a recursive replication for each parent dataset, to nest them under the new pool’s root dataset.

OldPool/dataset1 → NewPool/dataset1
OldPool/dataset2 → NewPool/dataset2
OldPool/dataset3 → NewPool/dataset3
OldPool/dataset4 → NewPool/dataset4

In the Replication Task, you manually type the destination dataset into the “target” text field. It will be created under the new pool’s root dataset the first time the replication is run.
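
Once the tasks have run, it's worth confirming from the shell that the datasets and their snapshots actually arrived, for example:

```
# List the replicated datasets and a sample of their snapshots.
zfs list -r -o name,used,mountpoint NewPool
zfs list -t snapshot -r NewPool | head -n 20
```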

Thanks for all the help :)

If anything is in the root dataset, what sort of issues does that cause?

Prior to some time around CORE’s 12.0 release, there was no safeguard against messing with the root dataset’s permissions. (Currently, you’ll notice that such options are grayed out if you attempt to do so in the GUI.)

Non-root users are no longer allowed write access to the root dataset.

If you try to copy from one root dataset to another, and begin to use this new system, you might find that you cannot even delete these existing files via an SMB share.

Besides that, it’s good practice to treat the root dataset as a glorified placeholder for default (inheritable) dataset options, rather than a dataset to store files in. When it comes to encrypted root datasets, you cannot even replicate one into another; you must nest the received dataset at least one level below the target pool’s root dataset.

So just don’t use the root dataset for any file storage whatsoever.
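
As a quick sanity check before migrating, you can confirm that nothing is sitting directly in the pool's root dataset (assuming the pool is mounted at the usual /mnt/OldPool path):

```
# Lists only files that live directly in the root dataset;
# files inside child datasets are not reported.
find /mnt/OldPool -maxdepth 1 -type f
```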
