Encrypted storage pool: CORE to SCALE?

I have CORE on a system I inherited. It is running 13.0-U6.3, and I understand that I need to migrate the encrypted pool to a new pool before migrating to SCALE, but I don’t see anything in the docs, or anything else I’ve read, that tells me how. Is it a matter of just creating a new pool and copying the data to it first, then erasing or replacing the drives in the SCALE setup and reloading the data? Is there another way to un-encrypt the pool? My system stats for the office TrueNAS are below. Any help is greatly appreciated. Thanks in advance.

Does the pool say “Legacy Encryption” in the Storage → Pools screen?

Thanks for your reply. Yes, it does.

Just to confirm, since another user recently got mixed messages about whether or not they were using GELI (legacy) encryption, please run:

geli status
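
If GELI is in use, the output should list your providers with an .eli suffix, something like this (device names here are just an example and will differ on your system):

      Name  Status  Components
ada0p2.eli  ACTIVE  ada0p2
ada1p2.eli  ACTIVE  ada1p2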

Here it is

[screenshot: geli status output]

It appears you are indeed using GELI encryption.

What is the layout of your pool? How full is it?

zpool status <nameofpool>
zpool list <nameofpool>
zfs list -r -t filesystem <nameofpool>

Do you have extra drives that you were going to use to “grow” your pool anyway?

It’s also better to use SSH to log in to the server and run the commands, which allows you to copy and paste the results in here as “preformatted” text. This can be done with the </> button at the top of a new post.
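
For example, from a terminal on another machine (the address and account here are placeholders; use your server’s actual IP and an account that is allowed SSH access):

ssh root@192.168.1.100

From there you can run the commands above and copy the output straight into a post.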


I had never used SSH, so it took me a minute, but here is the information below. I do not have extra drives in the system, but I do have a drive that already holds pretty much a full backup of the data as another pool on the system.

% zpool status APWU_Storage
pool: APWU_Storage
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 00:18:05 with 0 errors on Sun Oct 6 00:18:08 2024
config:

    NAME                                                STATE     READ WRITE CKSUM
    APWU_Storage                                        ONLINE       0     0     0
      raidz1-0                                          ONLINE       0     0     0
        gptid/8c36c31b-a33e-11ed-905f-509a4c682177.eli  ONLINE       0     0     0
        gptid/ea3e7b28-a338-11ed-9317-509a4c682177.eli  ONLINE       0     0     0
        gptid/a4a06172-a333-11ed-9fe6-509a4c682177.eli  ONLINE       0     0     0
        gptid/cde05077-a32d-11ed-9b0c-509a4c682177.eli  ONLINE       0     0     0

errors: No known data errors
% zpool list APWU_Storage
NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
APWU_Storage  14.5T  353G   14.2T  -        -         0%    2%   1.00x  ONLINE  /mnt
% zpool list -r -t filesystem APWU_Storage
invalid option 'r'
usage:
list [-gHLpPv] [-o property[,...]] [-T d|u] [pool] ...
[interval [count]]

the following properties are supported:

    PROPERTY             EDIT   VALUES

    allocated              NO   <size>
    capacity               NO   <size>
    checkpoint             NO   <size>
    dedupratio             NO   <1.00x or higher if deduped>
    expandsize             NO   <size>
    fragmentation          NO   <percent>
    free                   NO   <size>
    freeing                NO   <size>
    guid                   NO   <guid>
    health                 NO   <state>
    leaked                 NO   <size>
    load_guid              NO   <load_guid>
    size                   NO   <size>
    altroot               YES   <path>
    ashift                YES   <ashift, 9-16, or 0=default>
    autoexpand            YES   on | off
    autoreplace           YES   on | off
    autotrim              YES   on | off
    bootfs                YES   <filesystem>
    cachefile             YES   <file> | none
    comment               YES   <comment-string>
    compatibility         YES   <file[,file...]> | off | legacy
    delegation            YES   on | off
    failmode              YES   wait | continue | panic
    listsnapshots         YES   on | off
    multihost             YES   on | off
    readonly              YES   on | off
    version               YES   <version>
    feature@...           YES   disabled | enabled | active

The feature@ properties must be appended with a feature name.
See zpool-features(7).

You’re definitely using GELI encryption.


Only 353 GiB used? If you can replicate that data to another pool, that will allow you to rebuild it as a new (non-GELI) pool in SCALE.

The last command is zfs, not zpool. It will show you the dataset hierarchy.


Are you saying that this other pool is a 100%, up-to-date replica of the “APWU_Storage” pool?
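
If it isn’t fully up to date, a recursive snapshot plus replication is the minimal way to sync it; a sketch, where Backup_Pool and the snapshot name are just placeholders for your actual backup pool:

zfs snapshot -r APWU_Storage@migrate
zfs send -R APWU_Storage@migrate | zfs recv -F Backup_Pool/APWU_Storage_copy

The -R flag sends the whole dataset tree with its properties, and -F lets the destination roll back to match the incoming stream.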


I was reading from another computer; here is the zfs list info. I have all the data copied to the replica pool, with the exception of /.system, /iocage, and /jails.
I wanted to make sure I had all the data backed up before I started the SCALE upgrade. I did the upgrade and thought I had the APWU_Storage pool decrypted. Obviously I didn’t, so I did a clean restore of CORE, since I will be out of the office and won’t return before the office staff need the server storage.

So after reading your info and looking at more on the forum, I guess my only real question is: can I decrypt the original pool and then upgrade to SCALE without having to move a bunch of the data, or do I erase the drives, create the pool anew, and restore the data from the backup pool? BTW, the backup pool is a single 2 TiB drive used only for the backup of the data, since we are well below the original RAIDZ1 pool’s size.

% zfs list -r -t filesystem APWU_Storage
NAME                                                          USED  AVAIL REFER MOUNTPOINT
APWU_Storage                                                  257G  10.1T 48.6G /mnt/APWU_Storage
APWU_Storage/.system                                          2.40G 10.1T 7.08M legacy
APWU_Storage/.system/configs-76c11d7f8a944b3d8e42fe35420dbaa3 320M  10.1T 315M  legacy
APWU_Storage/.system/cores                                    291K  1024M 128K  legacy
APWU_Storage/.system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3     1.20G 10.1T 41.6M legacy
APWU_Storage/.system/samba4                                   10.7M 10.1T 668K  legacy
APWU_Storage/.system/services                                 140K  10.1T 140K  legacy
APWU_Storage/.system/syslog-76c11d7f8a944b3d8e42fe35420dbaa3  25.4M 10.1T 6.18M legacy
APWU_Storage/.system/webui                                    128K  10.1T 128K  legacy
APWU_Storage/File Storage                                     6.60G 10.1T 6.60G /mnt/APWU_Storage/File Storage
APWU_Storage/Filemaker_Pro                                    125G  10.1T 124G  /mnt/APWU_Storage/Filemaker_Pro
APWU_Storage/Personel                                         73.0G 10.1T 3.06G /mnt/APWU_Storage/Personel
APWU_Storage/Personel/APackwood                               302K  10.1T 140K  /mnt/APWU_Storage/Personel/APackwood
APWU_Storage/Personel/DonPack                                 546K  10.1T 302K  /mnt/APWU_Storage/Personel/DonPack
APWU_Storage/Personel/KHolt                                   29.7G 10.1T 29.7G /mnt/APWU_Storage/Personel/KHolt
APWU_Storage/Personel/PGregory                                11.9G 10.1T 11.9G /mnt/APWU_Storage/Personel/PGregory
APWU_Storage/Personel/President                               2.47G 10.1T 2.47G /mnt/APWU_Storage/Personel/President
APWU_Storage/Personel/Secretary Treasurer                     18.5G 10.1T 18.5G /mnt/APWU_Storage/Personel/Secretary Treasurer
APWU_Storage/Personel/TOldham                                 221K  10.1T 140K  /mnt/APWU_Storage/Personel/TOldham
APWU_Storage/Personel/dpackwood                               7.35G 10.1T 7.35G /mnt/APWU_Storage/Personel/dpackwood
APWU_Storage/Personel/shared                                  308K  10.1T 134K  /mnt/APWU_Storage/Personel/shared
APWU_Storage/iocage                                           6.31M 10.1T 4.23M /mnt/APWU_Storage/iocage
APWU_Storage/iocage/download                                  302K  10.1T 128K  /mnt/APWU_Storage/iocage/download
APWU_Storage/iocage/images                                    302K  10.1T 128K  /mnt/APWU_Storage/iocage/images
APWU_Storage/iocage/jails                                     302K  10.1T 128K  /mnt/APWU_Storage/iocage/jails
APWU_Storage/iocage/log                                       302K  10.1T 128K  /mnt/APWU_Storage/iocage/log
APWU_Storage/iocage/releases                                  302K  10.1T 128K  /mnt/APWU_Storage/iocage/releases
APWU_Storage/iocage/templates                                 302K  10.1T 128K  /mnt/APWU_Storage/iocage/templates
APWU_Storage/jails                                            605M  10.1T 140K  /mnt/APWU_Storage/jails
APWU_Storage/jails/.warden-template-pluginjail-11.0-x64       605M  10.1T 590M  /mnt/APWU_Storage/jails/.warden-template-pluginjail-11.0-x64

SOLVED!! I have moved over to SCALE, recovered the former encrypted pool drives, exported/disconnected the pool, and created a new pool with the drives.
Set the pool up, moved the files from the backup, updated the ACLs, users, groups, etc. Anyway, it is working just fine. Thanks for all the help and advice.
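
For anyone else who ends up doing the same rebuild: once the new pool and datasets exist, the copy-back step can be as simple as an rsync from the backup pool (paths here are placeholders; pool creation, shares, users, and ACLs are handled in the SCALE web UI):

rsync -av "/mnt/Backup_Pool/" "/mnt/APWU_Storage/"

The -a flag preserves ownership, permissions, and timestamps where it can, which cuts down on the ACL cleanup afterwards.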