I just installed a new hard drive in my TrueNAS Scale 24.10.1 server. I want to move a few of the datasets on my primary pool to a new pool on this disk (it is a 1-disk stripe, but this is only for local storage; all data is safely backed up elsewhere). The old pool is encrypted at the top-level dataset, and all the child datasets below it inherit the root encryption (so all are unlocked when the pool is unlocked).

I tried to use a Replication Task to copy one dataset to the new pool (set up the same way), but I am getting all kinds of errors related to encryption. If I select “Inherit encryption” in the replication task, it spits out an error saying I can’t encrypt twice. If I try to encrypt it with a new key, I get an error saying that re-encrypting a source that’s already encrypted is not supported. If I don’t encrypt it at all, the dataset gets sent over correctly, but it’s locked, and I can’t unlock it with the encryption key from either the old pool or the new one.
This is a new pool for this drive, so is there a certain way I need to set it up for replicating datasets?
What is the correct way to do this? I have tried for almost 2 hours at this point to no avail. Any help is appreciated, and I am able to completely restart the new pool creation if needed. Thanks.
Do you want the destination dataset to use the same exact encryption key as the one that currently lives on the primary pool?
If so, you need to manually type the destination name into the text field (e.g., secondpool/media). Do not try to “overwrite” the destination pool’s root dataset or a pre-created child dataset on the destination pool.
Combine this with “Full Filesystem Replication”, and you’ll have an exact replica, up to the latest snapshot that exists on the primary pool.
The caveat is that the destination dataset will remain locked until you manually unlock it for the first time.
You will also need to unlock it at least once to be able to export a usable .json keyfile. To do this, you can copy and paste the keystring (found in the primary pool’s .json keyfile), which will unlock it. Then you can export the new keyfile from the destination pool, which will correctly refer to the new names.
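If you’d rather do the unlock from the shell, this is roughly the equivalent (just a sketch; I’m assuming the dataset landed at secondpool/media, its keylocation is “prompt”, and the keystring is the hex value from the .json keyfile):

    # Confirm the dataset arrived encrypted, locked, and as its own encryptionroot
    zfs get encryption,keystatus,encryptionroot secondpool/media

    # Paste the keystring from the primary pool's .json keyfile when prompted
    zfs load-key secondpool/media

    # Mount it now that the key is loaded
    zfs mount secondpool/media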
@winnielinnie, thank you. I was able to get that to work perfectly. Once I unlocked the dataset as you described, I was able to make it take on the encryption properties of the parent dataset to avoid confusion. Thanks again.
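For anyone who finds this thread later: I believe the CLI equivalent of what I did in the GUI is zfs change-key -i (a sketch, assuming the replicated dataset ended up at secondpool/media; both its key and the pool root’s key have to be loaded first):

    # Make the dataset inherit its encryption key from the pool's root dataset,
    # so it unlocks along with everything else
    zfs change-key -i secondpool/media

    # Verify the encryptionroot now points at the pool root instead of the dataset itself
    zfs get encryptionroot secondpool/media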
That’s a good point, but I’m 2 steps ahead of you. I’ve already confirmed that all the data is there, and that it’s unlocked and accessible like you said. I then exported the pool, physically disconnected and reconnected the drive (I have a hot-swap bay from an old HP server), and confirmed that all is well. I also created a simple test SMB share to actually view a few of the files and make sure everything loads correctly. Thanks again for the help, and I appreciate the extra caution.
I tossed my backup server into the sea, physically destroyed the USB stick that holds my encryption keys, hired ex-Navy Seals to dive down to the ocean floor and plant explosive charges on the chassis, ran rm -rf / on my primary server for good measure, and now…
Listen. The point is, I lost all of my data. But I’m still way ahead of you.
This will not work if you plan to continue using it as a destination to receive incremental backups, since it will break the inheritance after each successful run. (The replication task will make the backup dataset its own “encryptionroot”, which means it will not inherit from its root dataset parent anymore.)
In fact, it will not even be able to receive a replication stream from the primary pool, unless it is unlocked.
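You can watch this happen for yourself (placeholder names again): after a replication run, listing the encryptionroots on the backup pool would show the backup dataset standing on its own instead of inheriting from the pool:

    # List encryptionroots across the backup pool; after a run, secondpool/media
    # will report itself rather than secondpool as its encryptionroot
    zfs get -r -t filesystem encryptionroot secondpool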
EDIT: I could be reading your post incorrectly. I took it to mean you explicitly changed the dataset’s properties to inherit its parent’s encryption properties, after the replication finished.
It’s disappointing that you didn’t have those Navy SEALs collect the debris and send it into space just to be sure…
Oh, I didn’t think of that, thanks for letting me know. For this dataset, I only needed to run the job once since I am completely removing the data from the other pool, so it will be fine. There are 2 other datasets I’ll have to figure something else out for, though.
Then there’s no issue. I had assumed you planned to use it as a regular backup destination from your primary pool’s dataset.
EDIT: It’s for reasons like this that it would be nice to have a “one-time dataset migration” tool in the GUI, distinct from the Replication Tasks. For the record, when I do one-time migrations, I actually bypass the GUI and use the command-line.
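To give you an idea, a one-time migration from the shell looks roughly like this (a sketch with made-up names: “tank” for the primary pool, “@migrate” for the snapshot; a raw send keeps the data encrypted with its original key the whole way through):

    # Snapshot the source dataset
    zfs snapshot tank/media@migrate

    # Raw send (-w) transfers the encrypted blocks as-is, so the destination
    # keeps the exact same key as the source
    zfs send -w tank/media@migrate | zfs recv secondpool/media

    # Unlock the copy with the original keystring, then mount it
    zfs load-key secondpool/media
    zfs mount secondpool/media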
I actually considered using the command line for this, but I was worried about something going wrong since it’s encrypted. I’ll teach myself how to do that eventually lol.