Simulating a complete loss of both drives and the server system; unable to restore from replicated data

So I have put 2x 6 TB IronWolf drives in a mirror for my main pool, called nas.

I put in a pen drive to test full destruction of the pool and server, and restoration from my replication task to said pen drive.

After running a replication, I export a config file, then export/destroy my nas pool and reset to defaults from General Settings.

After reboot I restore the config and get a ghost nas pool, which I can't delete because all my tasks would be deleted along with it.

If I restore the data first with replication and then restore the config, I get my nas pool in the exact same state, and the 2 drives are now shown as exported pools.

Basically, I am not able to get both my data and my pool-related tasks restored; it's one or the other. What is the right way to do this? What am I doing wrong here?

Just FYI, I am using the same 2 drives, just wiping them, so I don't get what the issue is.

Just to be sure: in which "exact same state"?
Also: how did you replicate to a USB drive? Did you manually create a ZFS stripe on it, mount it, and then replicate?

This all sounds a bit weird to me.

In any case, if my system actually died completely and I had a full pool replica available, I'd probably reinstall the machine, create a new pool with the old name, restore the data backup, and then restore the config backup. This would still only work if the ZFS pool and datasets exactly matched what the config backup expects, though (as far as I understand).
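Roughly, that sequence in raw ZFS commands might look like the sketch below. The device paths and snapshot name are placeholders (not from the thread), and TrueNAS users would normally do all of this through the UI rather than the shell:

```shell
# Re-create the data pool under its old name (disk identifiers are examples):
zpool create nas mirror /dev/sda /dev/sdb

# Import the backup pool and replicate the data back. -R sends all child
# datasets and snapshots; -F on the receive side forces the target to match
# the incoming stream:
zpool import nas_bkp
zfs send -R nas_bkp@backup-snap | zfs recv -F nas

# Only after the data is back: upload the saved config file via the web UI.
```

The key point is the order: data first, config last, so that the pool and datasets already exist when the config is applied.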

Hi colin,

Thanks for your reply on this matter.

Allow me to clarify this further.

#1. Created pool nas and added users, groups, datasets, and permissions; configured UPS, services, network, etc. All of this on 2x mirrored 6 TB HDDs.

#2. Inserted a 64 GB USB 3.0 pen drive. Wiped the disk, made a striped ZFS pool named nas_bkp, and used a replication task to copy nas to nas_bkp.

#3. Exported the config file.

#4. Deleted pool nas and destroyed the data.

#5. Reset the config to defaults (to get a fresh-install-like config).

#6. Made a new pool nas. Same name for easy restore.

#7. Made a new replication task from nas_bkp to nas.

#8. Checked that all data and permissions are present.

#9. Restored the config.

#10. Pool nas went offline and its drives showed as not connected. The nas pool created in #6 was disconnected and listed as exported.

#11. All tasks and schedules are back, but the data is gone.
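For reference, step #2 above, expressed as raw ZFS commands, might look roughly like this. The device path and snapshot name are placeholders I've made up for illustration; the TrueNAS replication task wraps the send/receive for you:

```shell
# Single-disk ("stripe") backup pool on the pen drive (device path is an example):
zpool create nas_bkp /dev/sdc

# Snapshot the source recursively, then replicate everything to the backup pool.
# -R sends all child datasets and snapshots; -F forces the target to match:
zfs snapshot -r nas@bkp-1
zfs send -R nas@bkp-1 | zfs recv -F nas_bkp
```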

I think that’s because ZFS pools have an internal ID besides the name, and your config most probably references that.

I think if you import the then-exported pool you should be fine, though, as most things (shares and so on) care more about paths than the ZFS pool ID. I am absolutely not sure about replication tasks, scrubs, and so on, though.
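That internal ID is visible from the shell: each pool carries a GUID that survives export/import but changes when the pool is destroyed and re-created, which would explain why the restored config no longer matches the pool made in #6. A quick way to check (pool name from the thread; the printed value will differ per pool):

```shell
# Print the pool's GUID. Compare it before destroying and after re-creating
# the pool: a re-created pool with the same name still gets a new GUID.
zpool get -H -o value guid nas
```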

I find it so mind-bending that there is no official documentation on how to do a full disaster recovery. Like, how are people running TrueNAS and not considering how to restore their systems 1:1?

I could not find a single video on this topic anywhere on YouTube.

Just take it as a sign of how reliable TrueNAS is :wink:

Dang it.

You nailed that point home. Dead center!

Thanks, man. I will just manually re-create the tasks and schedules if it ever comes to that, given how unlikely it is.