I have a system running TrueNAS Core (TrueNAS-13.0-U4) with a 4x4TB raidz1 pool, and had a drive failure. I bought 4x16TB drives as replacements (the failed drive has already been replaced!), but I want to move from raidz1 to raidz2 in that pool. I do realize this requires recreating the pool (and thus restoring from backup), and it's this part I'm hoping to get a little handholding around - especially if there's specific documentation around each piece that someone can point me to. The host in question has 72GB RAM and only four disk slots, so I can't do it in-place.
I'm hoping the data migration piece will be easy: I have another system on the same LAN (gigE connectivity between them) which has ZFS with plenty of space. That system is running NetBSD 10 - and while I haven't yet checked for ZFS feature parity, the ZFS on the TrueNAS box (the one I'm migrating) is fairly old (it was running FreeNAS 9 at one point), so I'm hoping I can do the backup with just "zfs send | ssh newhost zfs receive" - small tests have proven OK so far. Is it safe to assume I can do the zfs receive into a path like tank/truenas/tank and have everything work out OK? What about the other direction, once I've recreated the zpool as raidz2? Do I need to worry about anything to put all the data back where it was?
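For reference, the shape of what my small tests have looked like is roughly this (pool, host, and snapshot names are just my placeholders, and I'm assuming -R, -u, and -F behave the way I expect):

    # On the TrueNAS box: recursive snapshot of the whole pool,
    # then replicate the full tree into a nested path on the backup host
    zfs snapshot -r tank@migrate1
    zfs send -R tank@migrate1 | ssh newhost zfs receive -u tank/truenas/tank

    # Reverse direction, once the pool is recreated as raidz2
    # (-F to let it write over the freshly created, empty pool)
    ssh newhost zfs send -R tank/truenas/tank@migrate1 | zfs receive -F tank

The -u on the receive is there because the sent datasets carry their mountpoint properties, and I don't want the backup host trying to mount them over its own paths.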
If this were just data migration, I think I'd be fine - but I've also got a few virtual machines (and two jails!) running on this host, and I'd love suggestions for making sure they work correctly after the restore. Do I need to fully recreate everything? Is there any sane way to shut them down, remove the storage from under them, replace the storage, and have them JFW? I've backed them up separately, but I'd sure love to avoid recreating them if at all possible.
Thanks in advance for any pointers to relevant docs, or even better, walkthroughs of “I did something similar this way”, etc.
You shouldn't have an issue doing a zfs send/receive to different paths. I do it all the time; for example, locally I send from one zpool to another for backup with a different path, such as tank1/someData to tank2/backup/someData.
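As a concrete sketch, with my example names (receiving into a path that doesn't exist yet just creates it):

    # Snapshot the source, then send it to a different path on another pool
    zfs snapshot tank1/someData@backup1
    zfs send tank1/someData@backup1 | zfs receive tank2/backup/someData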
Also, because services are using those paths, I normally disable said services first; otherwise you'll start to get busy errors. I sometimes do a zfs rename if I'm moving stuff around, and you'll need to make sure the path/data isn't in use. After restoring to the original path, simply turn said services back on. I'm not sure about jails, but VMs are fine.
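The rename itself is a one-liner - hypothetical names here, and it will fail with a "dataset is busy" error if anything still has files open under it:

    # Move a dataset aside; nothing can be using it at the time
    zfs rename tank/someData tank/someData-old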
It's a shame you cannot attach the disks to the same system so you can do it all locally.
If you have a lot of flat files, you can also do things like rsync or a simple mv/cp to migrate, as in the sketch below. I tend to use a mixture of them all depending on the need.
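For example, something along these lines for a plain-file tree (paths are made up; these are just the flags I usually reach for):

    # -a preserves permissions/times/ownership, -H keeps hard links,
    # -X carries extended attributes; trailing slashes mean "copy contents"
    rsync -aHX --progress /mnt/tank1/someData/ otherhost:/mnt/tank2/backup/someData/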
OK, this is potentially really helpful: if I understand you correctly, you’re saying that disabling the services (and VMs) before destroying the pool should be enough to allow them to be re-enabled seamlessly once the new pool has the paths back?
One other specific concern I have: should I do anything to handle the .system datasets? It's been a while since I looked, but they contain some system-specific data - and (without understanding more fully exactly what's in there) it seems like they're either crucial to back up and restore, or crucial to NOT restore because they'll contain conflicting info. Do you know which it is? (I suppose it could be important to merge old and new data as well, but that seems… complicated)
I do really appreciate the response. There’s been a delay getting the backup server ready, but I’m hoping to start next week.