Guidance on moving from a raidz1 to a raidz2 pool?

I have a system running TrueNAS Core (TrueNAS-13.0-U4) with a 4x4TB raidz1 pool, and had a drive failure. I bought 4x16TB drives as replacements (the failed drive itself has already been replaced!), but I want to move from raidz1 to raidz2 in that pool. I do realize this requires recreating the pool (and thus restoring from backup), and it's this part I'm hoping to get a little handholding on - especially if there's specific documentation around each piece that someone can point me to. The host in question has 72GB RAM and only four disk slots - so I can't do it in-place :frowning:

I'm hoping the data migration piece will be easy: I have another system on the same LAN (gigE connectivity between them) which has ZFS with plenty of space. That system is running NetBSD 10 - and while I haven't yet checked for ZFS feature parity, the ZFS on the TrueNAS box (the one I'm migrating) is fairly old (it was running FreeNAS 9 at one point), so I'm hoping I can do the backup with just "zfs send | ssh newhost zfs receive" - small tests have proven OK so far. Is it safe to assume I can do the zfs receive into a path like tank/truenas/tank and have everything work out OK? What about the other direction, once I've recreated the zpool as raidz2? Do I need to worry about anything to put all the data back where it was?
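
Concretely, here's the sort of thing I have in mind - the hostname, snapshot name, and the -F on the restore are all just my guess at this point, so please correct me if I'm off:

# take a recursive snapshot and push everything to the backup box
# (tank/truenas must already exist there; -u keeps the received datasets unmounted)
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh backuphost zfs receive -u tank/truenas/tank

# later, after recreating the pool as raidz2, pull it all back
# (-F so the stream can overwrite the freshly created, empty root dataset)
ssh backuphost zfs send -R tank/truenas/tank@migrate | zfs receive -uF tank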

If this were just data migration, I think I'd be fine - but I've also got a few virtual machines (and two jails!) running on this host, and I'd love suggestions for making sure they work correctly after the restore. Do I need to fully recreate everything? Is there any sane way to shut them down, remove the storage from under them, replace the storage, and have them JFW? I've backed them up separately, but I'd sure love to not have to recreate them unless it's really necessary.

Thanks in advance for any pointers to relevant docs, or even better, walkthroughs of “I did something similar this way”, etc.

+j

You shouldn't have an issue with doing a zfs send/receive to different paths. I do it all the time; for example, locally I send from one zpool to another for backup with a different path, such as tank1/someData/ to tank2/backup/someData/.
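
Roughly, that local copy is nothing more exotic than this (dataset names made up for the example):

# tank2/backup has to exist already; -u keeps the copy from mounting over anything live
zfs snapshot -r tank1/someData@backup
zfs send -R tank1/someData@backup | zfs receive -u tank2/backup/someData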

Also, because services are using those paths, I normally disable said services first; otherwise you'll start to get busy errors. I sometimes do a zfs rename if I'm moving stuff around, and you'll need to make sure the path/data isn't in use. After restoring to the original path, simply turn said services back on. I'm not sure about jails, though, but VMs are fine.

It's a shame you cannot attach the disks to the same system so you can do it all locally.

If you have a lot of flat files, you can also do things like rsync or a simple mv/cp to migrate. I tend to use a mixture of them all depending on the needs.
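
For the flat-file route, something this simple does the job (paths are just an example; the trailing slashes mean "copy the contents of the directory"):

rsync -avhP /mnt/tank/someData/ backuphost:/mnt/backup/someData/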

Hope this helps after 3 days of no response.

OK, this is potentially really helpful: if I understand you correctly, you’re saying that disabling the services (and VMs) before destroying the pool should be enough to allow them to be re-enabled seamlessly once the new pool has the paths back?

One other specific concern I have - should I do anything to handle the .system filesystems? It's been a while since I looked, but they have some system-specific data in them - and (without understanding more fully exactly what's in there) it seems like they're either crucial to back up and restore, or crucial to NOT restore because they'll contain conflicting info. Do you know which it is? :slight_smile: (I suppose it could be important to merge old and new data as well, but that seems… complicated)

I do really appreciate the response. There’s been a delay getting the backup server ready, but I’m hoping to start next week.

+j

I would suggest you do a quick test beforehand with zfs rename, so you can prove it to yourself.
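
Something along these lines is enough to see how the VMs and shares react (dataset name made up):

zfs rename tank/someData tank/someData-test
# check what complains about the missing path, then put it back
zfs rename tank/someData-test tank/someData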

I haven't seen a .system folder, but there is a .zfs directory which gives access to the snapshots. Where is this .system?

That’s a good point about testing with zfs rename - I’ll do that.

Looks like maybe .system is a holdover from ancient FreeNAS or something; the datasets aren't mounted and are all marked "legacy":

tank/.system                                                                  1.46G  1.12T      853M  legacy
tank/.system/configs-b197db98fe704f2784c82cb4a0d0471c                         6.19M  1.12T     5.96M  legacy
tank/.system/configs-e2eccb3703ad46d2b19f2e4809443384                          371M  1.12T      371M  legacy
tank/.system/cores                                                            1.04M  1023M      140K  legacy
tank/.system/rrd-b197db98fe704f2784c82cb4a0d0471c                              163K  1.12T      140K  legacy
tank/.system/rrd-e2eccb3703ad46d2b19f2e4809443384                              123M  1.12T      123M  legacy
tank/.system/samba4                                                            686K  1.12T      355K  legacy
tank/.system/services                                                          140K  1.12T      140K  legacy
tank/.system/syslog-e2eccb3703ad46d2b19f2e4809443384                          14.3M  1.12T     14.3M  legacy
tank/.system/webui                                                             128K  1.12T      128K  legacy

Oh, is this the system dataset pool? Is the default now boot-pool?

It's under System > Advanced > Storage.

Edit: looks like it


"Legacy" in this regard refers to the mountpoint property: mountpoint=legacy means the dataset is mounted the traditional way (mount/fstab) rather than automatically by ZFS.

Just move the system dataset to your boot pool. Move it back to your new pool when finished.
