The easiest[1] solution would be to hook up both systems to the same switch and replicate your data to the new NAS with a remote replication task:
Carefully read over the possible settings; in particular, you’ll want to pay attention to full filesystem replication and set the read-only policy to ignore.
Have a look at the page and familiarize yourself with the process. Maybe create a test dataset on the old system, replicate that, and see how it goes.
When setting up the snapshots and replication tasks, I prefer to do it per dataset and not just replicate the whole pool at once.
The quickest solution would be to just swap out the hard drives and import the pool in the new system, but then you’d miss out on the upgrade (5x4 TB to 6x6 TB). ↩︎
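Under the hood, a replication task boils down to a ZFS send/receive over SSH. A minimal manual sketch of the same idea — the pool, dataset, snapshot, and host names here are placeholders, not from this thread:

```shell
# Take a recursive snapshot of the source dataset (names are examples)
zfs snapshot -r tank/data@migrate-2024

# Stream the snapshot tree to the new NAS over SSH;
# -R replicates the dataset and all children with their properties,
# -F on the receive side rolls the target back to match the stream
zfs send -R tank/data@migrate-2024 | ssh root@new-nas zfs recv -F newtank/data
```

The GUI task adds scheduling, resume support, and progress reporting on top of this, which is why it’s the safer choice for a multi-terabyte migration.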
The replication ran for about 3 hours (roughly 5% complete) and then failed with…
Full ZFS replication failed to transfer all the children of the snapshot homeArchive@blahblahblah. The error was: cannot unmount ‘/var/db/system/syslog-blahblahblah’: pool or dataset busy Broken pipe.
My System Dataset is located on: homeArchive.
Underneath that, I have several other datasets in a folder structure. However, my Periodic Snapshot Task takes a snapshot of homeArchive (with recursion), not of the child datasets directly.
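One way to see why full filesystem replication trips here is to list everything under the pool: the system dataset lives alongside the data datasets, so a recursive send of the pool’s top-level dataset tries to include (and unmount) it while it’s in use. A sketch, assuming the pool name from the post:

```shell
# List every dataset under the pool with its mountpoint; the .system
# dataset (which backs /var/db/system, including syslog) appears as a
# child and is busy while the server is running
zfs list -r -o name,mountpoint homeArchive
```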
Do I need to make sure there are no other tasks running before I can replicate?
Cheers,
Edit – won’t a Full System Pull delete my system dataset and my user information?
Edit 2 – I think I’m on the right track. I ran a new set of Snapshots on the old server last night and then modified the Replication task this morning. It’s about 65% complete.
The new Replication task replicates the datasets below my root folder (i.e., not including the System Dataset). When I tried to do this before, I ran into an error: even though I had created Periodic Snapshots, their status indicated “Pending” rather than “Finished”, so the Replication task said it couldn’t find the snapshots. But today, after creating new snapshots last night, Replication seems to be working.
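Before pointing a replication task at a dataset, it can help to confirm that the periodic snapshot task has actually produced snapshots for it — a “Pending” snapshot hasn’t been taken yet, so there is nothing for replication to find. A hypothetical check (dataset name is an example):

```shell
# List existing snapshots for the dataset you want to replicate;
# an empty result means the replication task has nothing to send yet
zfs list -t snapshot -r homeArchive/mydata
```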
Edit 3 – Replication seems to have worked. I’ll mark this thread solved after I have had a chance to test the datasets and verify they actually contain data.
I have a third TrueNAS server, which I call my “Long-Term Backup,” to which I replicate my data every six months. I keep this machine in a room safe from fire and keep it powered down.
When I power it up, I create a new dataset, calling it something like “LT_Stor_06_2024.” When I create the replication task, I select the sub-datasets listed below the primary dataset (check boxes) on my main server to replicate from, as they have a current snapshot from which to pull. I set it as a “Run One Time” recursive task and select the new dataset on the target system.
This results in a complete pull of the data, including the sub-datasets and all their folders on that storage array, for a comprehensive replica.
This has worked well for me; I am currently in my fourth cycle of following this process. The Long-Term Target System is a simple JBOD of large disks. If I had a fire or other disaster, HOPEFULLY my data would still be available, and MY data is not sitting on a server array somewhere in the world where I have no control over prying eyes. Can you say Air Gap!