Migrate to New Hardware

I have completed the burn-in of a new [core] NAS.

I would like to migrate my data from the old NAS to the new NAS, and then repurpose the old NAS as a remote backup machine.

I’m sure there is a guide on how to do this, but I haven’t been able to find it. Can someone please provide links to resources for me to read?

Important differences between the hardware:

  • Number of hard drives
  • Size of hard drives
  • Operating system

Old NAS:

Release: TrueNAS-12.0-U8.1
Case: Dell T-310
Socket: LGA1156
Chipset: Intel 3420
CPU: Xeon X3450 / 2.66 GHz, 8M Cache, Turbo, HT
RAM: 32 GB (4x8GB) DDR3-1066 (PC3-8500) ECC Registered DIMM
SAS: Dell H310, 6Gbps SAS HBA w/ LSI 9211-8i P20 IT Mode
SAS Backplane: Dell N621K
Power Supply: Dell, non-redundant 375W
Storage Pool: RAID-Z2 - 5x4TB WD Red Pro WD4003FFBX, 7200 rpm
Boot Pool: Mirrored - 2x120 GB Crucial 2.5" Internal SSD
UPS: none
Built: 2020-08
Status: onsite, powered 24/7

New NAS:

Release: TrueNAS-13.0-U6.1
Case: Fractal Define R5
Motherboard: Supermicro, MBD-X11SSM-F Micro ATX
Socket: LGA1151
Chipset: Intel C236
CPU: Xeon E3-1230 v6 3.5GHz
RAM: 32 GB (2x16GB) DDR4-2400 (PC4-19200) ECC UDIMM
SAS: none
Power Supply: Seasonic Prime PX-500 80Plus Platinum 500W
Storage Pool: RAID-Z2 - 6x6TB WD Red Plus WD60EFPX, 5400 rpm
Boot Pool: Mirrored - 2x32GB Supermicro SATA DOM
UPS: none
Built: 2024-01
Status: testing complete


The easiest[1] solution would be to hook up both systems to the same switch and replicate your data to the new NAS with a remote replication task.

Carefully read over the possible settings; in particular, pay attention to “Full Filesystem Replication” and set the destination read-only policy to “Ignore”.

Have a look at the page and familiarize yourself with the process. Maybe create a test dataset on the old system, replicate it, and see how it goes.

When setting up the snapshots and replication tasks, I prefer to do it per dataset and not just replicate the whole pool at once.
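For orientation, what such a pull replication task does is roughly equivalent to the following CLI sketch, run from the new NAS. The hostname `oldnas`, pool names `tank` (old) and `bigtank` (new), and the snapshot name are placeholders, not values from this thread:

```shell
# Rough CLI equivalent of a pull replication task, run from the NEW NAS.
# "oldnas", "tank", "bigtank", and the snapshot name are placeholders.

# 1. Take a recursive snapshot on the old system (covers child datasets):
ssh root@oldnas zfs snapshot -r tank@migrate-2024-01

# 2. Pull the full replication stream (-R: children + properties) and
#    receive it under the new pool. -u avoids mounting the received
#    datasets immediately; -F allows the destination to be rolled back.
ssh root@oldnas zfs send -R tank@migrate-2024-01 | zfs recv -Fu -d bigtank
```

The GUI task adds scheduling, resumability, and snapshot retention on top of this, so prefer it for the real migration; the sketch is just to show what is moving where.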

  1. The quickest solution would be to just swap out the hard drives and import the pool in the new system, but then you’d miss out on the upgrade (5x4 TB to 6x6 TB). ↩︎
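That drive-swap alternative would look roughly like this on the CLI (the pool name `tank` is a placeholder; the TrueNAS GUI’s Storage → Import Pool does the import step for you):

```shell
# The "swap the drives" alternative: move the disks, then import the pool.
# "tank" is a placeholder pool name.

# On the old system, cleanly detach the pool before pulling the disks:
zpool export tank

# On the new system, after installing the disks, list importable pools...
zpool import

# ...then import by name:
zpool import tank
```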


Exactly what I was looking for. Thank you!


I’ve been running into problems.

I set up a Replication task to:

  • Pull from the old server to the new server
  • “(Almost) Full Filesystem Replication”
  • “Destination Dataset Read-Only Policy” = Ignore
  • “Replication from Scratch”

The replication ran for about 3 hours (about 5%) then failed with…

Full ZFS replication failed to transfer all the children of the snapshot homeArchive@blahblahblah. The error was: cannot unmount ‘/var/db/system/syslog-blahblahblah’: pool or dataset busy Broken pipe.

My System Dataset is located on: homeArchive.

Underneath that, I have several other datasets in a folder structure. However, my Periodic Snapshot Task takes a snapshot of homeArchive (with recursion), not of the child datasets directly.

Do I need to make sure there are no other tasks running before I can replicate?
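One way to see what the recursive snapshot actually covers is to inspect the pool from a shell. The commands below assume the TrueNAS system dataset is the usual `.system` child of the pool (mounted under /var/db/system, which matches the path in the error message); `homeArchive` is the pool name from the posts above:

```shell
# Confirm the system dataset's children live under the source pool:
zfs list -r -o name,mountpoint homeArchive/.system

# List the snapshots the recursive task created, children included,
# to see whether .system and its children were swept up in them:
zfs list -r -t snapshot homeArchive | less
```

If `.system` appears in the snapshot list, a full-filesystem replication will try to transfer (and on the destination, unmount/remount) it, which is consistent with the “pool or dataset busy” failure.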


Edit – won’t a Full System Pull delete my system dataset and my user information?

Edit 2 – I think I’m on the right track. I ran a new set of Snapshots on the old server last night and then modified the Replication task this morning. It’s about 65% complete.

The new Replication task replicates the datasets below my root dataset (i.e., not including the System Dataset). When I tried this before, I ran into an error: even though I had created Periodic Snapshots, their status showed “Pending” rather than “Finished”, so the Replication task said it couldn’t find the snapshots. Today, after creating new snapshots last night, replication seems to be working.
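For anyone reproducing this approach on the CLI, replicating the children individually while skipping the system dataset might look like the following sketch. Run from the new NAS; `oldnas`, the source pool `homeArchive`, the destination pool `bigtank`, and the snapshot name are placeholders:

```shell
# Replicate each direct child of the source pool separately, skipping
# the ".system" dataset. All names here are placeholders for this sketch.

for ds in $(ssh root@oldnas zfs list -H -o name -d 1 homeArchive \
            | tail -n +2 | grep -v '\.system'); do
    # -R carries each child's own descendants and properties;
    # recv -d recreates the path under the destination pool.
    ssh root@oldnas zfs send -R "${ds}@migrate-2024-01" \
        | zfs recv -Fu -d bigtank
done
```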

Edit 3 – Replication seems to have worked. I’ll mark this thread solved after I have had a chance to test the datasets and verify they actually contain data.

I have a third TrueNAS server, which I call my “Long-Term Backup,” to which I replicate my data every six months. I keep this machine in a fire-resistant room safe, and it normally stays powered down.

When I power it up, I create a new dataset, calling it something like “LT_Stor_06_2024.” When I create the replication task, I select the sub-datasets listed below the primary dataset (check boxes) on my main server to replicate from, as they have a current snapshot from which to pull. I set it as a “Run One Time” recursive task and select the new dataset on the target system.

The result is a complete pull of the data, including all the sub-datasets and their folders on that storage array.
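On the CLI, that six-monthly one-time pull might be sketched as follows, run from the long-term backup server. The host `mainnas`, source pool `tank`, example dataset `photos`, backup pool `backup`, and snapshot name are all placeholders following the naming scheme described above:

```shell
# Sketch of the six-monthly long-term pull, run from the backup server.
# All names are placeholders matching the scheme described above.

# Create the dated parent dataset on the backup pool:
zfs create backup/LT_Stor_06_2024

# Pull a selected sub-dataset recursively into it (-e places it under
# the target using only the last component of its source name):
ssh root@mainnas zfs send -R tank/photos@auto-2024-06 \
    | zfs recv -u -e backup/LT_Stor_06_2024
```

Repeating the send/recv pair per selected sub-dataset corresponds to the check-box selection described above.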

This has worked well for me; I am currently in my fourth cycle of this process. The long-term target system is a simple JBOD of large disks. If I had a fire or other disaster, HOPEFULLY my data would still be available, and MY data is not sitting on a server array somewhere in the world where I have no control over prying eyes. Can you say Air Gap!

Happy Trails