Back up data, destroy pool, restore from backup - what is the best way?

Hi, I want to re-create my pool from scratch. For that purpose I want to rent a Hetzner dedicated server (it's some obscure config from the server auction, but with 4×10 TB, which is enough for my data even in a 4-way mirror), transfer all my data to that server, then buy some new drives, create a new pool, and restore my data.

The question is: what is the best way to do this? Should I use rsync, replication, or something else?

I currently have a 4×4 TB RAIDZ1 that was created using RAIDZ expansion from 3×4 TB. I want to add another 4 TB drive and create the pool from scratch to avoid problems with incorrect space calculations and the need to rebalance my data.

So long as the second system is running ZFS, you can't beat replication. If not, then rsync is the way.
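For reference, replication over SSH boils down to something along these lines (pool and dataset names here are just placeholders for your setup; the TrueNAS GUI replication task wraps the same send/receive for you):

# take a recursive snapshot, then stream it to the remote pool
zfs snapshot -r tank/data@migrate1
zfs send -R tank/data@migrate1 | ssh root@remote-server "zfs recv -F newpool/data"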

The other option is to drop an external SAS HBA into your current system and acquire a JBOD. Pop your new disks in there and make your new pool, then locally replicate from pool A to pool B. That would be a much faster process, as the replication runs locally and is not bound by any network limitations.
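The local copy is the same send/receive, just without the SSH hop; a rough sketch, assuming both pools are imported on the same box (pool names are again placeholders):

zfs snapshot -r oldpool/data@move
zfs send -R oldpool/data@move | zfs recv -F newpool/data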


Yes, I’m planning to install TrueNAS on that server. Thank you

I'm only adding 1 new disk; the other disks are the old ones from the current pool, with data still on them. That's why I'm renting a server.


I hope you have a very fast upload and server! It will take a while to upload, and by the time it's done the copy will probably be out of date if you are still using the existing NAS. With replication you can send it again (incrementally); that second pass should be very fast, and then you are up to date.
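That incremental pass would look roughly like this (snapshot, pool, and host names are placeholders; the GUI replication task does the equivalent automatically):

# take a fresh snapshot and send only the changes since the first one
zfs snapshot -r tank/data@migrate2
zfs send -R -i tank/data@migrate1 tank/data@migrate2 | ssh root@remote-server "zfs recv newpool/data"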

Thanks everyone for the help. This is what I did:

The replication task was slow (20 MiB/s) and can't be stopped without a reboot or killing some processes via the shell (there is no button for it in the GUI, so this is what people suggest on the forum).

I tried rsync. Slow as well, the same 20 MiB/s.
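For reference, that was a plain single-stream rsync over SSH, something like the following (host and destination path are illustrative):

rsync -aP nas/ root@server-ip:/mnt/hetzner/nas-rsync/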

So I found a solution to the speed problem using rclone. I set up my remote TrueNAS server as a "remote" in rclone and started copying files with this:

rclone copy nas hetznertruenas:/mnt/hetzner/nas-rclone/ --transfers=12 -P
(running in tmux)

nas - my current local folder
hetznertruenas - the name of my "remote" (set up using rclone config → SSH/SFTP)
--transfers=12 - how many concurrent file transfers to run; this is the key part
-P - show progress

Using this and running multiple file transfers at once, I was able to achieve 80 MiB/s (4 times as much as replication or rsync).

So I was able to transfer 1/4 of my NAS (2.4 TiB) to the remote server in 9 hours; about 27 hours to go to fully transfer the ~9.5 TiB.

After the transfer completes I'll check whether everything was transferred properly (probably by running rsync or something).
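One option for that check is to reuse the same rclone remote; rclone check compares file sizes and, where the backend supports it, checksums between the two sides (same paths as the copy command above):

rclone check nas hetznertruenas:/mnt/hetzner/nas-rclone/ -P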

There are no active applications on my TrueNAS, so files becoming out of date is not a concern.

Forgot to mention: when replicating within a trusted environment you can use netcat to speed up replication substantially.
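The netcat variant is roughly this; exact nc flags vary by implementation, and hosts, ports, and pool names here are placeholders. The stream is unencrypted, which is why it only suits a trusted network:

# on the receiving box, listen and pipe into zfs recv
nc -l 8023 | zfs recv -F newpool/data
# on the sending box
zfs send -R tank/data@migrate1 | nc receiver-host 8023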

This is not a trusted environment, unfortunately. I’m copying files over the internet to a remote server

But I still tried this just to see if there was any difference. It was better, but not by much: about 30–35 MiB/s (while rclone with --transfers=12 gives me about 80 MiB/s).

I have a gigabit connection on that server and 800 Mbps at home. My home connection seems to be fully utilized, so I think this is a good result.
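For context: 80 MiB/s is roughly 84 MB/s, or about 670 Mbit/s of payload, so with protocol overhead on top that is indeed close to saturating an 800 Mbps uplink.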