How to create a 4-drive RAIDZ1 pool from a 1-drive stripe pool?

Hear me out: I know that’s not technically possible.

I am currently running Plex and *arr stack apps on my TrueNAS SCALE server. I set up the TrueNAS config when I had only one 8TB disk, basically storing everything in my single-disk zpool (tank) with the intention of expanding later. More recently, I bought 3x 8TB factory-recertified drives, intending to create a RAIDZ1 pool, somehow migrate the data from ‘tank’ to my new pool, wipe the old drive, and add said drive into my RAIDZ1 pool to finally create a 4x 8TB RAIDZ1 pool.

The problem, as I understand it, is that I would:

  1. Have to (successfully) migrate approximately 5.7TB of data into my new pool (it has failed on me before, and takes a full day, if not longer).
  2. After re-adding the old drive, complete a full resilvering of all the data I’d just migrated.

This sounds really annoying.

My alternative plan would be to transfer all the data to some kind of intermediary drive, wipe the source drive, create the Z1 pool (for the first time) with all four drives, and migrate the data from the intermediary to my new pool.

That sounds considerably less time consuming and vaguely easier on the disks. Only problem with the second option is that I don’t have ready access to a drive that big, otherwise I’d probably just use it to create the new pool, lol.

The last time I migrated the data (one of my new drives was faulty, so I sent it back and got a new one), I tried to use migration tasks to move things cleanly. It sure did move things, but they weren’t nested in the proper datasets at the destination.

Then I tried to use rsync, which was even more of a mess. I think the problem with my initial migration task is that it didn’t want me to do a full-disk migration, something about a hidden TrueNAS config dataset that didn’t have a snapshot. I instead created a separate task for each dataset: ‘data’ and ‘configs’. This effectively just deposited my files in the root dataset, which was not what I wanted.

Firstly: how should I go about migrating the data? What tool should I use? What settings might I have misconfigured initially?

Secondly: what method should I use to create this frankensteined Z1 pool?

System Specs:
OS Version: TrueNAS-SCALE-24.10.2.2
Product: OptiPlex 9020
Model: Intel(R) Core™ i7-4770 CPU @ 3.40GHz
Memory: 16 GiB

Your choice is between:

  1. Creating a 3-wide raidz1 with the new drives, migrating, and then expanding to 4-wide with the old drive; or
  2. Creating a 4-wide raidz1 with the new drives and a sparse file, offlining the sparse file, migrating to the (degraded) pool, and then using the old drive to replace the missing sparse file.

Option 2 involves the CLI and has no redundancy during the transfer, but ends up in a cleaner situation, with proper space reporting and proper spreading of data chunks.
Option 1 avoids the CLI, but leaves the chunks distributed as 3-wide stripes in a 4-wide pool.
Both options involve significant activity after adding the fourth disk, so forget about the “annoyance”: you cannot escape it! (Option 1 is the worst if you add a further rebalance, or a future “zfs rewrite”, to align the on-disk structures with the actual width of the vdev.)
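For reference, option 1’s expansion step can be done from the GUI on 24.10, but under the hood it is a single attach against the raidz vdev. A minimal sketch, assuming a pool named newpool, a vdev labelled raidz1-0, and the old disk showing up as /dev/sdd (all placeholder names):

```sh
# RAIDZ expansion (shipped with SCALE 24.10 "Electric Eel"): grow the
# existing 3-wide raidz1 vdev to 4-wide by attaching the old disk.
zpool attach newpool raidz1-0 /dev/sdd

# Existing blocks keep their 3-wide stripe geometry until rewritten,
# which is why the rebalance/rewrite pass mentioned above is suggested.
zpool status newpool
```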

This is impossible to answer without knowing in detail how your data is structured and how you defined the tasks the first time.


Thank you so much for your response. I’m leaning towards the second option. How would I go about making a sparse file?

Fair question… There was a resource describing how to create degraded raidz vdevs on CORE.
truncate is present in SCALE and works the same way to create the sparse file.
As for partitions, that resource would need an update to reproduce what SCALE now does.
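For what it’s worth, here is a minimal sketch of that recipe translated to SCALE, assuming the three new disks appear as /dev/sda through /dev/sdc and the pool is called newpool (placeholder names; TrueNAS itself builds pools on partitions via its middleware, so a hand-built pool like this should be exported and re-imported through the GUI afterwards):

```sh
# Create a sparse file to stand in for the missing fourth disk.
# It must be no LARGER than the real 8 TB drives, or the final
# replace will fail; check exact sizes with: lsblk -b -d -o NAME,SIZE
truncate -s 7450G /root/fake-disk.img

# Build the 4-wide raidz1 with the sparse file as the fourth member.
zpool create newpool raidz1 /dev/sda /dev/sdb /dev/sdc /root/fake-disk.img

# Offline the file right away so nothing is ever written to it.
# The pool now runs DEGRADED (no redundancy) until the real disk arrives.
zpool offline newpool /root/fake-disk.img

# Later, after migrating the data and wiping the old disk (here /dev/sdd):
zpool replace newpool /root/fake-disk.img /dev/sdd
```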

To copy data from the old pool to the new pool, use replication.
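Concretely, that means a recursive snapshot plus zfs send/receive; a Replication Task in the GUI wraps the same mechanism. A hand-rolled sketch, assuming the pools are named tank and newpool:

```sh
# Take a recursive snapshot of everything on the source pool.
zfs snapshot -r tank@migrate

# Send the whole hierarchy (-R preserves child datasets, snapshots, and
# properties) into the new pool. -u on the receive side skips mounting,
# which avoids mountpoint collisions with the still-imported source pool.
zfs send -R tank@migrate | zfs receive -u newpool/tank
```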

Thank you all so much for your advice! I ended up buying another 8TB drive just to save a lot of headache and drive stress, and after a fair bit of migration time, it’s all working perfectly now! To be clear, I was a little scared to mess around with sparse files haha, so I figured it was best to keep it simple.
