Swapping to new drives with limited SATA ports

Hello, I’d like to replace my current 8-wide RAIDZ2 vdev of 4TB drives with a new 4-wide RAIDZ2 vdev of 14TB drives. However, I only have two spare SATA ports in my PC. What’s the best way to transfer the data from the old drives to the new ones to complete this replacement? Would it be possible to create a pool in TrueNAS using external USB adapters?

I assume that you don’t have a spare PC with more SATA ports; in that case I’d try to get a cheap PCIe HBA.
There are some very affordable ones from LSI which have 4 SATA ports. :slight_smile:

I would not try to use USB adapters, as these are known to be unstable (and slow), which is not something you want for this procedure.

This is effectively a duplicate of Replacing drive with a larger drive

The 2 questions/topics are actually different. :slight_smile:

In this topic here he asks how he can go from:

  • an 8-wide Z2 pool with 4TB disks
  • to a 4-wide Z2 pool with 14TB disks

with the challenge of not having enough SATA ports to connect the new 14TB disks.

The thread you linked to is about replacing disks inside the existing pool (8-wide Z2 with 4TB disks), where one of his disks is failing.

The 8x4TB RAIDZ2 pool has a usable size of 6x4TB, which is 24TB. To make migration easy, you therefore need the new 14TB RAIDZ2 pool to have at least 2 drives’ worth of usable space (2x14TB = 28TB), i.e. to do it normally with full Z2 redundancy you would need at least 4 slots free. (See the later caveat if you have < 10-11TB of data without snapshots.)

I would not use USB drives - they are often subject to USB disconnects, so you might find the data transfer problematic. However, I believe there is a way to achieve this internally if you are prepared to live with single redundancy during the migration, by temporarily switching to the equivalent of RAIDZ1 redundancy. If all drives are in good health I would consider that an acceptable short-term risk, but you need to make your own judgement on that. This is how I would do it…

  1. Offline a drive from the existing 8xRAIDZ2 pool, putting it into a degraded state (with single redundancy) and leaving 3 spare SATA slots. (You say in your other post that one of these drives has started showing signs of failing - use that drive as the one to offline.)

  2. Physically replace that offlined 4TB drive with a 14TB drive and install 2 more 14TB drives in the spare slots.

  3. Create a new RAIDZ2 pool from the CLI using the 3x14TB drives plus a sparse file on the old pool, then export the new pool from the CLI and import it in the UI. You then offline and delete the sparse file (rough CLI sketch below).

You now have two Z2 pools, both of which are degraded to the effective equivalent of Z1.
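For reference, the CLI side of steps 1 and 3 would look roughly like this. Pool names, paths and device names (oldpool, newpool, ada*) are placeholders - adjust them to your system and double-check which disk you are offlining:

```
# Step 1: offline the (failing) 4TB drive - the old pool drops to single redundancy
zpool offline oldpool /dev/ada0          # placeholder; use the failing disk's id

# Step 3: create a sparse file at least as large as the new drives to stand in
# for the missing 4th member (placed on the old pool, as described above)
truncate -s 14T /mnt/oldpool/placeholder.img

# Build the new RAIDZ2 from the three real 14TB drives plus the placeholder file
# (add -f if ZFS complains about mixing a file vdev with whole disks)
zpool create newpool raidz2 /dev/ada1 /dev/ada2 /dev/ada3 /mnt/oldpool/placeholder.img

# Hand the pool over to the UI (Storage > Import Pool)
zpool export newpool

# After importing in the UI: offline the placeholder so nothing is written to it,
# then delete the file - the new pool is now a degraded Z2 (single redundancy)
zpool offline newpool /mnt/oldpool/placeholder.img
rm /mnt/oldpool/placeholder.img
```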

  4. Then you do a full replication from the old pool to the new pool.

  5. Finally destroy the old pool, physically swap one of the old 4TB disks for the 4th 14TB drive, and resilver it into the new pool in place of the sparse file (commands sketched below).

    If you are exceptionally cautious then, given that you now have two full copies of your data, you could instead offline a 2nd 4TB drive, leaving the original pool in a fully degraded but usable state whilst you resilver the new pool.
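And the CLI equivalent of steps 4 and 5, again with placeholder names (the replication can equally be done with a Replication Task in the UI):

```
# Step 4: snapshot everything recursively and replicate the whole pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool

# Step 5: once the copy is verified, retire the old pool, physically swap one
# old 4TB disk for the 4th 14TB drive, and resilver it in place of the placeholder
zpool destroy oldpool
zpool replace newpool /mnt/oldpool/placeholder.img /dev/ada4
zpool status newpool     # watch the resilver; the pool returns to full Z2 when done
```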

Note: If your current data without snapshots is < 10TB-11TB and you are not worried about keeping the snapshots, then you can probably achieve this with a 3x14TB RAIDZ2 and use ElectricEel’s RAIDZ expansion to add the 4th drive later. You can either keep the existing pool fully redundant and create a degraded 3-wide RAIDZ2 in the 2 spare slots (using a sparse file as above), or degrade the existing pool and create a full 3x14TB RAIDZ2. However, on the whole, I would still probably go with the original approach outlined above because it avoids the need for rebalancing after the RAIDZ expansion.
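For completeness, if you did go the 3-wide route, the later expansion on ElectricEel (OpenZFS RAIDZ expansion) is a single attach, done either through the UI or roughly like this (pool and vdev names are placeholders):

```
# Attach the 4th 14TB drive to the existing raidz2 vdev; existing data keeps
# its old data:parity ratio until rewritten, hence the rebalancing caveat
zpool attach newpool raidz2-0 /dev/ada4
```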

2 Likes

Just a note.
Considering that ZFS RAIDZ expansion is quite a new feature, I’d give it a bit more time before using it on a production system.

Currently you might end up with less space than expected after expanding a pool that already has data on it. Even running the rebalancing script (which might be a scary step for a novice user in itself) does not seem to provide the expected space increase.
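For context, “rebalancing” here just means rewriting every file after the expansion so its blocks are re-striped at the new width. The community script automates something along these lines (a simplified sketch of the idea, not the actual script):

```
# Rewrite a file in place so ZFS reallocates its blocks at the new data:parity ratio
cp -p somefile somefile.rebalance
rm somefile
mv somefile.rebalance somefile
```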

Using USB enclosures to do an online replace will always be preferable to doing an offline replace…

H*ll, I’ve run 16 drives via USB as I was building out my SAS JBODs, since that was what I had on hand… it was stable and worked well (if a bit slow). Just don’t use hubs or anything too out there…

What I used in particular was Yottamaster 4-bay units; as long as you aren’t trying to use them as 4Kn drives with Linux (which doesn’t work due to a Linux bug specifically regarding non-ATA/SCSI devices), they are fine, especially for temporary use.

I’m considering this:

  • I have a RAIDZ2 vdev using eight 4TB drives, but I’m not utilizing the full capacity.

  • Step 1: Mirror Setup
      • Add two new 14TB drives in a mirror configuration.
      • Copy existing data from the current RAIDZ2 vdev to these mirrored drives.

  • Step 2: Drive Removal
      • Remove four of the 4TB drives to free up space in the system for additional configurations.

  • Step 3: Temporary RAIDZ1
      • Create a new RAIDZ1 vdev using the remaining four 4TB drives.
      • Copy data from the 14TB mirror back to the RAIDZ1 vdev.

  • Step 4: Final RAIDZ2 Pool
      • Add the other two 14TB drives, and reconfigure the pool to form a full RAIDZ2 setup with four 14TB drives.

I think this will work for me; I appreciate the suggestions greatly! This way I won’t have to do a RAIDZ expansion.
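Roughly what I have in mind for the copy steps, with placeholder pool/device names (I’d probably drive most of this through the UI and replication tasks):

```
# Step 1: two new 14TB drives as a temporary mirror, then copy everything over
zpool create tempmirror mirror /dev/sdi /dev/sdj
zfs snapshot -r oldpool@step1
zfs send -R oldpool@step1 | zfs recv -F tempmirror

# Steps 2-3: the old pool has to be destroyed at this point (the data lives only
# on the mirror); rebuild four of the 4TB drives as a temporary RAIDZ1 and copy back
zpool destroy oldpool
zpool create temprz1 raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs snapshot -r tempmirror@step3
zfs send -R tempmirror@step3 | zfs recv -F temprz1

# Step 4: break the mirror, build the final 4-wide RAIDZ2 from all four 14TB
# drives, and copy back one last time
zpool destroy tempmirror
zpool create finalpool raidz2 /dev/sdi /dev/sdj /dev/sdk /dev/sdl
zfs snapshot -r temprz1@final
zfs send -R temprz1@final | zfs recv -F finalpool
```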

1 Like

If this approach works for you, even though it requires copying the same data 3 times, then great.

My proposed approach requires only a single copy operation and doesn’t require ElectricEel or RAIDZ expansion or USB drives.

If you can install 2 drives (for the mirror) then you can instead make a degraded 4-wide RAIDZ2 with just those two drives.

Replicate everything. Remove the old pool (a backup) and then replace the two degraded disks with new disks.

Job done. One copy.
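Concretely, something along these lines (pool names, paths and devices are placeholders):

```
# Two real 14TB drives plus two sparse files make a 4-wide RAIDZ2
truncate -s 14T /root/fake0 /root/fake1
zpool create newpool raidz2 /dev/sdi /dev/sdj /root/fake0 /root/fake1

# Offline both files straight away (the pool is now degraded, with no parity left)
zpool offline newpool /root/fake0
zpool offline newpool /root/fake1

# Replicate, destroy the old pool, then resilver the last two 14TB drives in
zpool replace newpool /root/fake0 /dev/sdk
zpool replace newpool /root/fake1 /dev/sdl
```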

1 Like

The only problem with this is that after removing the old pool, and until the two drives have resilvered, you don’t have any redundancy. Which is why I partially degraded the old Z2 pool to one redundant drive and also created the new pool as a partially degraded Z2 with one redundant drive, i.e. at all times you have at least one level of redundancy.

The redundancy is an entire backup pool with dual-drive redundancy… i.e. the old pool :wink:

But yeah, an alternative approach is to remove a single drive from the original pool and use single-drive redundancy on both pools.

The first pool’s removed drive can always be re-added.
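If you do go that way, bringing the offlined drive back is a one-liner, and ZFS only needs to resilver what changed while it was out (placeholder names again):

```
# Re-add the temporarily offlined disk to the original pool
zpool online oldpool /dev/ada0
```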

And you can then debate which is safer, if you want.

1 Like

There’s another limitation I’ve overlooked: I only have 9 SATA power connectors. Without using a Molex-to-SATA adapter, my plan would likely be as follows:

  1. Remove a drive from the old pool – This will free up a power connection.
  2. Connect the two 14TB drives – Use the available power connectors to set up these drives.
  3. Copy the data – Transfer all necessary data to the 14TB drives.
  4. Remove additional drives – Once the data is safely copied, remove more of the older drives as needed.
  5. Add the remaining 14TB drives – Complete the setup by adding the rest of the larger drives, using the power connections freed up in the previous step.

This seems like it may be the easiest plan given the situation, but I’d also rather not push my luck… I’ve ordered the drives.

I think the question now is whether to take out 1 or 2 drives from the old pool so I can put in 2 or 3 of the new drives, or bite the bullet and get a SATA PCIe card + Molex adapters. So many options, not enough experience to determine the best one.

1 Like

This is just me, but (unless I have a backup of all my data) the procedure of working with degraded pools would be too scary for me.

If financially possible, I’d get a cheap PCIe card that provides the required SATA ports, or one of those M.2 adapters that have SATA ports on them. I’d not run my server 24/7 with such an adapter, but I’d use it to do a migration (copy) of my data from the old pool to the new pool.
That will also be faster than shuffling disks/data around between (temporary) pools.

Yes - I would agree that a temporary HBA providing more SATA ports, thus allowing you to migrate without any degradation, would be preferable, but it assumes that your existing hardware can accommodate one.

In my suggested approach, although you will temporarily be running with degraded pools, they are only partially degraded, i.e. they still each have one redundant drive with all the benefits that provides, just not the extra benefit of double redundancy.

Sorry - but this is NOT the case.

From the point that you destroy the old pool until the point that the first of the resilvers has completed, you have ZERO redundancy.

The simple answer is that, for the sake of a couple of $, you should go buy a power cable splitter.

To limit your migration to using only 9 drives will mean either:

  1. reducing the level of redundancy on one pool to zero (not advisable); or
  2. doing 3 data migrations instead of 1 (less risky from a redundancy perspective, but significantly longer to achieve, and with more manual steps comes a greater risk of human error).

In my suggestion I never destroyed the pool. I was under the belief that the old pool had different-sized disks.

You don’t have to destroy the original pool…

1 Like