I am a former Synology user who has recently transitioned to TrueNAS, and I am seeking guidance on upgrading my current storage setup.
Currently, I have migrated four Western Digital Red 6TB drives from a Synology DS1517+ to a TrueNAS SCALE-based UGreen DXP4800+. I configured these drives in a RAIDZ1 array, which has been operational for the past two months.
Yesterday, one of the drives experienced an error, prompting me to consider my options for replacement and future expansion. Rather than replacing the faulty disk with another 6TB drive, I am interested in upgrading to larger disks—specifically, two 20TB IronWolf Pro drives. My goal is to migrate from my current four-disk array to a more power-efficient two-disk setup, while also leaving space to install two SATA SSDs in additional drive bays.
My initial plan is as follows (a rough command sketch follows the list):
1. Purchase two 20TB IronWolf Pro drives.
2. Replace the failed disk in the current RAIDZ1 array with one of the new 20TB drives.
3. Allow the array to resilver and rebuild.
4. Once resilvered, replace one of the remaining 6TB drives with the second 20TB drive.
5. Create a new VDEV (virtual device) using the two 20TB drives to maximize storage capacity.
6. Migrate all data from the existing RAIDZ1 to the new, expanded configuration.
7. Finally, delete the old RAIDZ1 device and expand the new mirrored setup using the freed space, thereby consolidating storage.
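For reference, my understanding is that steps 2 and 3 map to roughly the following at the ZFS level (the pool name `tank` and the device names are placeholders for my setup; I expect the SCALE UI would normally drive this):

```
# Sketch only: "tank" and the /dev names stand in for my actual pool/disks
zpool replace tank /dev/sdb /dev/sde   # step 2: swap the failed 6TB for a 20TB
zpool status tank                      # step 3: watch the resilver progress
```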
My questions are:
Is this approach technically feasible within TrueNAS SCALE?
Are there recommended best practices for performing this upgrade?
Should I consider an alternative method to achieve my objectives?
Thank you in advance for your assistance. I appreciate your guidance on ensuring a smooth transition.
Incidentally (if I dare say…), step 5 would destroy two drives in the RAIDZ1 array (can’t make a vdev with actively used disks), thus losing all data.
Create another pool (not just another vdev) with the two 20 TB drives and replicate all data to the new pool.
You may attach two of the old drives to USB adapters during the process to free bays.
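A minimal sketch of that replication, assuming the old pool is named tank and the new one bigpool (a Replication Task in the GUI accomplishes the same thing):

```
zfs snapshot -r tank@migrate                     # recursive snapshot of everything
zfs send -R tank@migrate | zfs recv -F bigpool   # replicate all datasets & zvols
```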
Note that steps 3 & 4 are more or less the same. You create a pool with at least 1 vdev, which can consist of a single 20TB HDD.
And step 5’s wording is a little odd. Normal interactions with ZFS data are through pools and datasets, not “old vdev to the stripe”. So a more appropriate wording would be:
Move the data from the old pool to the new pool, which copies all the datasets & zvols.
Did you mean a “mirror” of the new 20 TB drives instead of a “stripe”?
Otherwise the method can work this way.
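For illustration, with placeholder pool and device names, the single-disk pool and the later mirror conversion would look roughly like:

```
zpool create bigpool /dev/sdf           # new pool on one 20 TB disk, no parity yet
# ...move the data over, free up a bay, then attach the second disk:
zpool attach bigpool /dev/sdf /dev/sdg  # turns the single-disk vdev into a mirror
```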
Some remarks:
Buy ONLY CMR drives; ZFS does not like SMR drives.
Burn in your drives before use! (Especially if you want to use a single drive without parity in the beginning: 3x6TB is 18TB, so just copying all the data will take several days, even through a SATA 3 interface. I would not leave my system unprotected.)
There are some threads here that contain links to practical tools for doing this.
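For example, a typical burn-in pass per disk looks something like this (destructive, so only run it on drives with no data; /dev/sdX is a placeholder):

```
smartctl -t long /dev/sdX        # extended SMART self-test first
badblocks -b 8192 -ws /dev/sdX   # full write+verify pass; a day or more on 20 TB
smartctl -a /dev/sdX             # afterwards, check for reallocated/pending sectors
```

(-b 8192 keeps the block count under badblocks’ 32-bit limit on drives this large.)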
If you have access to any normal computer with six SATA ports, you can install a copy of TrueNAS on it, import your old RAIDZ1, create the new mirrored array with both (already tested) HDDs, and do the copy there.
Then just put the two 20TB drives back into your “production” NAS and use it from then on.
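A sketch of that sequence on the loaner machine (pool and device names are placeholders):

```
zpool import tank                              # import the old RAIDZ1
zpool create bigpool mirror /dev/sde /dev/sdf  # mirror on the two tested 20 TB disks
# ...replicate the data, then release both pools before moving drives back:
zpool export tank
zpool export bigpool
```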
Yes. But I’d rather suggest removing the unhealthy drive rather than a good one, and then putting it on a USB adapter so it can still provide redundancy.
I don't recommend using USB external drives as part of any array!
It will not work.
Believe me, I tried it…
My array fell apart almost every day.
Maybe he can use a single-drive pool as an external HDD to copy the files over to; but if he does not have any USB-to-SATA interfaces and does have a spare PCIe slot in the device, then buying a PCIe SATA controller, like a dual-channel one, is a better solution. (Connect the two 20TB drives to this card and create the mirror there. It is better than USB external drives.)
Neither do I recommend it on a permanent basis.
But during a resilver, or a replication from one pool to another, this is a perfectly valid way to temporarily maintain redundancy despite not having enough SATA ports for all the drives involved. It works, and I did it.
Well, I would only recommend it if it is a single drive!
If it is part of any array, it is most likely to have issues.
18TB is a LOT of data!
It will take days to copy it from the current array to the new drive.
That means that a failure during the copy is quite likely, and then he would need to scrub the pool, which again can easily take days at this size.
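Rough numbers: even at a sustained 150 MB/s, 18 TB works out to 18,000,000 MB ÷ 150 MB/s ≈ 33 hours for a single sequential pass under ideal conditions; real-world small files, checksumming, and the follow-up scrub push the total well into days.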
I still think that a cheap two-channel PCIe SATA controller card is the best solution for this.
I don't have a PCIe slot in my DXP4800 Plus, correct.
I went ahead with the solution I proposed myself, as confirmed by Arwen.
I now have the new pool with a single disk and all the data replicated (around 10 TB; I had 6 TB free). I’m currently running a scrub of the new pool before I move the pointers from my shares to the new dataset.
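For the record, the CLI equivalent of what I'm running (pool name is a placeholder) is just:

```
zpool scrub bigpool       # started from the GUI in my case
zpool status -v bigpool   # shows scan progress and any errors found
```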
It’s worth mentioning that I have a full Backblaze backup as well, so I think the risk of running both the original pool and the new one without redundancy is somewhat mitigated.
You may not need to do that if you rename the new pool with the name of the old one; there’s a Resource for the procedure.
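At the ZFS level the rename is just an export and a re-import under the old name (sketch with placeholder names; on TrueNAS, follow the Resource's procedure, since the middleware tracks pools too):

```
zpool export bigpool
zpool import bigpool tank   # re-import the new pool under the old pool's name
```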
As to editing your own posts, have you followed the tutorial of our resident @TrueNAS-Bot?