Move pool to new HDDs - Best Option (Electric Eel)

Good evening,

Long time no see! I just found out that we have a new forum, as the old one is read-only. I guess that's a good thing: the server has been running since 2018 and there was no need to read any how-tos since then. Thanks TrueNAS :wink:

All those years later, I'm now ready to replace all five 2 TB drives with three 12 TB HDDs. As I'm moving from 13.0 to 24.10 (Core to SCALE), I'd like to do a fresh install, new plugins, etc.

I've read many articles over the last few weeks, and Electric Eel brings a couple of new ZFS features, but just to be sure I do this right, I opened this thread.

Backups of the important data are in place, and I'm able to connect all drives to the system simultaneously. Burn-in will be performed before the data transfer via the script GitHub - Spearfoot/disk-burnin-and-testing: Shell script for burn-in and testing of new or re-purposed drives. Is that still the way to do it?
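For context, as far as I understand, the script essentially wraps SMART self-tests around a destructive badblocks pass, roughly like this (`/dev/sdX` is a placeholder, and badblocks wipes the drive, so only run this on empty disks):

```
# Long SMART self-test (runs in the background), then review the results
smartctl -t long /dev/sdX
smartctl -a /dev/sdX

# Destructive four-pattern write/read test; -b 4096 is needed on
# 12 TB drives to stay under badblocks' block-count limit
badblocks -b 4096 -ws /dev/sdX
```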

I will just install Electric Eel on the OS drive via USB. (Yes, upgrading via the GUI is possible, but I guess fresh is fresh: no traces of old configs etc. Or does this even matter?) The old pool will simply be imported, and the new pool created from scratch.

Please correct me if I'm wrong, but replacing drives one by one, resilvering, and ending up with the 12 TB capacity is not possible, since I'm moving from 5 HDDs to 3.
The ZFS device-removal feature does not work with RAIDZ1.

So if those assumptions are correct, is a replication task from the old dataset to the new one with the Full Filesystem Replication box checked, or creating new datasets and replicating child by child, the fastest/best option? Then disconnect the old pool and remove the old HDDs.

Is ZFS send/receive the same as replication?

Thanks for helping out.

PS: Is it true that you can't bind a network card/interface to a specific app/Docker container? E.g., the Nextcloud app reachable only via a specific network card. If it is possible, can someone please point me in the right direction?

I would advise against running 3x 12 TB drives in RAIDZ1.

I suggest reading iX's ZFS Pool Layout White Paper and Assessing the Potential for Data Loss | TrueNAS Community.

Fresh install is good. Do note that SCALE requires an SSD as a boot drive.

Correct: shrinking from five disks to three via resilvering isn't possible, and device removal doesn't work on RAIDZ vdevs.

More or less; the GUI replication task uses ZFS send/receive under the hood.
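A minimal manual equivalent of a full recursive replication, just to illustrate (pool and snapshot names are placeholders):

```
# Snapshot the entire old pool recursively
zfs snapshot -r oldpool@migrate

# Send a full replication stream into a child of the new pool
# (-R preserves child datasets, properties and snapshots; -u skips mounting)
zfs send -R oldpool@migrate | zfs receive -u newpool/old
```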

Afaik, yes, apps can't currently be bound to a specific interface. Things might change in the future.

I'd do this for maximum security (a command sketch follows the list):

  1. Create a mirrored pool A of two 12 TB drives.
  2. rsync the contents of your old RAIDZ1 pool to the mirror.
  3. Create a striped pool B with the remaining 12 TB disk.
  4. Using rsync, copy pool A to pool B from time to time.
  5. Create a RAIDZ2 pool C from your old 2 TB disks, giving you 6 additional TB for less important stuff that isn't covered by a backup disk.
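Roughly, on the command line (the GUI is the supported way to create pools on TrueNAS; device and pool names here are placeholders, just to illustrate the flow of steps 1-4):

```
# 1. Two-way mirror of two 12 TB drives
zpool create poolA mirror /dev/sda /dev/sdb

# 2. Copy the old pool's data, preserving hardlinks, ACLs and xattrs
rsync -aHAX --info=progress2 /mnt/oldpool/ /mnt/poolA/

# 3. Single-disk (striped) pool on the third drive
zpool create poolB /dev/sdc

# 4. Periodic backup copy from pool A to pool B
rsync -aHAX --delete /mnt/poolA/ /mnt/poolB/
```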

IMO you should just buy another 12 TB drive and do 4x 12 TB RAIDZ2.


Interesting. To be honest, I hadn't thought much about the much longer resilvering time with these 12 TB models; I see your points. In those 12-13 years of use (first-gen WD Red HDDs) I only had one drive act up a bit, but none died. Which doesn't mean I'm safe with the new Seagate models, I know.

The boot drive is a 120 GB M.2 SATA SSD, so I'll be fine in this regard.

In the first iteration of this base hardware I ran FreeNAS 11.2 with three 1 Gbit NICs in LACP and one 1 Gbit NIC for the cloud on a different VLAN, and those five drives also added performance. Now that most users of this build connect via Wi-Fi (laptops) or TVs with only 100 Mbit, read/write performance isn't that important anymore. TrueNAS 25 (Fangtooth) is supposed to support network configuration for apps in the UI, so maybe it's better to wait until then and get the fourth HDD.

The really important documents/pictures etc. are backed up on another system in the same house and again at my own place. That's why I thought fewer HDDs = less energy consumption and lower upfront costs.

Is the burn-in test via the script mentioned above still the current way of doing it? I'll do that first anyway, so I still have time to decide. I saw a video from Lawrence Systems the other day saying that when adding drives to a pool it's currently not possible to change the RAIDZ level, but that might become possible in the future?! Not sure about using a mirror; I guess Z2 would be the way to go.

I have seen very few people use that script in the past few years, but that does not mean it's no longer suitable for the job.

If I were to make a "forum's suggestion" based on the usual recommendation, I would likely point to jgreco's solnet-array-test | TrueNAS Community.

Do note that you can create a degraded RAIDZ2 pool with 3 drives if you don't want to, or aren't able to, buy the 4th disk right now.
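The usual trick is a sparse file standing in for the missing fourth disk, which you offline right after creation so the pool starts out degraded. A rough sketch (shell-only, not exposed in the GUI; device names and paths are placeholders, at your own risk):

```
# Sparse placeholder the size of a real member disk
truncate -s 12T /root/fake-disk.img

# Create the RAIDZ2 pool from 3 real disks plus the placeholder
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /root/fake-disk.img

# Offline and delete the placeholder; the pool is now DEGRADED but usable
zpool offline tank /root/fake-disk.img
rm /root/fake-disk.img

# Later, resilver the real 4th disk into its place
# zpool replace tank /root/fake-disk.img /dev/sdd
```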

I would not count on being able to change the RAIDZ level for, at the very least, the next couple of years.