ZFS Pool Planning with second-hand disks

Hi all, I’m new to everything here. I was prompted to look into self-hosting and decided to put together a TrueNAS system after my alma mater severely limited our alumni accounts to 5GB, and I’m going with NextCloud as a Google Photos replacement. I was fortunate enough to find a system with TrueNAS Scale already installed for $100 on FB Marketplace, and the previous owner even threw in 35TB of old HDDs he wanted to get rid of anyway. Here’s the rundown on the state of the disks:

7 x 3TB (6 of them hovering around 42,000 hrs runtime; the last one refuses to run any SMART test, saying “Short offline self test failed [device not ready]”)
2 x 4TB (both hovering around 2000 hrs)
1 x 6TB (3600 hrs but failed SMART tests)

Besides NextCloud Photos, I mainly plan to use the system for file backups and possibly local media streaming with Jellyfin someday. It will be off most of the time except for the few hours a week that I plan to use it. Given how old the 3TB disks are, should I create one VDEV with high redundancy (Z2 or even Z3) for them and a separate mirrored VDEV for the 4TB drives?

I’m guessing the last 3TB drive that refuses to run tests might be dead, but I would appreciate pointers in case there’s a way to get SMART working. And for the 6TB drive, I can share the detailed smartctl output in case there’s a way to avoid its bad segment and keep it running for a bit longer.
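For reference, this is roughly what I’ve been running against the unresponsive drive (smartctl from smartmontools; /dev/sdX is a placeholder for the actual device):

    # Identity, health, attributes and self-test log for the drive
    smartctl -a /dev/sdX

    # SAS drives can be addressed explicitly as SCSI devices
    smartctl -a -d scsi /dev/sdX

    # Queue a short self-test; this is the one that comes back with
    # "Short offline self test failed [device not ready]"
    smartctl -t short /dev/sdX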

Thanks!

Those disks are BAD if they failed SMART. Avoid any HDDs that are SMR (Shingled Magnetic Recording); you want CMR (Conventional Magnetic Recording) drives only.

You need to give your hardware details, otherwise we are just guessing at what you have, how many SATA connections you have, etc.

My personal advice…

  1. Ditch both disks that have already failed SMART tests.

  2. Retest all the remaining disks with a SMART long test and a SMART conveyance test, and ditch any that fail (example commands after this list).

  3. Then check the SMART attributes for bad sectors.
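Per drive, that looks roughly like this (smartctl from smartmontools; /dev/sdX is a placeholder, and the self-tests run in the background):

    # Queue the long (extended) self-test; it can take several hours
    smartctl -t long /dev/sdX

    # Conveyance is an ATA-specific test, so it may only be offered on the SATA drives
    smartctl -t conveyance /dev/sdX

    # Once the tests finish, review the self-test log and the attributes
    # (Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable on SATA;
    #  the grown defect list on the SAS drives)
    smartctl -a /dev/sdX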

Any drives which still look OK…

  • RAIDZ2 for the 3TB drives (because of their age)
  • Either a separate mirror pool for the 4TB drives or include one in the RAIDZ2 pool and use the second as a hot spare for that pool (and accept that you will use only 3TB of the 4TB).
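As a rough sketch of what those layouts mean in zpool terms (pool and disk names are placeholders; in practice you would build this through the TrueNAS UI rather than raw commands):

    # Option A: 6 x 3TB plus one 4TB in a 7-wide RAIDZ2, second 4TB as a hot spare
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg spare sdh

    # Option B: RAIDZ2 of the 3TB drives only, 4TB pair as a separate mirrored pool
    zpool create tank raidz2 sda sdb sdc sdd sde sdf
    zpool create backup mirror sdg sdh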
4 Likes

Thanks for the guidance. I will probably go with RAIDZ2 for all the 3TB drives plus one 4TB, and keep the other 4TB as a hot spare, pending the SMART tests (unless you shouldn’t mix SATA and SAS drives in the same vdev).

Just to clarify, if I start out with 6x3TB and 1x4TB in a vdev but over time replace all the 3TB with 4TB, ZFS will automatically expand the vdev so the original 4TB is fully utilized, correct?
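From what I’ve read, that growth is gated by the pool’s autoexpand property (or a manual expand per disk), along these lines with placeholder names:

    # Allow the vdev to grow once every member has been replaced with a larger disk
    zpool set autoexpand=on tank

    # Swap a 3TB member for a 4TB disk; repeat for each member over time
    zpool replace tank sdb sdi

    # If autoexpand was off during the swaps, expand each member manually
    zpool online -e tank sdi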

More details on the disks:

3TB: all Seagate Constellation ES.2 SAS 7200rpm
4TB: both HGST Ultrastar 7K4000 SATA 7200rpm
6TB: HGST Ultrastar 7K6000 SAS 7200rpm

The SAS drives are connected using an LSI 9211-8i HBA. (I wasted a regrettable amount of time trying to flash a 9240-8i card from IR to IT firmware, and I think I bricked it, as neither sas2flash nor megarec can see it. If anyone wants it and is willing to pay for shipping or is near San Jose, you can have it for free.)

System details:
Case: Antec Three Hundred Mid Tower
Mobo: Supermicro X9SRL-F
CPU: Intel Xeon E5-2650 (8 cores / 16 threads)
RAM: 16GB ECC
SSD: 120GB Kingston A400

TrueNAS-SCALE-23.10.2

I don’t think you’re supposed to mix SATA and SAS in a VDEV?

Maybe consider updating TrueNAS Scale to latest production, unless you have a reason to stay at 23.10.2.

You might need additional case fans or fans for the drives.

1 Like

I don’t think ZFS actually cares. It’s probably best not to mix drives with vastly different performance, and mixing sizes can waste capacity.

I think the real answer here is that you have a lot of free kit and cost is an issue, and this is going to mean that the technical build will be a compromise to some extent and not match best practices. (If money were no object then things would be different, but…)

So IMO the trick is to balance the savings with the risks (and the risks are a combination of the technical compromises and the importance of the data you are storing).

My previous suggestions were essentially based on eliminating the higher risk items.

If it were me, I would accept the risks with the current hardware, but as I start buying replacement hardware over time I would try to migrate to a more consistent and supported configuration, i.e. use only SATA drives and eliminate the SAS drives.

No issue with ZFS, but it’s certainly best not to mix SAS and SATA on the same HBA connector.

Shouldn’t be a problem there either, realistically.

1 Like

All of the 3TB and 4TB drives passed SMART long tests except for three of the 3TB ones. The “device not ready” error first seen on one drive is now happening on two others, and it appears that they are simply spun down. I’ve tried a few commands to force them to spin up with no luck so far, but I don’t think they are dead yet. Any ideas to get them spinning again are appreciated.
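The commands I’ve tried are along these lines (sg3_utils and sdparm; /dev/sdX is a placeholder):

    # Send a SCSI START STOP UNIT to ask the drive to spin up
    sg_start --start /dev/sdX

    # Same idea via sdparm
    sdparm --command=start /dev/sdX

    # Then check whether the drive responds to SMART queries again
    smartctl -i /dev/sdX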

Regarding the concern about mixing SATA/SAS drives, all my SAS drives are attached via the HBA while the SATAs are connected directly to the mobo. The X9SRL-F has 7 more SATA connectors that I can migrate to over time as the SAS drives get replaced.

(Free bricked LSI 9240-8i card still available)