Two pools of 20 & 24tb drives?

Hi, I’m new to TrueNAS but am looking at getting everything set up soon with either TrueNAS or HexOS, which I understand is built on TrueNAS. I have a mixture of hard drives, although they’re only two different sizes: several 20tb and several 24tb drives.

Do you think that would be ok, or would it complicate things quite a bit? Ideally I’d prefer not to limit the 24tb drives to 20tb.

How many drives of each do you have in total? Any particular use case? What’s the rest of your build look like & how are you planning to connect the drives?

Nothing wrong with having two separate pools: one for the 20tb drives and one for the 24tb drives. Nor do I think it would complicate things much at all - maybe an extra 2 minutes if you’re going sssslllowwwwww (2 minutes to set up the first pool, then another 2 minutes for the second).

Thanks, I have 8 x 20tb drives and 4 x 24tb drives. They’re for a home server (Plex) and I’m planning on connecting them via the motherboard plus a 10Gtek 6-Port PCIe Expansion Card, PCIe x4 to 6X SATA 3.0 (6Gbps) Controller.

That’s great to hear it won’t be an issue :slight_smile:

Have you verified that the controller and expansion card are recommended hardware?

You can also create a pool of mirrored vdevs. Each mirror consists of either 20tb or 24tb drives:

graph TB
    pool[(ZFS Pool)] --> mirror0[mirror-0]
    pool --> mirror1[mirror-1]
    pool -.-> dots[...]
    pool --> mirrorX[mirror-X]

    mirror0 --> sda[20TB]@{ shape: lin-cyl}
    mirror0 --> sdb[20TB]@{ shape: lin-cyl}

    mirror1 --> sdc[24TB]@{ shape: lin-cyl}
    mirror1 --> sdd[24TB]@{ shape: lin-cyl}

    dots -.-> dotsdrive[...]

    mirrorX --> sdX[20TB]@{ shape: lin-cyl}
    mirrorX --> sdXX[20TB]@{ shape: lin-cyl}
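For reference, a pool like the diagram above could be assembled from the shell along these lines (device names are placeholders for illustration; on TrueNAS you’d normally build this through the pool wizard, which uses partition UUIDs rather than /dev names):

```shell
# Sketch only: one pool of two-way mirrors, mixing 20TB and 24TB pairs.
# Device names are placeholders; do not run as-is on a live system.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
# mirror-0: 2x 20TB, mirror-1: 2x 24TB

# Grow the pool later by adding another mirror of either size:
zpool add tank mirror /dev/sde /dev/sdf
```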

Yeah. Keep in mind that if you plan to keep a cold spare on hand, you’d need one of each size.

Ok, mirrors for Plex could be suboptimal. Perhaps 3x 4-wide raidz1 vdevs would be better in terms of space efficiency. Or some fancy raidz2 geometry.

Apologies, I listed the wrong part there, I do have an LSI SAS 9300 16I HBA Card, which seems to be one of the recommendations yes. I’ll be using them with Mini SAS to SATA adapter cables, which I hope will be ok!

The 10Gtek 6-Port was something I purchased before I asked about recommendations last year over on the LTT forums. Then I got really busy and so the project just sort of went to the sidelines until now.

It seems there are a few possible ways to set up the drive pools?

In terms of protection, ideally I’d love to make it possible to restore if 2 drives from each pool failed, but one failure per pool would also work.

8 20TB in a Raid-Z2 VDEV and 4 24TB in a Raid-Z2 VDEV. That gives you two different pools. If you have easy to restore data, you could go down to Raid-Z1 for more space with the 24TB drives.

Yes. You should read this guide.

Iirc, it is recommended that vdevs within the same pool have the same redundancy level. That’s why if you go with raidz2 for the 20tb drives, you should do the same for the 24tb drives.

On second thought, you don’t have many 24tb drives – the loss of 4tb on each will result in “only” losing 16tb. That is less than one extra 20tb parity drive.

Let’s make some calculations. You have 24x4 + 20x8 = 256TB of raw space.

  1. If you go with mirrors, you will get 128TB of usable space.
  2. If you go with 3x 4-wide raidz1 (one of 24tb drives and two of 20 tb drives), you will get 192TB of usable space.
  3. 8-wide raidz2 (20tb each) + 4-wide raidz2 (24tb each) – 168TB.
  4. 12-wide raidz2 (over all drives) – 200TB of usable space. Pure magic!! :magic_wand: :rainbow:

Performance and fault tolerance of these layouts are very different, though. You should read the aforementioned guide.
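If you want to double-check those numbers yourself, here’s the arithmetic as a quick sketch (values in TB; this ignores ZFS metadata and slop-space overhead, so treat them as upper bounds):

```shell
# Back-of-the-envelope usable-space math for 8x20TB + 4x24TB drives.
raw=$((8 * 20 + 4 * 24))            # 256 TB raw
mirrors=$((raw / 2))                # 128 TB: half of every 2-way mirror is redundancy
raidz1_3x4=$((3*24 + 3*20 + 3*20))  # 192 TB: one parity drive per 4-wide raidz1 vdev
raidz2_two=$((6*20 + 2*24))         # 168 TB: 8-wide Z2 of 20s + 4-wide Z2 of 24s
raidz2_12w=$((10 * 20))             # 200 TB: 24TB drives truncated to 20TB, 2 parity
echo "$raw $mirrors $raidz1_3x4 $raidz2_two $raidz2_12w"
```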

I’d also consider tossing in an ssd (or two, for a mirror) for apps & vms. I don’t think you’d lose too much if you tossed the 24tb drives into a big pool with the 20s; the reason being that raidz1 isn’t recommended for drives of that size.

If 1 fails, resilvering to replace the failed drive will take a long time… During which a second unexpected failure is at higher risk & would result in data loss :open_mouth:

I’d strongly consider raidz2, even if you go with two separate pools. Raidz expansion is now a possibility (increasing the number of drives in an existing z1/2/3 configuration; note you are still stuck with z1/2/3 after you make the pool), so at worst, you’ll always have the option of getting more 24tb drives if you feel bad about losing half the space to having 2 redundancy drives at the start.
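For what it’s worth, on recent OpenZFS (2.3+, which current TrueNAS versions ship) raidz expansion is a single command; the pool, vdev, and device names below are placeholders for illustration:

```shell
# Sketch: widen an existing raidz vdev by one drive (OpenZFS 2.3+ raidz expansion).
# "tank", "raidz2-0", and /dev/sdm are placeholders; do not run as-is.
zpool attach tank raidz2-0 /dev/sdm
# The parity level stays Z2; only the data width grows.
zpool status tank   # shows the expansion progress
```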

100% avoid anything that isn’t an HBA (your LSI is good), flashed to IT mode & latest firmware. A lot of other options seem to work at first… Until they result in critical data loss; lotta examples on the forum :frowning:

Thanks guys, the guide there was very useful at explaining the differences. From the guide and what everyone’s said here, it looks like raidz2 or raidz3 on one vdev would be a good choice for this use case, giving either 200tb or 180tb of usable space.

It looks like I can put some parts of Plex on my nvme drive, which should help with the reduced iops that raidz2 or 3 would bring.

Good point on the ssd as well, I have a 1tb nvme drive and two 2tb nvme drives which can be used for apps and vms too :slight_smile:

Oh, since we’re on the topic & you mentioned you have spare ssds lying around. Please avoid creating a “cache” unless you understand l2arc, slog, and special vdevs.

All of these have their own use case that can increase performance, but unless you actually have a very specific need, none of them will improve performance. Realistically, they might even degrade it; or in the case of a special vdev put data at risk if not implemented with redundancy.

For zfs, more ram is generally king for performance gains.

…not saying that you were planning on it, but while we’re giving advice, you know? More hardware doesn’t always easily translate into more betterer.

Common trap for first timers.

Edit:
**Warning: lots of simplifications & omissions**

Thanks, the advice is honestly much appreciated!

You can put everything in the same pool as well.

  • Three 4-wide raidz2 vdevs, striped (4*24, 4*20, 4*20). Clean and symmetrical, though only 50% space efficiency.
  • 8*20 Z2 + 4*24 Z2. And grow by expanding the smaller vdev until it is 8-wide.
  • 6*20 Z2 + (4*24+2*20) Z2. Loses 16 TB in the second vdev until the two 20 TB have been replaced by 24+ TB drives but otherwise nicely balanced geometry.
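The first of those layouts, sketched as a command (device names are placeholders; the TrueNAS pool wizard builds the same geometry from the UI):

```shell
# Sketch: one pool, three 4-wide raidz2 vdevs striped together
# (4x24TB, 4x20TB, 4x20TB). Placeholders only; do not run as-is.
zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh \
  raidz2 /dev/sdi /dev/sdj /dev/sdk /dev/sdl
```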

:face_with_raised_eyebrow: what platform (CPU, Motherboard etc) is going to get used?

Thanks guys! I’d be running it with an i5-12600K CPU on an MSI PRO Z690-A DDR4 ATX motherboard, with 128GB (4x32GB) DDR4-3200 RAM.

Speaking of “the two 20 TB have been replaced by 24+ TB drives”: I’d been reading that expanding storage is a bit trickier on TrueNAS vs Unraid? I wonder if it’s possible to replace some drives in a vdev without having to wipe all the data from the whole vdev? Or whether each time you upgrade a drive or add an extra drive, you need to start the vdev fresh?

Yes, provided that the new drives are at least as large as the drives they replace.
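At the command level, that in-place upsizing looks roughly like this (names are placeholders; the TrueNAS UI wraps the same operation):

```shell
# Sketch: swap one 20TB drive for a 24TB drive without recreating the vdev.
# "tank" and the /dev names are placeholders; do not run as-is.
zpool set autoexpand=on tank
zpool replace tank /dev/sdk /dev/sdm   # resilvers onto the new drive
zpool status tank                      # watch resilver progress
# Extra capacity only appears once EVERY drive in the vdev has been upsized.
```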

Good to know, thanks! :slight_smile: