Upgrading Disks

I’m on Electric Eel 24.10.2.2
I have a Dell R740 with 12 disks and three Dell MD1200s with 12 disks each, for a total of 48 disks. I have two dRAID2 vdevs configured - one with 36 disks (with two hot spares), comprising the 12 disks in the server and 24 disks in two of the MD1200s, and one with 12 disks (with one hot spare) in the third MD1200 - making a single storage pool. All of the disks are 10TB. My metadata, log, and cache vdevs are NVMe storage on a single card. My TrueNAS OS disk is a mirrored SSD.

I’m thinking of purchasing larger disks and decreasing the number of disks I have online. If I go with 24TB drives, I have another MD1200 that I could daisy-chain into my storage stack, but I’m not sure what steps I need to take to either upgrade the disks or migrate the data in the pool so that I can remove a vdev and decrease the number of MD1200s I’m running.

Any advice or instructions from the community would be helpful. Thanks in advance for your assistance!

In general terms, you cannot remove top-level vdevs from a pool that contains RAIDZ or dRAID vdevs - neither data vdevs nor metadata vdevs can be removed.

This means that:

  1. If you want to have a pool with fewer data HDDs you will need to create a new pool.

  2. If you want your new pool to have a metadata (special) vdev, you won’t be able to reuse the old NVMe disks for it until you have migrated your data to the new pool. By the time you add the metadata vdev, the metadata and small files will already be on HDD, so you will need to run a rebalancing script (see the sketch after this list) to move them onto the NVMe devices.
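As an illustration, the rebalancing step usually amounts to rewriting every file so ZFS reallocates its blocks onto the new special vdev. Here is a minimal sketch, assuming a placeholder path of /mnt/tank/dataset - community scripts such as zfs-inplace-rebalancing do this far more carefully (checksum verification, hard-link handling), so treat this only as the idea:

```bash
#!/usr/bin/env bash
# Rewrite each file in place so ZFS reallocates its blocks; once a special
# (metadata) vdev exists, metadata and small-file blocks land on the NVMe.
# /mnt/tank/dataset is a placeholder - point this at the dataset to rebalance.
find /mnt/tank/dataset -type f -print0 | while IFS= read -r -d '' f; do
    cp -a "$f" "$f.rebalance.tmp"   # copy preserves owner, mode, and times
    mv "$f.rebalance.tmp" "$f"      # replace the original with the new copy
done
```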

As an aside…

I doubt that you will actually need a SLOG. The workloads that benefit from one (VMs, virtual disks, database files - anything doing synchronous writes) should really live on mirrored vdevs anyway to avoid read and write amplification, so if you do have such workloads, you would be advised to add some mirrors for them.

If your memory were insufficient to get a reasonable ARC hit rate, you would be better advised to add more memory than to rely on an L2ARC device.
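If you want to check, TrueNAS SCALE ships the standard OpenZFS reporting tools, so something like the following shows the current hit ratio (the exact label wording varies a little between arc_summary versions):

```bash
# Print the ARC section of the report and pull out the hit-ratio lines
arc_summary -s arc | grep -i 'hit ratio'

# Or watch ARC activity live, sampling every 5 seconds
arcstat 5
```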

Before you build the new pool, you might want to ask whether dRAID is actually the best choice for your new requirements or whether e.g. RAIDZ2 would serve you better.


I appreciate the response.
Is there a good workflow for making sure the pool, permissions, and data all move? There are lots of applications running on the server, with lots of folders and users, that I’d prefer not to have to rebuild and re-associate with a new pool.

You’re right that I don’t need SLOG, but I didn’t know that when I built it, so I have one.
Regarding RAM, there’s 768GB of it and about 625GB is being used by ZFS cache, so I’m probably okay there.

I’d be happy to discuss vdev RAID structure. I chose dRAID because of the number of drives I was using. If I go from 48 drives down to 24, do you think there’s a performance or design benefit to dRAID2 over RAIDZ2?

I appreciate your insight!

If you use replication, the ACLs should carry across as well as the data.
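In plain ZFS terms (TrueNAS would normally drive this through a Replication Task in the UI), a one-off local migration looks something like this, with tank and tank2 as placeholder pool names:

```bash
# Recursive snapshot, then a full replicated send: -R carries child datasets,
# snapshots, and properties, so ACLs and permissions travel with the data
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F tank2

# Optional: a later incremental pass to catch changes made during the copy
zfs snapshot -r tank@migrate2
zfs send -R -i tank@migrate tank@migrate2 | zfs recv -F tank2
```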

After migration you can rename the new pool to the old name, and it should then look identical to the original.
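Renaming is an export/import cycle at the ZFS level (on TrueNAS, export via the UI first so the middleware stays consistent); pool names here are placeholders:

```bash
# Import the new pool back under the old pool's name
zpool export tank2
zpool import tank2 tank
```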

SLOG can be removed through the UI.
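For reference, the CLI equivalent - log vdevs are one of the few vdev types that can always be removed from a live pool. The device name below is a placeholder; check zpool status for the real one, and use the mirror name (e.g. mirror-1) if the SLOG is mirrored:

```bash
# Remove the log vdev; in-flight sync writes are flushed to the main pool
zpool remove tank nvme0n1p1
```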

As for your current L2ARC, it is probably worth looking at the stats to see whether it is actually doing anything useful, given that you already have a huge ARC.
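A quick way to check, assuming the stock OpenZFS tools on SCALE:

```bash
# L2ARC section of the report: size, hit ratio, and header (RAM) overhead
arc_summary -s l2arc

# If the hit ratio is negligible, the cache vdev can also be removed live:
# zpool remove tank <cache-device>
```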

I am not an expert on dRAID, but my understanding is that it is intended for arrays with hundreds of disks, so I am not sure whether yours is big enough to benefit. dRAID is also less flexible than RAIDZ and gets a lot less enhancement effort, so with a reduced number of drives I would be surprised if it remained the best design.
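For comparison, here is what the two layouts look like at creation time - illustrative only, with placeholder pool and device names. dRAID’s main advantages are fixed-width stripes and fast rebuilds onto distributed spares, which matter most at large disk counts:

```bash
# 24-disk dRAID2: 8 data + 2 parity per redundancy group, 1 distributed spare
zpool create tank draid2:8d:24c:1s /dev/sd[a-x]

# RAIDZ2 alternative: two 12-wide RAIDZ2 vdevs (a conventional hot spare
# could be added separately with "zpool add tank spare <disk>")
zpool create tank raidz2 /dev/sd[a-l] raidz2 /dev/sd[m-x]
```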
