Expanding mirror process

I’ve just gone above 80% usage on my current mirror, and was looking to expand things even before I saw the warning about performance suffering after exceeding the 80% mark. I’m still pretty new to TrueNAS and ZFS in general (but not RAID in general, I’ve worked with mdadm and LVM and some Solaris RAID software back in the day), so I was wondering if there’s a good guide that I should look at for how to go about doing so. I was thinking of getting another pair of drives, but with it going from 2 → 4 drives, maybe a mirror is no longer the best option and maybe it should be something along the lines of RAID5.

A general overview of ZFS as used by TrueNAS would also be helpful, too… like I see jobs for scrubbing and such, but I don’t know what that actually means. Likewise, I’m getting notices that there are ZFS pool features I can upgrade to, but I have no idea if I should or should not do so. I’m not going to roll back to Cobia, so that’s not a consideration for me, but I don’t know what the effects of those new features might be. On the other hand, the alerts keep popping up every time that I reboot, so it seems like it’s suggesting I do so.

I’m sure there are guides around this, I just don’t really know which ones to look at, so a quick pointer would be appreciated and I’ll gladly RTFM :smile:

You can add another mirror to your pool. Depending on your needs you may want to look into rebalancing, because newly written data will most certainly go to the new mirror, offsetting some of the performance benefits.

Raidz1 (raid5) is also possible but requires that you destroy the pool and restore from backup.
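For reference, a rough sketch of what adding a second mirror looks like from the CLI. The pool name `tank` and the device names are placeholders; on TrueNAS you'd normally do this from the GUI (Storage → pool → Add Vdev), which handles partitioning and gptid labels for you:

```shell
# Check the current layout: a single 2-disk mirror vdev
zpool status tank

# Add a second mirror vdev to the existing pool
# (device names are examples -- double-check yours first)
zpool add tank mirror /dev/ada2 /dev/ada3

# The pool now stripes new writes across both mirror vdevs
zpool list -v tank
```

Note that `zpool add` is effectively irreversible on older pools; removing a vdev again requires the device_removal feature, so check the command carefully before running it.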

You can find answers to your other questions as well in the documentation :wink:

Here’s a zfs primer:

3 Likes

I am FAAAAR from an expert but without knowing what you have… another option is to just get bigger disks… It is really easy to disconnect a disk, install a larger disk and the system re-silvers…
Just do one disk at a time
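If you did go the bigger-disks route, the replace-and-resilver cycle looks roughly like this (pool and device names are placeholders; TrueNAS exposes the same steps in the GUI under Storage → Disks):

```shell
# Allow the pool to grow automatically once all members are replaced
zpool set autoexpand=on tank

# Replace the first disk and wait for the resilver to finish
zpool replace tank /dev/ada0 /dev/ada4
zpool status tank        # watch for "resilver in progress"

# Only after the first resilver completes, replace the second disk
zpool replace tank /dev/ada1 /dev/ada5
```

The extra capacity only becomes available once every disk in the mirror vdev has been upgraded.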

1 Like

It’s 2x16TB currently, so it really needs to be more disks, rather than bigger disks.

Got it…

1 Like

Indeed. And, for the sake of resiliency and data safety, your choice is between:

  • 3-way mirrors for flexibility, at a cost;
  • raidz2 (not raidz1) for space efficiency (with large files), at the expense of flexibility, and the need to frontload the cost of, optimally, 6-8 drives.

Good reading:
https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html

2 Likes

Given your stated level of expertise, I would advise you to go for the simplest option because this will be the lowest risk.

Given that you have an existing mirror, the simplest option would be to add 2 more disks (of whatever size you think you need), and add them to your existing pool as a new mirrored vDev. (In case you haven’t read the documentation yet, a pool can consist of multiple vDevs, and a vDev can have a variety of redundancy options - but the general wisdom is NOT to mix vDevs of different redundancy types within a pool.) You can probably do this from the TrueNAS GUI without resorting to the command line (where it is easier to make a mistake).

(The alternative of converting to RAIDZ1/2 will be more complicated and unless there are really pressing reasons better to be avoided.)

If you add another mirror vDev, as other people have pointed out, unless you rebalance the drives all the existing data will be on the old drives and your I/O will be unbalanced, with most writes on the new drives and most reads on the old drives. However (depending on your workload), TrueNAS / ZFS does a pretty good job of holding data in cache and doing sequential read-aheads, so my personal advice is to wait and see whether you have any performance issues or I/O bottlenecks before bothering to do a rebalance.
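For context, "rebalancing" isn't a built-in ZFS command; it usually just means rewriting existing data so ZFS re-allocates the blocks across all vdevs. A common copy-then-replace sketch, per directory, assuming enough free space and no snapshots pinning the old blocks (paths here are examples):

```shell
# Rewrite files so ZFS redistributes their blocks across all vdevs.
# Assumes: no open files, no snapshots holding the old blocks,
# and enough free space for a temporary full copy.
cp -a /mnt/tank/media /mnt/tank/media.rebalanced
rm -rf /mnt/tank/media
mv /mnt/tank/media.rebalanced /mnt/tank/media
```

This is exactly why it's I/O intensive: every byte gets read and written once, so on tens of TB it can take many hours.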

1 Like

I imagine a rebalance itself is heavily IO intensive and can take a good chunk of time? Most of my content is video files and images served via Plex with only myself as the viewer, so it’s much more read heavy than write heavy (outside of adding new content), and highly inconsistent demand at that. I am starting to set up more apps and migrating some services from WSL that I’ve been Dockerizing, but those are more network than disk. I’m suspecting a rebalance, for my current use cases, probably isn’t worth it.

(I am using BackupPC to back up two servers atm, but I don’t plan to keep doing that long term, just until things are migrated to the final cloud location where I’ll switch to a native backup system.)

My NAS is heavily Plex also - and I get 99.9% read hit rate with only 10GB of memory. My experience is that the LAN / wifi was the bottleneck not the disk subsystem.

I think that ZFS tends to store files in contiguous blocks, so even after rebalancing I imagine that streaming a large media file is likely to hit only one of the vDevs and not both, so I am not sure that rebalancing across 2x vDevs which are mirrors is going to get you noticeably better performance.

You might get marginally better performance from reading off 4x RAIDZ1 stripes, but if you are able to read fast enough to stream off a mirror pair then that is sufficient and ZFS / Plex will read-ahead and cache anyway.

1 Like

Probably worthwhile investing in another two 16TB disks, and later you could switch over to use 6-way RaidZ2 if you wanted.

Alternatively, consider purchasing another 4 16TB disks, set up a degraded raidz2 with the 4 disks and two offline sparse files.

Replicate your data from your mirror to the new pool.

Replace one of the sparse file disks with one of your mirror disks.

Replace the other. And now you have a 6-way RaidZ2 made with 16TB disks for about 64TB of capacity.
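A sketch of the sparse-file trick described above, assuming `newpool` as the new pool name and placeholder device names; double-check every command before running anything this destructive:

```shell
# Create two 16TB sparse files to stand in for the missing disks
truncate -s 16T /root/sparse1 /root/sparse2

# Build a 6-wide raidz2 from 4 real disks + the 2 sparse files
zpool create newpool raidz2 /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5 \
    /root/sparse1 /root/sparse2

# Offline the sparse files immediately so no real data lands on them;
# the pool runs degraded but keeps raidz2's on-disk layout
zpool offline newpool /root/sparse1
zpool offline newpool /root/sparse2

# ... replicate data from the old mirror (e.g. zfs send | zfs recv) ...

# Then swap the old mirror's disks in for the sparse files, one at a time
zpool replace newpool /root/sparse1 /dev/ada0
# wait for the resilver to complete, then:
zpool replace newpool /root/sparse2 /dev/ada1
```

The pool has no redundancy until the first replace finishes resilvering, so the old mirror's data should still exist somewhere until both replacements are done.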

This is a nice layout as a general file share for media.

My own uses 8TB disks, and I’ll be replacing that with 16+TB disks one day.

Btw, the above would also “rebalance”, since all the data gets rewritten onto the new pool anyway.