Hello everyone.
I’m new to the forums; this is my first post, looking for advice on rebuilding a pool with new drives on the way. It will be a somewhat long post, as I’ve tried to be as thorough as possible.
I have a main system running TrueNAS SCALE with 64GB RAM and a 4 x 16TB RAIDZ2 pool. The drives are 17 months old and the pool is currently about 35% full. I recently fitted this system with a 10GbE NIC as I intend to use it for VFX workloads, and I want to rebuild the pool into a striped mirrors configuration for better performance.
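For clarity, by “striped mirrors” I mean a pool of two-way mirror VDEVs. A minimal sketch of that layout at the CLI, with placeholder device names (in practice I’d build it through the TrueNAS UI):

```sh
# Two mirror vdevs striped together: ~32TB usable from 4 x 16TB drives;
# each mirror tolerates one drive failure. Device names are placeholders.
zpool create fastpool \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
```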
I ordered some parts that haven’t arrived yet:
- New backup system
- 4 x 16TB drives identical to the current ones
- 2 x 500GB M.2 NVMe drives
The pool in the backup system will be configured in RAIDZ2 as it will be offsite and I won’t have 24/7 access to it so greater redundancy is preferred.
When I receive the new parts, I plan on using the new system initially only to burn in the new drives (see the sketch below). After that, I need advice on how best to use the new drives when configuring both pools.
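For the burn-in itself I’m planning the usual SMART-plus-badblocks routine; a rough sketch, assuming /dev/sdX is one of the new, empty drives (badblocks -w is destructive):

```sh
# Destructive burn-in for one new, empty drive
smartctl -t long /dev/sdX        # extended SMART self-test; wait for it to finish
badblocks -b 4096 -ws /dev/sdX   # four-pattern destructive write/verify scan
smartctl -a /dev/sdX             # then check reallocated/pending sector counts
```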
I’ve narrowed it down to 3 options, but I’m not sure which is the best approach.
Option 1:
- Create the striped mirrors pool using the new drives in the main system.
- Replicate the datasets to the new pool (see the replication sketch after the options).
- Import the old RAIDZ2 pool into the new backup system.
Option 2:
- Create the backup RAIDZ2 using the new drives in the main system.
- Replicate the datasets to the new RAIDZ2 pool.
- Destroy existing pool and create striped mirrors pool using the old drives.
- Replicate datasets to the new striped mirrors pool.
- Import the new RAIDZ2 pool into the new backup system.
Option 3:
- Use 2 of the new drives to replace 2 drives in the main pool. Do this one drive at a time and wait for each to finish resilvering.
- Use the 2 removed drives with the other 2 new drives to create the new striped mirrors pool in the main system, using one of each in each mirror VDEV.
- Replicate the datasets to the new pool.
- Import the RAIDZ2 pool into the new backup system.
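Whichever option I go with, the replication steps above should boil down to a recursive snapshot plus send/receive, something like the following with placeholder pool/dataset names (in practice I’d set it up as a TrueNAS replication task, which wraps the same mechanism):

```sh
# Snapshot the source tree, then send everything to the new pool
zfs snapshot -r oldpool/data@migrate-1
zfs send -R oldpool/data@migrate-1 | zfs recv -F newpool/data
```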
Option 1 is the one with the least friction: the new performance pool gets all new drives and the old pool continues as it is.
Option 2 is the one I’m least fond of, as it puts all the aged drives in the performance pool and carries the added cost of replicating the datasets twice.
Option 3 I hadn’t even thought of until recently, when I read somewhere that it may be good practice to mix drives of different ages in a striped mirrors configuration, as it avoids symmetric wear in each mirror: two drives with identical histories are more likely to fail around the same time, and in a two-way mirror that means losing the VDEV. The same would presumably apply to the RAIDZ2 pool. This option requires 2 separate resilvers, which would take considerably longer than the other 2 options.
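For reference, each swap in Option 3 would be a replace-and-wait cycle like the one below (placeholder names again; the TrueNAS UI exposes the same operation), and I’d have to wait for each resilver to finish before touching the next drive:

```sh
# Swap one RAIDZ2 member for a new drive, then watch the resilver
zpool replace tank /dev/sdc /dev/sde   # old device, then its replacement
zpool status tank                      # repeat until the resilver reports done
```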
I’m leaning toward Option 1, but I’m open to suggestions and advice. Is there another option I haven’t considered?
As far as the NVMe drives are concerned, the situation is as follows:
The main system currently uses about 20GB of memory for services. I bought an Intel NUC, which will run either Proxmox or XCP-ng, to offload compute and free up more memory for the ZFS cache (ARC). I’m still not sure whether the main system will run any containers or VMs after all this, but the aim is to have as few as possible.
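As a sanity check on the “more memory for ARC” goal, I understand the current ARC size and hit rate can be inspected directly once services move off; a sketch (arc_summary ships with OpenZFS on Linux, so it should be present on SCALE):

```sh
# Inspect ARC size and hit rates
arc_summary | head -n 40
# Raw counters as a fallback
grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats
```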
The plan was to put both NVMe drives in a single-mirror-VDEV pool for containers/VMs, and maybe iSCSI for the NUC. This pool would be replicated onto the main pool, which in turn would be backed up to the new system.
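For the iSCSI piece, my understanding is the backing store on the NVMe pool would just be a zvol, which TrueNAS creates when you define an extent; a sketch with placeholder names and sizes:

```sh
# Sparse 100G zvol to expose to the NUC over iSCSI
zfs create -s -V 100G -o volblocksize=16K nvmepool/nuc-vms
```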
I’m not at all familiar with the different types of cache drive available in TrueNAS, as I have no experience with them. From the reading I’ve done, the only type that might benefit my use case is L2ARC, and even then the benefit may range from marginal to inconsequential. 500GB may also be excessive and a waste.
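One mitigating factor if I do experiment: as far as I know, a cache VDEV is non-destructive to add and remove, so trying L2ARC should cost nothing but time (placeholder device name):

```sh
# Attach one NVMe as L2ARC, then detach it if the hit rate doesn’t justify it
zpool add tank cache /dev/nvme0n1
zpool remove tank /dev/nvme0n1
```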
I don’t expect to receive the new hardware for at least another 2 weeks, so I’ll continue researching in the meantime. But I’ve reached the point where a sanity check, general thoughts, and recommendations would be greatly appreciated.