Greetings. I’ve been using Linux Mint with lots of RAID-1 arrays for ~7 years now. It looks like this across 2 computers:
10TBx2
4TBx2
12TBx2
12TBx2
16TBx2
8TBx2
8TBx2
etc.
I knew from day 1 this configuration was very wasteful, but I was a Linux beginner and mdadm seemed very daunting to me without a GUI. So setting up RAID-1 arrays following DigitalOcean's guides seemed like the best solution given my lack of knowledge and my need for a setup I could troubleshoot and repair myself. Now I have a degraded array because of a dead HDD. While it'll be easy to replace the HDD, it's also a good time to transition to TrueNAS.
Is there a good way to:
Re-use all these existing HDDs? My understanding is that RAIDZ1 requires 3 HDDs, so to use the 4TBs, 12TBs, and 16TBs I would need to buy a 3rd of each.
Should each HDD size go in its own VDEV? For example, the 12TBs should be in a different VDEV than the 16TBs.
I'd like to have one master 'volume' (pool?) created by 'joining' all VDEVs together, shared via a single network share.
In the future I'd like the ability to add more HDDs. Would this be a simple matter of adding 3+ HDDs as a new VDEV, which is then added to the pool?
Once an HDD starts failing or the HDDs in a VDEV start to get old, I'd like to replace them with larger HDDs so that the VDEV grows. I think currently, if I have a VDEV of 3x 10TB HDDs and one dies, and I replace the failed HDD with a 20TB HDD so that the VDEV is now [10TB, 10TB, 20TB], then only 10TB of the 20TB HDD is used. Once I upgrade all the 10TB HDDs in that VDEV to 20TBs, can the VDEV be expanded to use all 20TB of each? The upgrade path would go something like:
10, 10, 10 TB (30 TB raw)
10, 10, 20 TB (40 TB raw)
20, 10, 20 TB (50 TB raw)
20, 20, 20 TB (60 TB raw)
My limiting factor is cost. I’m a graphics freelancer (part time) and $400 for a hard drive is a lot of money for me. If I spend $1000 to migrate to TrueNAS, I’d like to have an architecture that is somewhat future proof allowing re-use of existing hard drives. Thanks for your help!
ZFS does that quite naturally…
With your current setup it would be a “stripe of mirrors” (similar to RAID 10); what you intend is a “stripe of raidz1” (= RAID 50). Mind that losing a whole vdev means losing the whole pool, so more redundancy would be advisable, and in any case a good backup strategy.
It need not, but each vdev is limited by the size of its smallest member, so in practice you do indeed want to group drives of similar sizes together.
You can add more vdevs or, with caveats, widen existing raidz vdevs (3-wide → 4-wide → 5-wide, etc.). However, you cannot increase the raidz level of an existing vdev.
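To put rough numbers on widening, here is a back-of-the-envelope sketch (my own simplification, not from this thread: usable space ≈ (width − parity) × smallest member; real ZFS overhead shaves off a bit more, and data written before a widening keeps its old parity ratio until rewritten):

```python
def raidz_usable_tb(drive_sizes_tb, parity=1):
    """Approximate usable TB of a raidz vdev (simplified model:
    every member counts as the size of the smallest one, and
    'parity' drives' worth of space is lost to redundancy)."""
    return (len(drive_sizes_tb) - parity) * min(drive_sizes_tb)

# Widening a raidz1 of 10 TB drives, 3-wide -> 4-wide -> 5-wide:
for width in (3, 4, 5):
    print(f"{width}-wide: {raidz_usable_tb([10] * width)} TB usable")
```

Each extra drive adds roughly one full drive's worth of usable space, since the parity cost stays fixed at one drive for raidz1.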
For clarity, the first three configurations effectively amount to 3×10 TB, giving ca. 20 TB of usable space after parity, of which you should fill at most about 16 TB (80%): filling a ZFS pool to 100% is a Very Bad Idea™.
The last configuration can then expand to 40 TB usable, or about 32 TB at the 80% mark.
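If it helps to re-run that arithmetic for other drive mixes, here is the same simplification as code (my assumption: raidz1 usable ≈ (width − 1) × smallest member, plus the 80% fill rule; actual ZFS overhead is slightly higher):

```python
def raidz1_usable_tb(drives_tb):
    # The smallest member sets every drive's effective size;
    # one drive's worth of space goes to parity.
    return (len(drives_tb) - 1) * min(drives_tb)

def fill_target_tb(usable_tb, fill=0.80):
    # Keep a ZFS pool below roughly 80% full.
    return usable_tb * fill

# The upgrade path from the first post: the first three steps all
# yield 20 TB usable (the 10 TB members limit the vdev); only the
# final all-20TB step jumps to 40 TB.
for vdev in ([10, 10, 10], [10, 10, 20], [20, 10, 20], [20, 20, 20]):
    u = raidz1_usable_tb(vdev)
    print(vdev, f"-> {u} TB usable, fill to ~{fill_target_tb(u):.0f} TB")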
You mentioned that more redundancy would be ideal. I agree. I also didn't realize that the whole pool would go offline if a single HDD failed in a RAIDZ1 VDEV. I think I'll need to set up RAIDZ2 instead so that a VDEV won't take the pool down. The idea is that it keeps going even if a single HDD fails; then I can replace the HDD and resilver, all while operations on the pool continue.
Also, is there a way to make the TrueNAS install itself redundant? I have an existing TrueNAS installation using RAIDZ1. I use it as a backup for the other computers running mdadm. What I described above would replace the mdadm RAID-1 configs. That server has TrueNAS installed on an SSD, but if the SSD dies, my backup is cooked. I have no idea what to do if this SSD dies and takes the TrueNAS install with it. Is there a way to do something like RAID-1 with the TrueNAS installation drive?
RAIDZ1 can tolerate 1 drive failing. What was meant here was that if you have a pool consisting of multiple vdevs, you lose the pool if you lose 1 vdev. Again, a vdev can be a stripe, a mirror, or RAIDZx.
So constructing a pool out of multiple RAIDZ1 vdevs is still a bit risky: you can lose 1 drive in each vdev and be OK, but not 2 in the same vdev.
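Spelled out as a one-liner (a hypothetical helper of mine, just to illustrate the rule, not any ZFS API):

```python
def pool_survives(failed_per_vdev, parity=1):
    """A pool is lost if ANY vdev loses more drives than its parity
    level; failed_per_vdev lists the failed-drive count per vdev."""
    return all(failed <= parity for failed in failed_per_vdev)

print(pool_survives([1, 1, 1]))  # one failure in every raidz1 vdev: True
print(pool_survives([2, 0, 0]))  # two failures in the same vdev: False
```

With `parity=2` (raidz2 vdevs), the second scenario would also survive.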
You simply reinstall TrueNAS on a new SSD and load the config file (which you have to keep backed up and update from time to time).
Without the config file you still have all your data after you import your old pool, but all the configuration will be gone.
Since this is still at the planning stage, I'm talking about “best practices”, bearing in mind that, of course, these cost money…
“ZFS is not a backup”, so no matter how resilient the pool design is, you should have some backup—at least of the most important data.
But if you do not want to ever have to resort to this backup, the pool should have more than one degree of redundancy. Raidz1 should withstand the loss of one drive… provided that no further incident occurs while resilvering (resilvering is “at risk”). Raidz2 can withstand the loss of one drive and still cope with a further incident while resilvering (i.e. double redundancy is safe… as long as one reacts immediately to the first incident and does not wait for a second incident to occur).
For capacity storage with good resiliency, 6- to 8-wide raidz2 can be considered a “sweet spot”: resilient, quite space-efficient (67–75% vs. 50% for mirrors/RAID 1), but not too wide.
With your current collection of drives you could make two 6-wide raidz2 vdevs, striped or as separate pools: 12-12-12-12-16-16 and 8-8-8-8-10-10. Usable capacity after parity would be 48 and 32 TB, “wasting” 2×4 and 2×2 TB until the smaller drives in each vdev are all replaced with larger ones. (No obvious use for the 4 TB drives.) That, however, is already 10 TB more than the 70 TB usable across the multiple RAID-1 arrays in the first post. But how to get existing data from here to there is an open question…
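For anyone re-checking those figures, the comparison under the same simplified model (usable ≈ (width − parity) × smallest member for raidz; each RAID-1 pair yields one drive's worth):

```python
def raidz_usable_tb(drives_tb, parity):
    # Smallest-member rule: extra space on larger drives stays idle
    # until all smaller members are replaced.
    return (len(drives_tb) - parity) * min(drives_tb)

vdev_a = [12, 12, 12, 12, 16, 16]    # proposed 6-wide raidz2
vdev_b = [8, 8, 8, 8, 10, 10]        # proposed 6-wide raidz2
mirrors = [10, 4, 12, 12, 16, 8, 8]  # one size per existing RAID-1 pair

print(raidz_usable_tb(vdev_a, parity=2))  # 48 TB
print(raidz_usable_tb(vdev_b, parity=2))  # 32 TB
print(sum(mirrors))                       # 70 TB across the mirrors
```

48 + 32 = 80 TB, i.e. 10 TB more than the mirrored setup, while also surviving two drive failures per vdev.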