I have run TrueNAS for over six months now and I am happy with it. Pure home lab use case: backups + Docker, Jellyfin + Nextcloud, and of course experimenting.
The system is an HP Z2 SFF with an i7-14700K, 64 GB RAM, an HP RSC, an onboard Intel 1 Gb NIC and an Intel 10 Gb SFP NIC.
I ran it with the two 1 TB Samsung NVMe drives in a hardware RAID mirror for the OS (that part should stay), and for storage I had two Crucial P3 Plus 1 TB in a mirror, which are now full… so I backed everything up, and since I have learned a lot I think it's time for a fresh start:
I went deal shopping and now I have:
the two Samsung 1 TB NVMe drives in a hardware RAID mirror, which should stay as the OS (*they are on a PCIe x1 adapter card; used enterprise gear, like the rest of the workstation),
plus a PCIe x16 4x NVMe card, to which I added two more of the Crucial 1 TB P3 Plus,
while the mainboard M.2 slots have each been fitted with a WD Blue 2 TB NVMe (three in total) and the only SATA drive slot now holds an 8 TB IronWolf.
So effectively I now have:
1× 1 TB OS disk* (two drives in hardware mirror on the x1 card)
3× 2 TB NVMe (mainboard)
4× 1 TB NVMe (x16 card)
1× 8 TB SATA HDD (IronWolf, mainboard SATA port)
And the machine is full, so I can't add anything else.
So the question is: what is the best setup?
It's a bit of an odd config disk-wise, but it is what I have to work with. Thank you for your time.
No, it shouldn’t. Those drives are grossly oversized for the boot device (about 50x larger than they need to be), and hardware RAID should never be used with ZFS.
There really isn't a good setup for the hodgepodge of drives you've assembled - you should have given some thought to the layout before buying a bunch of random stuff that won't work well together (and, really, before deciding to use an SFF PC as a NAS).
But with that said, the least-bad storage arrangement I can think of for this random assortment of hardware would be three pools:
Anything that you bought off Amazon (or similar) in the last 30 days that isn’t useful - return it for a refund and use the money more wisely.
Make sure that your hardware won’t lead to pool corruption - just don’t go down a route where you use inappropriate hardware and things appear to work until they go catastrophically wrong and you lose all your data.
I am not sure why people recommend against striping across RAIDZ1 vDevs with different widths. So perhaps @dan can explain why he is suggesting that the 3x2TB and 4x1TB RAIDZ1 vDevs should be in separate pools rather than combined into a single pool?
Broadly speaking, I don’t believe it’s a good idea to combine vdevs with different levels of redundancy in the same pool, just as I wouldn’t suggest combining a RAIDZ1 and a RAIDZ2 vdev in the same pool. It’s a valid configuration, strictly speaking. TrueNAS will almost certainly complain about it, but it will probably allow it. But that’s the basic reason I don’t think it’s a good idea.
In this case, there really is no good configuration for the set of devices in question–whether the configuration I suggested is more or less bad than putting all the SSDs into a single pool as you suggest, I don’t think it’s much of a difference.
A 3x 2TB RAIDZ1 and a 4x 1TB RAIDZ1 have the same redundancy (i.e. 1) - they are just different widths and so have different full-stripe sizes (e.g. 8KB vs 12KB of data per stripe with 4KB sectors), which could result in some weird performance issues.
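To put rough numbers on that, here is a minimal sketch of the arithmetic, assuming ashift=12 (4 KiB sectors, the usual default); the widths and parity counts are the ones being discussed:

```python
# Data per full stripe of a RAIDZ vdev, assuming ashift=12 (4 KiB sectors).
SECTOR = 4 * 1024

def full_stripe_data_kib(width: int, parity: int) -> float:
    """Data (excluding parity) held by one full stripe of a `width`-disk RAIDZ vdev."""
    return (width - parity) * SECTOR / 1024

print(full_stripe_data_kib(3, 1))  # 3-wide RAIDZ1 -> 8.0 KiB of data per stripe
print(full_stripe_data_kib(4, 1))  # 4-wide RAIDZ1 -> 12.0 KiB of data per stripe
```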
However, whilst I wouldn't recommend it either, I cannot see any reason why striping across a RAIDZ1 vDev and a RAIDZ2 vDev would be any more problematic. (I think the theory is more that the extra redundancy of the RAIDZ2 is wasted if the other vDev only has single redundancy.)
Suppose you have 2x 8-wide RAIDZ2 vDevs. You could do RAIDZ expansion on one and increase it to 12-wide whilst leaving the other as 8-wide. How problematic would that be?
That's what the system came with. I am aware of the waste, but there is no other good option besides buying a smaller drive, which would most likely be less reliable. It's an HPE NVMe RAID card of the sort used in servers, so I guess it can't be worse than a normal NVMe drive, for which I would need an additional card anyway since the slots are already at maximum capacity.
The x16 4x NVMe adapter is the only Amazon item; everything else came from our data centre guys, some of it still sealed in the box, and I paid next to nothing. So yes, if I just had the ~3k EUR this system would probably cost in cash, I would build it differently and it would be a different discussion. But I am trying to do the best with the stuff at hand.
Thanks to users dan and Protopia, the plan is now:
the 3x 2 TB in RAIDZ1, which should give about 3.5 TB of usable space;
the 4x 1 TB in RAIDZ1, which should give about 2.5 TB of usable space;
and the standalone spinner, with its 8 TB, can be a perfect backup target for the two pools above.
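As a rough sanity check on those figures (assuming the drives are decimal TB, capacity is reported in binary TiB, and a little extra is lost to ZFS metadata/slop):

```python
# Back-of-the-envelope usable space for the two proposed RAIDZ1 vdevs.
# Drives are sold in decimal TB; TrueNAS reports binary TiB, and ZFS keeps
# a small slice for metadata/slop, so real usable space lands a bit lower.
TB, TiB = 1000**4, 1024**4

def raidz_data_tib(drives: int, parity: int, drive_tb: float) -> float:
    return (drives - parity) * drive_tb * TB / TiB

print(f"3x 2TB RAIDZ1: ~{raidz_data_tib(3, 1, 2):.2f} TiB")  # ~3.64 TiB raw data -> ~3.5 usable
print(f"4x 1TB RAIDZ1: ~{raidz_data_tib(4, 1, 1):.2f} TiB")  # ~2.73 TiB raw data -> ~2.5 usable
```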
Agree. Using raidz expansion it is actually possible to add a fourth drive to the first vdev later and make that a balanced stripe of two 4-wide vdevs. Before raidz expansion existed, that layout would have meant a permanently imbalanced geometry, which would have been bad, but it now seems acceptable.
You cannot upgrade the raidz1 to raidz2, and cannot remove vdevs, so the pool is permanently imbalanced.
Why less reliable? SSDs as boot drives rarely fail.
For home use, you could get a 10€ second-hand 100 GB drive, an even cheaper M.2 to PCIe adapter, use that for boot in the x1 slot and recover the two 1 TB drives for extra storage. 15€ for 2 TB is rather good value…
Yes - I understand that it is unbalanced and, like you, I believe that it is better for pools to be balanced. What I am unclear about is what the impact actually is in reality if your vDevs are unbalanced.
According to the Reddit page you linked to, ZFS has used a different space allocation approach since v0.7 (which was some time ago - the current version is v2.3). I don’t know how the current algorithm works - I suspect that this is part of it, but not the whole picture.
As usual, you have to be VERY careful when you analyse results to draw conclusions.
Disk performance has three entirely separate bottlenecks - seeks, IOPS for lots of small I/Os, and throughput for large I/Os.
Multiple vDevs give you better IOPS performance but not better throughput performance, and I suspect that this test was throughput-limited rather than IOPS-limited.
Throughput (i.e. actual data written) for RAIDZ depends on the number of data disks (excluding parity), so (assuming that all the disks in the 13x RAIDZ2 and the 2x 8x RAIDZ2 have similar throughput characteristics, similar seek characteristics, similar fragmentation, an even spread of existing data and …) …
A 13x RAIDZ2 has 11x throughput. 2x 8x RAIDZ2 has 12x throughput.
So yes - you would think that the 2x 8x RAIDZ2 would have c. 9% better throughput than the 1x 13x RAIDZ2, and so something was skewing the results for these tests. However, there is IMO no indication that this is caused by the 2x vDevs being disks of different sizes - it could be different cache size, it could be different rotational speed, it could be something else.
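For reference, the ~9% expectation is just the ratio of data-disk counts - a quick sketch, under the same like-for-like assumptions as above:

```python
# Streaming throughput scales roughly with the number of data disks.
data_disks_13wide = 13 - 2          # single 13-wide RAIDZ2 -> 11 data disks
data_disks_2x8wide = 2 * (8 - 2)    # two 8-wide RAIDZ2 vdevs -> 12 data disks

gain = data_disks_2x8wide / data_disks_13wide - 1
print(f"~{gain:.1%} more large-I/O throughput for the 2x 8-wide layout")  # ~9.1%
```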
And in any case, 2x 8x vDevs with disks of different sizes is a different kind of skewing than e.g. a 12x RAIDZ2 vDev striped with an 8x RAIDZ2 vDev.
The impact is on safety/resiliency, and on spending extra for no result.
With a raidz level imbalance, you have spent an extra drive for resiliency in the raidz2 vdev for little actual benefit, as the pool resiliency is limited by the raidz1 vdev: Lose two drives here, lose all.
With raidz width imbalance, it is less severe but still: The narrower vdev would resilver faster than the wider vdev, but what’s the benefit? The worst case scenario is still losing drive(s) in the wider vdev.
Well, whilst we share the belief that balanced, equal vDevs are best, my views on the pros and cons differ:
Mixed RAID levels - a mixed RAIDZ1 / RAIDZ2 pool will still benefit from the additional redundancy on the RAIDZ2 vDev. Yes, if you lose 2 drives in the RAIDZ1 vDev then you lose the pool, so it would be better to have RAIDZ2 everywhere; but if you lose 2 drives in the RAIDZ2 vDev then you still have a pool, so there is some benefit.
But I agree, if you want to safeguard against 2 drives failing simultaneously in all combinations including both in the same vDev, then you need RAIDZ2 across the board.
Mixed widths of data disks (excluding redundancy) - what this creates is different full-stripe sizes across the 2 vDevs - an 8x RAIDZ2 holds 24KB of data per full stripe, a 12x RAIDZ2 holds 40KB (assuming 4KB sectors). I have no idea how ZFS handles this, and thus no way of even guessing what the performance impact might be.
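For what it's worth, here is a rough sketch of how I understand RAIDZ allocates a single record (data sectors, plus parity per row, padded to a multiple of parity+1). It assumes ashift=12 and is only an approximation of the real on-disk logic, but it does show how the two widths carry different overhead for the same record:

```python
import math

# Approximate RAIDZ allocation for one record, assuming ashift=12 (4 KiB sectors):
# parity sectors are added for each row of data sectors, then the total is padded
# up to a multiple of (parity + 1). A simplification, not a definitive model.
SECTOR = 4096

def raidz_allocated_kib(record_bytes: int, width: int, parity: int) -> int:
    data = math.ceil(record_bytes / SECTOR)        # data sectors needed
    rows = math.ceil(data / (width - parity))      # stripes the record touches
    total = data + rows * parity                   # plus parity sectors
    total += (-total) % (parity + 1)               # pad to a multiple of parity+1
    return total * SECTOR // 1024

for width in (8, 12):
    alloc = raidz_allocated_kib(128 * 1024, width, 2)
    print(f"{width}-wide RAIDZ2: a 128 KiB record occupies about {alloc} KiB on disk")
# -> roughly 180 KiB vs 168 KiB, i.e. different space efficiency and I/O pattern
```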