Update: See the marked solution. My initial logic was heavily flawed; another win for "there is a reason the best practices are what they are".
Hi all, new to TrueNAS (using 25.10). I am well aware that partitioning drives is neither best practice nor fully supported by TrueNAS. Still, I am having a hard time seeing why it would be the wrong choice for a homelab (which, I understand, is not the TrueNAS target customer). Partitioning would give me very high performance for part of the storage and bulk capacity for the rest.
I put together a system with 12 HDDs (10 TB enterprise SATA 7200 RPM drives), 4 SSDs, and 4 NVMe drives, with 192 GB of RAM. The use is mixed: a few VMs, several Docker containers, and a much larger amount of file storage that generally has low throughput requirements.
For performance and redundancy, 9 of the HDDs are put into mirrored groups of 3 and then striped. In a standard configuration this gives roughly 27 TB of usable high-performance storage. Given somewhat slow internet speeds, keeping some larger datasets locally is beneficial, so 27 TB isn't a huge amount of space. The alternative is to partition the drives: first, a 2 TB partition on each drive used for VMs and databases in the mirror layout described above, resulting in 6 TB of high-performance read/write storage; second, a 7 TB partition on each drive put into a RAIDZ3 configuration across 10 drives (rather than just the 9 mirrored ones) for 49 TB of usable bulk space.
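Concretely, the layout I have in mind looks something like this (a sketch only; device names like sda are placeholders, stable /dev/disk/by-id paths would be better in practice, and the pool names fast/bulk are my own):

```
# Partition the 9 drives that carry both pools: 2 TB for the fast pool,
# 7 TB for the bulk pool (BF01 is the ZFS partition type code).
for d in sda sdb sdc sdd sde sdf sdg sdh sdi; do
  sgdisk -n 1:0:+2T -t 1:BF01 /dev/$d
  sgdisk -n 2:0:+7T -t 2:BF01 /dev/$d
done
# The 10th drive only joins the bulk pool, so it only gets the 7 TB partition.
sgdisk -n 2:0:+7T -t 2:BF01 /dev/sdj

# Fast pool: three striped 3-way mirrors on the 2 TB partitions (~6 TB usable).
zpool create fast \
  mirror sda1 sdb1 sdc1 \
  mirror sdd1 sde1 sdf1 \
  mirror sdg1 sdh1 sdi1

# Bulk pool: RAIDZ3 across the 7 TB partitions on all 10 drives (~49 TB usable).
zpool create bulk raidz3 sda2 sdb2 sdc2 sdd2 sde2 sdf2 sdg2 sdh2 sdi2 sdj2
```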
Negatives of this approach:
- The TrueNAS UI doesn't support partitions anywhere. It cannot be used for pool creation, but it is easy enough to create the pools from the CLI, export them, and then import them through the UI. Hot spares cannot be set up via the UI either, but adding one via the CLI seems to work (see the sketch after this list).
- I am not sure about potential minor performance degradation when the RAIDZ pool is under load, given that the two pools share the same physical drives. ZFS, and in particular my basic understanding of the ZIO scheduler, assumes each vdev is a dedicated disk.
- Scrubs should be staggered so the two pools are never scrubbed at the same time (see the cron sketch after this list).
- You can't really prioritize one pool over another (as far as I'm aware), meaning low-priority IO on the RAIDZ pool could hamper higher-priority IO on the fast pool.
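For the first point, the CLI workaround is short (again a sketch; bulk and sdk are placeholders, and the spare's partition must be sized like the other bulk members):

```
# Create the pool from the shell, then hand it to the UI.
zpool export bulk
# (import it via Storage -> Import Pool in the TrueNAS UI)

# Add a hot spare from the CLI, since the UI won't attach partitions;
# sdk2 is a 7 TB partition on the spare drive, created with sgdisk as above.
zpool add bulk spare sdk2
```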
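For the scrub staggering, two cron entries are enough (illustrative only; TrueNAS has its own scrub scheduler in the UI, which is the cleaner place to set this):

```
# Scrub the fast pool on the 1st of each month and the bulk pool on the
# 15th, so the shared drives never serve two scrubs at once.
0 2 1 * *   zpool scrub fast
0 2 15 * *  zpool scrub bulk
```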
Positives:
- 55 TB of usable space (6 TB fast + 49 TB bulk), roughly double the 27 TB of a pure mirror configuration
- The 49TB pool can take 3 full drive failures without data loss
- Things needing the absolute best performance still get that with the fast pool
An alternative would clearly be to use different drives for different pools, but splitting the spindles between pools would greatly reduce performance.
Further Details
Even with the worst luck, the mirroring allows two drives in the same group to fail without needing to restore from backup. The system is on a UPS and most critical data is continuously backed up remotely (although restoration would be a pain). As such, I am comfortable with async writes enabled for all storage.
The SSD’s and NVME’s are largely just existing had hardware but fairly performant. These VM’s are used as my primary desktop, a development machine, and the containers go from realtime networking applications to web servers. Performance in the VM’s is pretty important activities can include things like compiling chrome source which its 10’s (100’s?) of thousands of files and can take over half a day. Dataloss is pretty unacceptable too, while will backup semi often the internet connection is not great so remote recovery would require hd’s to be mailed or a long download. If the server burns down I accept it will take me awhile to restore.
Given sync=disabled there is no need for a SLOG device. The NVMe and SATA SSDs are not all the same models, as they were just on hand, but they have decent speeds. The NVMe drives would potentially be good as a mirrored special vdev; the SSDs maybe as an L2ARC.
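Roughly what I have in mind (a sketch; device names are placeholders, and note that a special vdev cannot be removed again from a pool with RAIDZ vdevs, so it is a one-way door):

```
# Disable sync writes, accepting the risk discussed above.
zfs set sync=disabled fast

# Mirrored special vdev for metadata (and optionally small blocks)
# on two of the NVMe drives.
zpool add bulk special mirror nvme0n1 nvme1n1

# L2ARC on one of the SSDs; cache devices need no redundancy and
# can be removed at any time.
zpool add bulk cache sdm
```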
Yes, I did say 12 drives and only talk about 10. At least one I will likely keep as a hot spare, 12 is something of a hard limit, and mirror groups of 3 only divide evenly into 9 or 12 drives, so I will likely only use 9 or 10 of them.