I have 56 × 9.1 TB drives, 1 × 2 TB NVMe, and 2 × 500 GB SSDs (OS). I’m trying to find the sweet spot of performance and capacity (like everyone). 80% of my data is media and 10% is VMs, then some miscellaneous documents and such. Sitting at about 150 TB consumed as of now. I would also like to have a spare in the pool.
I’m still in the process of removing enough disks. I’m getting close to the minimum I need to start migrating, but I’m not sure what layout to use. I’m hesitant about a bunch of mirrors, as I’m hoping not to end up with that “low” a capacity.
Capacity-wise, 4 × 14-wide RAIDZ3 vDevs should give you 70% or so usable capacity with a decent amount of redundancy. RAIDZ2 would be closer to 80% (but with an array that large I’d be less comfortable using it).
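Just to sanity-check those percentages, here’s a rough parity-only estimate (ZFS padding, slop space, and the usual ~80% fill rule all shave more off, which is why the quoted usable figures land lower than the raw stripe efficiency):

```python
# Parity-only stripe efficiency for a single RAIDZ vdev.
# Real usable space is lower once padding/slop/fill-rule overhead is counted.
def stripe_efficiency(width, parity):
    """Fraction of raw capacity left after parity in a RAIDZ vdev."""
    return (width - parity) / width

z3 = stripe_efficiency(14, 3)  # 14-wide RAIDZ3
z2 = stripe_efficiency(14, 2)  # 14-wide RAIDZ2
print(f"RAIDZ3 14-wide: {z3:.0%}")  # ~79% before overhead
print(f"RAIDZ2 14-wide: {z2:.0%}")  # ~86% before overhead
```

After overhead and the 80% fill rule, ~79% raw lands around the quoted 70% usable, and ~86% around 80%.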
Network data: c. 135 TB of media files; VMs: c. 15 TB.
A few questions about your requirements to help decide what to do with the SSDs/NVMe:
How do you access the data over the network? SMB or NFS? (Are your network writes Async or Sync? Will an SLOG help? If media files are rarely written, probably not.)
How much memory will your server have after you have used some for VMs? (Will an L2ARC help? For rarely accessed large media files, I am guessing not, but I may be wrong.)
15 TB of VM data is too much to fit on SSD. If these VMs were native, how would you use the SSDs?
If it were me I would buy a cheap small NVMe for the boot drive and swap out the large one to use somewhere else in the future.
My gut reaction is that it is not worth using the SSDs for either L2ARC or for SLOG for network data or for metadata storage. But I am not an expert.
I might consider using them as an SLOG for VMs if I knew for certain that it would really improve performance and if performance was important.
Alternatively I might use the 500GB SSDs as a mirror pair to hold apps and e.g. Plex/Jellyfin metadata for streaming.
For the network data, you will probably want RAIDZ2/3, perhaps in vDevs which are 7 or 8 wide (because these are divisors of 56). Given the number of drives and the likelihood of a drive failing, you probably want to keep back a few drives to use as global spares.
I don’t have any experience with VMs on SCALE, so not sure whether you should use a small RAIDZ or mirrors for the VM data (my gut reaction would be to use mirror pairs), and also not sure whether an SSD SLOG mirror would help with VM writes.
With spares in mind, perhaps 6 × 9-wide RAIDZ2 vDevs would be most ideal, keeping two drives as hot spares to replace any failures as and when. (RAIDZ3 is of course ideal, but if capacity is important it becomes a balancing act.)
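For what it’s worth, a quick back-of-the-envelope on that layout (9.1 TB drives, parity overhead only, before padding/slop and the 80% fill rule):

```python
# 6 x 9-wide RAIDZ2 with 2 of the 56 drives held back as hot spares.
# Parity-only math; actual usable capacity will be somewhat lower.
vdevs, width, parity, drive_tb = 6, 9, 2, 9.1
drives_used = vdevs * width                      # 54 drives in vdevs, 2 spares
data_tb = vdevs * (width - parity) * drive_tb    # ~382 TB of data space
print(drives_used, f"{data_tb:.1f} TB")
```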
I typically keep VMs on mirrored NVMes for better read IOPS (which from what I’ve read previously is generally recommended for block storage)
Hosting VMs = mirrors, under 50% use
For 15 TB, that would be eight (ten) drives in four (five) 2-way mirrors. Make that twelve/fifteen drives for safer 3-way mirrors.
That leaves 41-48 drives for media, in raidz2 or raidz3, under 80% use. With 9 TB drives:
5 * 8-wide raidz2 = 270 TB raw, and some spares
4 * 11-wide raidz3 or 10-wide raidz2 = 288 TB raw
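The raw-data math behind those two layouts checks out (parity overhead only, 9 TB drives; usable space will be lower after ZFS overhead and the 80% rule):

```python
# Data capacity of N identical RAIDZ vdevs, parity overhead only.
def raidz_data_tb(vdevs, width, parity, drive_tb=9):
    return vdevs * (width - parity) * drive_tb

print(raidz_data_tb(5, 8, 2))    # 270 TB, using 40 drives
print(raidz_data_tb(4, 11, 3))   # 288 TB, using 44 drives
print(raidz_data_tb(4, 10, 2))   # 288 TB, using 40 drives
```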
My data is accessed via iSCSI, with a couple of SMB/NFS shares.
At the moment I’m limited on SSDs and the NVMe. The two in the system are already used for the OS. The NVMe is currently used as the cache, and it’s 3,200 GB. All the HDDs are SAS. The chassis is a UCS 3260, so I’m limited on regular drives and PCI adapters.
Jeez, just looked up the chassis. Makes my little lab look like a glorified USB in comparison!
I’ll keep spirits high by remembering it’s not about the size of your storage… it’s about what you do with it
Your uses call for two pools with two different geometries. The mirror pool can evolve at will, but the raidz# bulk storage pool requires an initial critical decision about vdev geometry.
Then you can create at least the first vdev, move data into it, remove drives from the old pool, add another raidz vdev, rinse and repeat. (And possibly run a rebalance script at the end.)
For the network data / media files, use several RAIDZ2 vDevs combined into a single pool, because you don’t want too wide a vDev, otherwise resilvering will take forever.
I was looking at a raidz calculator. Using 4 drives for my VM pool leaves 52 drives for the media pool, with 2 reserved as spares. Would 5 × 10-wide work? Or is that too wide? That would give me 318 TiB of capacity.
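Rough check on that 5 × 10-wide RAIDZ2 idea with 9.1 TB drives. Parity-only math gives an upper bound of ~331 TiB; a raidz calculator’s 318 TiB figure is lower because it also subtracts padding and slop space:

```python
# 5 x 10-wide RAIDZ2, 9.1 TB drives, parity overhead only.
vdevs, width, parity, drive_tb = 5, 10, 2, 9.1
data_tb = vdevs * (width - parity) * drive_tb   # 364 TB of data space
data_tib = data_tb * 1e12 / 2**40               # ~331 TiB before ZFS overhead
print(f"{data_tb:.0f} TB = {data_tib:.0f} TiB")
```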