To dRAID or RAIDZ2? (this question again...)

Hey all, I have a Supermicro box I’m building: 2x Xeon 6244 (8 cores, 3.6 GHz) and 60 data drives, all SAS, in various capacities from 10 to 14 TB; 2x 480 GB Optane P5800X for the ZIL/SLOG, 2x 6.4 TB mixed-use (MU) drives for the special vdev, a 16 TB drive for L2ARC, and 768 GB of RAM (2x 64 GB SATA DOMs for boot). All the SSDs are NVMe.

dRAID certainly seems appealing; however, this will be used as a VMware datastore with large file servers on top. I’m not sure how NFS writes from VMware will behave on dRAID, or whether they’ll fill up the special vdev. I’m about to find out, though, and lab it up…

Or do I just stay with the classic, tried-and-true RAIDZ2?

What’s the take on this?
Of course I am labbing it up and running things on top for curiosity’s sake, and I’ll report back with what I find.

That’s a pointer to NOT use dRAID, but to make vdevs from drives of matched sizes.

Meaning virtual machines? Zvols?

How many spares do you plan?


For that scenario I would prefer mirrors any day.


I took a combination of the advice, made a quick Excel sheet, and with all the numbers landed on 7x 8-wide RAIDZ2 with 4 hot spares, plus the 400 GB Optanes as a mirrored log, the 16 TB for cache, and the 2x 6.4 TB for metadata. This should be spicy enough for me.
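For anyone who wants to sanity-check the spreadsheet math, here is a rough sketch of the same arithmetic in Python. The 10 TB per-drive figure is an assumption (the smallest size in the 10-14 TB mix); the real total depends on how the mixed sizes get grouped into vdevs, and this ignores RAIDZ allocation padding and reserved space.

```python
# Back-of-the-envelope for the chosen layout: 7 vdevs x 8-wide RAIDZ2 plus
# 4 hot spares from the 60-drive shelf. Drive size is an assumption (10 TB,
# the smallest in the mix); ignores RAIDZ padding and reserved space.

TIB_PER_TB = 1000**4 / 1024**4     # marketing TB -> TiB

vdevs, width, parity, spares = 7, 8, 2, 4
drive_tb = 10                      # assumed per-drive size

data_drives = vdevs * width                        # 56 drives in vdevs
raw_tb      = data_drives * drive_tb
usable_tb   = vdevs * (width - parity) * drive_tb  # data columns only

print(f"drives used:      {data_drives + spares} (incl. {spares} spares)")
print(f"raw in vdevs:     {raw_tb} TB")
print(f"usable (approx.): {usable_tb} TB  (~{usable_tb * TIB_PER_TB:.0f} TiB)")
print(f"efficiency:       {usable_tb / raw_tb:.0%} before overhead")
```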

It’s for large sequential block writes (multi-client DR target) until, of course, restores or transfers need to happen; mirrors were just way too much space for me to lose.

Will benchmark and report back.

Thanks!

Recommended size for L2ARC is 5 times to, at most, 10 times RAM. You actually do not have enough RAM for what you intend. By half.
But with 768 GB, it is doubtful you need L2ARC at all…
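Taking that 5x-10x rule of thumb at face value, the arithmetic works out like this (a rough sketch; the rule itself is only a guideline):

```python
# The 5x-10x-RAM rule of thumb quoted above, applied to this build.
ram_gb   = 768
l2arc_tb = 16

lo_tb, hi_tb  = 5 * ram_gb / 1000, 10 * ram_gb / 1000  # suggested L2ARC range
ram_needed_gb = l2arc_tb * 1000 / 10                   # RAM implied by 16 TB at the 10x limit

print(f"suggested L2ARC for {ram_gb} GB RAM: {lo_tb:.2f}-{hi_tb:.2f} TB")
print(f"RAM needed for a {l2arc_tb} TB L2ARC at 10x: {ram_needed_gb:.0f} GB "
      f"(vs {ram_gb} GB installed)")
```

Which is where the "by half" comes from: a 16 TB L2ARC at the 10x limit implies roughly 1.6 TB of RAM against the 768 GB installed.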


Out of interest, are you planning on running Windows Server in a VM (or VMs) and then serving the TrueNAS storage out as file shares?

If so, why not cut out the middleman and just share directly via TrueNAS?

Or are you planning on running non-file-share VMs in the datastore as well?

Because, funny enough, we are the middleman: it’s a DR target for multiple clients, and thus abstraction and isolation are required.

It’s for mixed-mode usage, which is why the server and cache are slightly overkill. We came from multiple servers with a max of 2 vdevs each to this box with 7, and have already noticed massive differences in performance and latency in stress testing.

One of the VMs is nearly 60 TB, and with that much data I prefer the ease of a NAS over a SAN, hence no zvols either.

Ah I see, cool. Will you use replication to back up the server to another similar one?

If they opt for it, yes. We’ll have two datasets within the pool, one with replication and one without, with different tags in vCenter and VCD.

That’s part of the other reason for going NAS over SAN: it makes recovery and volume shrinking MUCH easier.


For others reading this, dRAID has certain limitations:

  • All writes allocate a full stripe width, even ones that don’t need that much space (see the sketch below for rough numbers). I’m not sure whether the padding blocks actually have to be written, because of parity…
  • dRAID’s main selling point is integrated spare devices and high-speed restoration of redundancy after a disk failure. So if few or no spares are desired, then dRAID is perhaps not suitable for that purpose.
  • You cannot grow a dRAID parity group by adding a column / disk. Any dRAID expansion means adding another dRAID vdev, including its own parity and integrated spare(s).

While it is nice that dRAID was added to ZFS, people should be aware of all the warts, limitations, and benefits of each type of vdev.
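To put rough numbers on that first bullet, here is a toy sketch comparing how much space a single block consumes under dRAID’s fixed stripe width versus RAIDZ2’s variable stripe width. The 6 data + 2 parity geometry and 4 KiB sectors are assumptions chosen to mirror the 8-wide double-parity layout discussed in this thread, and the model ignores compression and metadata handling.

```python
# Toy model: dRAID allocates whole stripes, while RAIDZ2 uses a variable
# stripe width padded to a multiple of (parity + 1) sectors.
# Assumed geometry: 6 data + 2 parity columns, 4 KiB sectors (ashift=12).

import math

SECTOR = 4096   # bytes per sector (assumed ashift=12)
DATA   = 6      # data columns per stripe
PARITY = 2      # parity columns per stripe

def draid_sectors(logical_bytes):
    """dRAID: every allocation is rounded up to whole (data + parity) stripes."""
    data_sectors = math.ceil(logical_bytes / SECTOR)
    stripes = math.ceil(data_sectors / DATA)
    return stripes * (DATA + PARITY)

def raidz2_sectors(logical_bytes):
    """RAIDZ2: parity per row of data, then padded to a multiple of parity + 1."""
    data_sectors = math.ceil(logical_bytes / SECTOR)
    rows = math.ceil(data_sectors / DATA)
    total = data_sectors + rows * PARITY
    return math.ceil(total / (PARITY + 1)) * (PARITY + 1)

for kib in (4, 16, 128):
    size = kib * 1024
    print(f"{kib:>4} KiB block -> dRAID {draid_sectors(size) * SECTOR // 1024} KiB, "
          f"RAIDZ2 {raidz2_sectors(size) * SECTOR // 1024} KiB on disk")
```

The gap is biggest for small blocks (a 4 KiB record takes a full 32 KiB stripe on this dRAID geometry versus 12 KiB on RAIDZ2), which is part of why a special vdev for small blocks is often recommended alongside dRAID.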


As a follow-up, I’ve landed on the 7x 8-drive RAIDZ2 layout, with 4 hot spares.

dRAID had a few oddities but performed very similarly. What made RAIDZ2 fully win is the current inability to expand by adding a column / disk, as @Arwen pointed out. Might as well stick with tried and true, especially when it’s working so fast anyway (around 8 Gb/s while storage vMotioning 200 TiB onto it as an NFS datastore). I have 422 TiB usable.

And interestingly enough, the T3 YouTube video released this weekend mentioned that you no longer need 1 GB of RAM per TB of L2ARC cache.
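For context on why that old rule relaxed: the RAM cost of L2ARC is the in-core header kept for every cached block, so it scales with block count rather than a flat GB-per-TB ratio. A rough sketch, assuming on the order of 100 bytes of header per cached block (the exact figure varies by OpenZFS version):

```python
# Rough estimate of ARC header RAM consumed by a full 16 TB L2ARC, to show why
# the old "1 GB RAM per 1 TB of L2ARC" guidance depends on average block size.
# Per-block header cost is an assumption; it varies by OpenZFS version.

L2ARC_BYTES  = 16 * 1000**4   # 16 TB cache device
HEADER_BYTES = 100            # assumed in-RAM header per cached block

for recordsize_kib in (16, 128, 1024):
    blocks  = L2ARC_BYTES // (recordsize_kib * 1024)
    ram_gib = blocks * HEADER_BYTES / 1024**3
    print(f"{recordsize_kib:>5} KiB records: ~{ram_gib:,.1f} GiB of RAM for headers")
```

With large records (128 KiB and up), the header overhead stays on the order of 10 GiB even for a full 16 TB cache, which is small next to 768 GB of RAM.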
Thanks all!