Mirror VDEV Configuration with Mixed PCIe Gen 4 & 5 NVMe SSDs

Hello,

I have three SSDs: two Kioxia CD8 (15 TB each) and one Kioxia CM7 (15 TB).
Is it safe to use these three drives together in a mirror VDEV?

Aside from potential performance loss, are there any other implications I should be aware of? (Most important is reliability: self-healing, etc.)

Thank you!

A 3-way mirror will perform as follows in normal circumstances:

  • Reads - 3x IOPS and 3x throughput of a single drive
  • Writes - 1x IOPS and 1x throughput of a single drive

And you will get double redundancy against both complete drive loss and bitrot of individual files.
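
If you do go with all three drives, the layout is a single mirror vdev containing all three members. For reference, a minimal sketch of the raw zpool equivalent (TrueNAS would normally build this from the web UI; the pool name "tank" and the by-id paths are placeholders):

    # one mirror vdev with three members; any single member holds a full copy of the data
    zpool create tank mirror \
        /dev/disk/by-id/nvme-KIOXIA_CD8_SERIAL1 \
        /dev/disk/by-id/nvme-KIOXIA_CD8_SERIAL2 \
        /dev/disk/by-id/nvme-KIOXIA_CM7_SERIAL3

    # confirm the layout and redundancy
    zpool status tank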


Safe, yes.

But why? Do you need the extra read performance? Would you be better off with a 2-way mirror plus a hot spare [that way you are not putting write cycles on the third drive]?

If the SSDs are all new I like to mix things up so that they do not all accumulate writes at the same rate. For example, set up a 2-way mirror with a hot spare, then after 6 months remove the hot spare and use it to replace one of the mirrored pair. Use the removed SSD as the new hot spare. This will ensure that you have three SSDs, each with a different amount of writes.
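
Roughly, in raw zpool terms (the pool name and device names are placeholders; on TrueNAS you would do the same from the UI):

    # start with a 2-way mirror plus a hot spare
    zpool create tank mirror DRIVE_A DRIVE_B spare DRIVE_C

    # ~6 months later: resilver the spare in place of one mirror member
    zpool replace tank DRIVE_A DRIVE_C
    # once the resilver completes, detach the old member so the spare becomes a permanent member
    zpool detach tank DRIVE_A
    # then re-add the retired drive as the new hot spare
    # (may need -f or a zpool labelclear first, since it still carries old labels)
    zpool add tank spare DRIVE_A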


The consequences of this depend on the activity.

If the SSDs are quite full and used predominantly for reads, the resilvering writes may actually cause more wear than leaving them as part of the mirror.

What kind of system are you using, and what are the specs? It’ll matter more with NVMe.

Understood, thank you!

Great idea, I am considering it, thank you!

I am not that intense with them, great point!

It is an AMD EPYC Genoa 9174F with 384 GB of RAM. TrueNAS runs on top of Proxmox, and the drives are passed through to it.

A three-way mirror is not going to help you when/if Proxmox eats the drives, because you used direct drive passthrough instead of the recommended HBA passthrough with blacklisting in Proxmox.

I did not use HBA passthrough; I passed the drives directly. Proxmox can’t access the drives anymore, only TrueNAS can.

I am not using an HBA; the motherboard (Supermicro H13SSL) has MCIO connectors and I used those. I populated the PCIe slots with graphics cards & NICs.

These are NVMe drives: they are their own “HBA”.
NVMe drives are passed through as individual PCI devices, and the controllers should then be blacklisted on the host.
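
For reference, a minimal sketch of what that looks like on the Proxmox host (the device ID, VM ID, and PCI address below are placeholders; look up the real values with lspci -nn):

    # find the NVMe controllers and their [vendor:device] IDs
    lspci -nn | grep -i nvme

    # bind them to vfio-pci at boot so the host's nvme driver never claims them
    # (1e0f is Kioxia's PCI vendor ID; replace XXXX with the device ID from lspci -nn)
    echo "options vfio-pci ids=1e0f:XXXX" >  /etc/modprobe.d/vfio.conf
    echo "softdep nvme pre: vfio-pci"     >> /etc/modprobe.d/vfio.conf
    update-initramfs -u

    # then pass each controller through to the TrueNAS VM as a PCI device
    qm set 100 --hostpci0 0000:41:00.0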


I specified “If the SSDs are all new”, so they should be starting empty.

If the workload fills the mirror and then drops back to mostly reads, then, at worst, replacing a mirror component with the hot spare will put the same writes on the hot spare.

Both the CM7 and the CD8 listed by the OP are rated for 1 DWPD (-R variants) or 3 DWPD (-V variants). So unless the OP is writing 15 TB per day and has the -R (read-intensive) version, they are well within the usage spec.
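
To put rough numbers on that (assuming the 15.36 TB capacity point and the usual 5-year warranty window): 1 DWPD is about 15 TB of writes per day, i.e. roughly 15.36 TB × 365 × 5 ≈ 28 PB written over the rated life, and the -V variants triple that.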

I have to admit I am not using these drives to their full potential (they are small and quiet compared to any HDD). What I expect of them is to be reliable and to saturate 2 GB/s in most cases.

What I care most about is to make sure that I won’t lose data.

The blacklisting part is something that I omitted, thank you for mentioning it.


Thank you for the correction!