I currently run a mirror vdev with two 2 TB Samsung 970 EVO Plus for my VMs, which are hosted on a separate machine with a direct 10 Gbps connection via SFP+. The SSDs sit in an Asus Hyper M.2 X16 Card V2 in a Supermicro X9SRi-F with 4x4 bifurcation enabled.
The pool is above 70% usage and I would like to extend it with another mirror.
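For reference, extending the pool with a second mirror vdev is a single command. A sketch assuming the pool is called `tank` and the new drives appear as `nvme2n1`/`nvme3n1` (hypothetical names, check yours with `lsblk` first):

```shell
# Dry run (-n) prints the resulting pool layout without changing anything.
zpool add -n tank mirror /dev/nvme2n1 /dev/nvme3n1

# Add the new mirror vdev for real, then verify both mirrors are ONLINE.
zpool add tank mirror /dev/nvme2n1 /dev/nvme3n1
zpool status tank
```

Note that existing data stays on the old mirror; ZFS only balances new writes across vdevs.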
The VMs are mostly low usage but business-critical. So reliability trumps performance by a wide margin, when it comes to requirements.
Does anyone have experience with either model or can recommend something else?
Did Samsung iron out the kinks they introduced with the 980 series? I am very fond of the 970 EVO Plus, too, because, while not the most modern design, they have good write endurance and deliver consistent performance.
The first tests of the 980s by the regular pundits showed write performance dropping dramatically during continuous writes once a certain internal cache filled. I have avoided them until now.
It’s funny that you can still get the 970 EVO Plus new. Only downside: they only go up to 2 TB.
Recommendations? Until now my policy has been to prefer manufacturers who build their own chips as well as the devices, which to my knowledge are Intel/Solidigm, Micron/Crucial and Samsung. That’s what I used in recent years, at least. Not a single disaster yet.
You’re forgetting Kioxia (Toshiba). I’ve nothing bad to report about their XG (consumer) and XD (DC) drives, but I haven’t stressed them much. Worst abuse might have been non-accelerated Chia plotting some time ago, and a XD5 won over a Samsung PM1725 at that game.
As to the 990 EVO Plus, the specified write endurance is on par with the 970 EVO Plus. The latter, while still available, has risen considerably in price: I bought mine for 118 Euros (incl. VAT) per 2 TB in November 2023, and they are now more in the 160+ Euros range.
I currently tend to go for the 990 EVO Plus (2×4 TB).
If the rebalance script is used, wouldn’t the pool’s “write cache” essentially be the sum of the current drive type plus the new drive type? I.e. it doesn’t matter what is purchased, because the write cache will always grow with the addition of each mirror vdev? So the actual write cache of the new drives is less important? And if that is true, would it not follow that drives with different capabilities could become unbalanced, leading to a degradation of performance in the long run, and that similar-quality drives would be preferable when expanding a pool of mirrors?
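For context, the rebalance scripts in circulation work roughly like this per file: rewriting a file makes ZFS allocate its blocks fresh, striping them across all vdevs, old and new alike. A minimal sketch of the core trick (real scripts add checksum verification and attribute handling; `bigfile` is a placeholder name):

```shell
# Copy-then-replace forces ZFS to reallocate the file's blocks,
# spreading them across every vdev currently in the pool.
cp -a bigfile bigfile.rebalance
mv -f bigfile.rebalance bigfile
```

This rewrites every byte once, so it does pass through each new drive's SLC cache, but only during the one-off rebalance run, not during normal operation.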
I am not considering the pool-wide effect in ZFS yet. My point for now is that, starting with the 980 (to my knowledge), Samsung began building their SSDs with a tiered architecture: a fast SLC flash cache in front of a lot of QLC (?) storage.
When writing large amounts of data in a continuous fashion, write performance tanks once the SLC cache fills.
Whether that is tolerable for your particular application is something you need to decide for yourself. It’s probably fine for a home NAS. It’s not for our data centre and our proServer cloud architecture, where we have been relying on “prosumer” SSDs for the last couple of years quite successfully - no incident caused by SSD failure so far.
But drives that do not maintain their write performance regardless of how long you write to them are something I’d rather avoid.
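That cliff is easy to reason about with a toy model (illustrative numbers, not any drive’s real specs): once a sustained write exceeds the SLC cache, the average throughput is pulled toward the much slower post-cache rate.

```python
def avg_write_throughput(total_gb, cache_gb, cache_speed, post_speed):
    """Average MB/s for one long sequential write on a tiered SSD.

    cache_speed: MB/s while the SLC cache absorbs writes
    post_speed:  MB/s once the cache is full (direct-to-TLC/QLC)
    Simplified model; real drives also fold the cache in the background.
    """
    if total_gb <= cache_gb:
        return cache_speed
    t_cache = cache_gb * 1024 / cache_speed           # seconds spent in cache
    t_post = (total_gb - cache_gb) * 1024 / post_speed
    return total_gb * 1024 / (t_cache + t_post)

# Hypothetical drive: 100 GB SLC cache at 5000 MB/s, then 1500 MB/s sustained.
# A 1 TB restore averages close to the post-cache rate, not the headline one.
print(round(avg_write_throughput(1000, 100, 5000, 1500)))  # ≈ 1613 MB/s
```

For a VM restore or a resilver, it is that post-cache number that matters, which is exactly why sustained-write consistency belongs in the requirements.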
990 EVO Plus: DRAMless, TLC. Avoid. TBW will be fine, but that lack of DRAM can crimp you in heavy write or heavy random read/write applications.
The 990 Pro has DRAM and is TLC (as were the 970 EVO Plus and the 980 Pro); that one is fine. Yes, do make sure it has the latest firmware so it doesn’t destroy itself :harold
My rule of thumb is: TLC and DRAM. TLC for TBW; whatever slight price advantage QLC has is gone when the drive fails early. DRAM for sustained IOPS under heavy write; why limit potential applications over a 25-50 USD price difference?