Drop Kris and me a line - t3@truenas.com - and we’ll see if we can do a viewer-questions-heavy show. There are several caveats around sVDEV use (as this thread’s existence attests!) and using it safely.
Didn’t realize you were Chris from T3
Wasn’t gaslighting. I swear.
Email sent.
He’s not Chris from T3.
The guy on the T3 videos, who calls himself “Chris”, is @HoneyBadger from the TrueNAS forums.
Hi, a newbie question here. If I set up a special vdev as a 3-way mirror, can I later replace the disks one by one with bigger ones and expand it? Just want to make sure before making the move. Thanks!
I believe that’s the expected behavior. @NickF1227, can you confirm?
It should be, just like any other vdev.
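For what it’s worth, the replace-and-expand flow looks like this on the command line. This is a sketch only: the pool name `tank` and the device names are placeholders, so adjust for your own system.

```shell
# Allow vdevs to grow automatically once all members are larger
zpool set autoexpand=on tank

# Replace one mirror member at a time with a bigger disk,
# waiting for each resilver to finish before the next swap
zpool replace tank sdX sdY   # sdX = old disk, sdY = new bigger disk (placeholders)
zpool status tank            # check that resilvering has completed

# Repeat for the remaining members; capacity grows after the last one.
# If autoexpand was off during the swaps, expand manually per device:
zpool online -e tank sdY
```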
@Constantin, I think you should add L2ARC tag to this topic.
That is a good idea. I think I will break it out as a separate resource, because otherwise this post likely becomes too long.
Is it still recommended to use Optane drive over using a PCIe 5.0 SSD? I was thinking something like the Samsung 9100 Pro.
For what use?
Also, you can’t just say Optane, as it comes in different flavours - M10, 900p/905p, P1600X, etc. - which have different performance characteristics.
For a SLOG, the 900p/905p or P1600X (or better) are really good: endurance, IOPS, and latency are all superb. Perfect for a SLOG. For anything else I would use the Samsung.
Like @NugentS, I wonder why the focus on Optane for an sVDEV? The point of Optane is extreme write-tolerance, which makes it perfect for a SLOG that is written to relentlessly but (hopefully) rarely read. How data-center-centric your sVDEV drives should be re: wear tolerance is largely a question of use case.
sVDEVs with a lot of changing metadata / small file blocks (e.g. databases housed entirely in the sVDEV) should likely have a good deal of wear resistance. Pools that are more WORM-like are likely OK with much less wear-resistant SSDs. I chose datacenter SSDs from Intel for my pool’s sVDEV (despite it being largely WORM) because I like to sleep well at night. It’s the same reason I went Z3 when Z2 likely would have been good enough.
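As a side note on the small-file-block case: whether small blocks land on the sVDEV at all is controlled per dataset by the `special_small_blocks` property. A minimal sketch, assuming a pool named `tank` and an illustrative dataset name:

```shell
# Blocks at or below this threshold are stored on the special vdev.
# "tank/databases" is an illustrative dataset name.
zfs set special_small_blocks=64K tank/databases

# Verify the threshold against the dataset's recordsize
zfs get special_small_blocks,recordsize tank/databases
```

Keep the threshold below the dataset’s `recordsize`; if it is set equal to or above it, effectively all of that dataset’s data blocks will land on the sVDEV.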
One thing I would look for in all your SSDs, regardless of use case, is whether they use a faster flash cache up front that is periodically flushed to slower flash behind it. Those kinds of drives can really choke under sustained load, whether it’s an sVDEV, a SLOG, or even an SSD pool for VMs or whatever. You really want to avoid these kinds of SSDs in a NAS.
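One hedged way to spot that “front-loaded” behavior before trusting a drive is a sustained sequential write with `fio`, sized well past the advertised cache. The file path and 200G size here are placeholders; point it at the drive under test.

```shell
# Sustained write test to reveal a pSLC "cache cliff".
# /mnt/scratch/testfile is a placeholder path on the drive under test;
# size the run well past the drive's advertised cache.
fio --name=sustained-write --filename=/mnt/scratch/testfile \
    --rw=write --bs=1M --size=200G --ioengine=libaio --direct=1 \
    --iodepth=16 --log_avg_msec=1000 --write_bw_log=sustained-write
```

Plot the resulting bandwidth log afterward: a drive with a fast front-end cache typically shows an abrupt throughput drop once the pSLC region fills, while a consistent drive holds a flat line.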
It’s not just the uneven performance: there’s also the potential for data to still be stuck in the volatile front-end cache and get wiped out by a problem (power loss, system halt, whatever) even though ZFS thought it was already written to the drive. That’s the problem with devices lying to ZFS about a write being complete when it’s not, and it’s also why proper SLOG devices feature PLP, to ensure the transactions on them can be written to the pool when the system starts up again.
I will try to address the small block issue tomorrow, it requires a re-write of the above. Then I will also try to address a separate resource page on L2ARC.
Endurance is one factor that makes them recommended, but Optane is also 1) extremely fast at the media level (3D XPoint rather than NAND), meaning it skips the pSLC and “front-loaded” performance issues you mention below, and 2) close to effectively immune to performance drops under mixed workloads vs. regular NAND.
Unfortunately Intel stopped making anything Optane in 2022/23.
https://www.servethehome.com/intel-optane-559m-impairment-with-q2-2022-wind-down/
SCM, which is a similar idea, is available from manufacturers like Kioxia:
https://americas.kioxia.com/en-ca/business/ssd/enterprise-ssd/fl6.html
Allegedly Samsung is reviving Z-NAND with six products planned. FWIW.