Yes, that is all correct. “If one was going with a hybrid pool anyway…” “…redundant special vdev…”.
In a lot of cases I have observed that my spinning disk storage is fast enough for throughput, but I notice poor performance in things like directory listings and operations. The server has plenty of memory (64GB for a single user system, about 55GB available for cache). I’m not sure if I am game to try a special given the risks, but it’s an interesting idea.
If you have a sync-write workload and want it to perform well, then yes. The preferential order is SLOG → special → pool, and there are good odds that even a special vdev without PLP will be faster at handling sync writes than spinning pool disks.
The issue is NOT speed, but data integrity. A SLOG or special vdev used as ZIL without PLP is not data-safe for synchronous writes.
PLP does not have to be supercap-based (a mini battery). It can be achieved by writing to SLC cache or directly to the final non-volatile cells, as long as the data is in stable storage (i.e. NOT in DRAM cache) before the sync write is acknowledged.
This also implies that the in-pool ZIL (without a SLOG or special vdev) needs PLP. Flash storage was not common for data vdevs in the past, but that has changed in the last 5 or so years; I wonder if this has impacted in-pool ZIL sync-write data integrity.
If I understand zfs rewrite correctly, it does NOT re-stripe data. Some people want to use it to re-stripe RAID-Zx data after adding one or more data columns, but as far as I can tell it does not do that.
And it is not just snapshots that are impacted by zfs rewrite. It also affects block clones and hard links, both of which will consume additional storage after a rewrite, since the rewritten blocks are no longer shared.
In-flight PLP is not required for data safety - synchronous writes handle that. ZFS does not trust your disks, so it sends SYNCHRONIZE CACHE for sync writes that need to land in the ZIL (whether that's in-pool or on a SLOG) and will not reply to the client that the write is complete until the SYNCHRONIZE CACHE command completes.
A lack of PLP makes sync-write performance suck but is not required for safety.
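The performance gap is easy to demonstrate with a blunt instrument like dd: forcing a flush per write (`oflag=dsync`) on a device that honestly honors cache flushes will crater throughput compared with letting the cache absorb writes. The `/mnt/tank` path below is an assumption - point it at whichever pool you want to test.

```shell
# Compare cached vs. per-write-flushed throughput on the same pool.
# /mnt/tank is an assumed mount point -- substitute your own.
dd if=/dev/zero of=/mnt/tank/ddtest.bin bs=4k count=10000             # async: cache absorbs writes
dd if=/dev/zero of=/mnt/tank/ddtest.bin bs=4k count=10000 oflag=dsync # flush after every 4K write
rm -f /mnt/tank/ddtest.bin
```

On ZFS the dsync run forces each write through the ZIL, so the reported throughput difference is roughly the sync-write penalty you would feel in practice.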
PLP at rest on the other hand is required for all devices in ZFS - SLOG, special, or pool members - this is the guarantee that the device is self-stable with regards to wear leveling, its internal FTL, and more, and won’t spontaneously ruin something when it decides to relocate bits from physical location A to physical location B while retaining the same LBA upstream.
PLP-at-rest used to be limited to an SSD-based risk, but because HDDs didn't want to be left out, someone invented SMR to let spinning disks get in on the failure domain as well.
The window of risk opens when a drive claims to honor the SYNCHRONIZE CACHE command but firmware assumes or overrides and sets the IMMED flag on the inbound command - which counterintuitively isn’t “immediately synchronize” but “immediately return TRUE” or in other words lie about the safety of the cache flush. Thankfully said drives were mostly limited to a very early generation of SSDs from manufacturers who have since Chapter 11’d themselves into oblivion.
I bought a DRAM-less SSD for use with TrueNAS. At the time, all I thought about was that it was cheap. I didn't think it mattered, but apparently it does.
I use the SSD for installing the TrueNAS apps and OS. That said, I didn't particularly notice issues…
But AI said it defeats the purpose of using SSDs with TrueNAS when they are DRAM-less. So there is that.
My point is: make sure you use the correct SSDs for TrueNAS. Don't make my mistake.
But considering prices in Q4 2025 going into 2026, this would be too costly to undo. Since it just works right now, I've left it as is. Lesson learned for next time.
So then you'd be asking: what are DRAM-less SSDs good for? They're fine as a secondary drive for games in a gaming PC. DRAM-less NVMe drives can borrow a small slice of your system RAM in place of their missing DRAM via the NVMe Host Memory Buffer (HMB) feature - this is negotiated by the OS's NVMe driver rather than a BIOS setting, and it's usually active by default. For gaming, it would work just fine.
But for an OS drive, don't use DRAM-less. If you use something like OBS, don't use DRAM-less either.
QLC is still worse than TLC, last I checked.
As for power loss protection, all I know is that Crucial SSDs had this feature, which is why I previously bought them for NAS use. But since Micron is winding down sales of Crucial-branded SSDs, I don't know what SSD is a good choice for that now. Definitely don't get a DRAM-less SSD for TrueNAS, though.
I ordered a couple used enterprise SSD to try out a special vdev mirror. My plan is to use it for all metadata and then use the small blocks feature on a per-dataset basis to store specific datasets like databases and ix-apps on the SSD. I will report back the results.
The metadata fits but for whatever reason it is retrieved extremely slowly in some circumstances — I gave the example above of Immich thumbs which can take ages to ls -lR, but there are others. In any case I am OK with doing an experiment at under $70 delivered.
Yep. I know some people are sick of hearing me say it, but someone is reading it every time and going “well I’m sure glad I learned that before I put my data at risk”
I promised I would update once I installed the special vdevs.
I bought 2x HGST enterprise 400GB SAS SSD pulled from decommissioned servers at >95% health. The cost was under USD $70 total delivered.
On TrueNAS nightly 2026-01-31 (for ZFS 2.4.0) I built a new pool: a 2x 12TB spinning-disk mirror for data, plus the two SSDs as a special vdev mirror.
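For reference, a rough CLI sketch of the same layout - I built mine entirely in the GUI, so the pool name and device paths here are assumptions:

```shell
# 2x 12TB HDD mirror for data, 2x SAS SSD mirror as the special vdev.
# "tank" and the /dev names are placeholders for your own devices.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  special mirror /dev/sdc /dev/sdd

# Show the resulting layout; the special mirror appears as its own
# top-level vdev under the pool.
zpool status -v tank
```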
I also set special_small_blocks to 16M on these datasets to force all files onto the SSDs:
ix-apps: done after creating the apps dataset but before Docker started, downloaded any containers, or made overlay filesystems; I also ran zfs rewrite just to be sure. TrueNAS could be more helpful here by allowing a few ZFS options like special_small_blocks to be configured on ix-apps (encryption would be another good one).
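For anyone doing the same from the shell, the property is set per dataset (the dataset names below are assumptions). With the default 128K recordsize, a 16M threshold means every block qualifies as "small" and lands on the special vdev:

```shell
# Pin entire datasets to the special vdev by setting the small-block
# threshold above the dataset's recordsize. Names are placeholders.
zfs set special_small_blocks=16M tank/databases
zfs set special_small_blocks=16M tank/ix-apps

# Confirm what is set where across the pool:
zfs get -r -t filesystem special_small_blocks tank
```

Note the setting only applies to blocks written after it is set, which is why existing data needs a rewrite or copy to migrate.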
Data was migrated using cp -prv to ensure everything was rewritten.
Results:
Even with the spinning disks being absolutely hammered, directory listings and app performance are amazing. From a user-experience perspective, it feels like the system is idle until you do something data-intensive like copying files or starting a video.
Pathological example: ls -lR in the Immich thumbs directory completes in around 10s, versus not successfully completing over hours on spinning disk.
The fusion approach is elegant from a filesystem hierarchy perspective. To get the same result out of a separate SSD pool, I would need bind mounts everywhere.
Bonus: containers, their databases, and read-intensive application metadata (e.g. Immich image thumbnails) are also now on SSD, so they respond much faster.
Interesting observation: by the I/O counts shown in zpool iostat -v, when the spinning disks are fully saturated with writes, about 10% of writes (by eye, not accurately calculated) also go to the special vdev for metadata updates, freeing up a few more IOPS for the data vdev to use.
This is so far the best performance improvement I have made at minimal cost on modest hardware. It’s relatively easy to do; creating the pool with special vdev was done all in the GUI, and I only had to pick out which datasets I wanted pinned to SSD with special_small_blocks. I’m very happy so far.
Warnings so far:
Make a backup, test it before proceeding, and be prepared to use it.
TrueNAS nightlies are not even alpha releases. Don’t use nightlies unless you are ready to lose data. (Having said this I have only found minor bugs, nice job so far TrueNAS devs.)
A new pool created on TrueNAS nightlies will be incompatible with release versions until they catch up to ZFS 2.4, due to new feature flags. This is a one-way trip.
If the special vdev fails you will lose data. Make the special vdev at least as resilient as the data vdev.
The special vdev is subject to zpool's general limits on vdev removal and can only be removed if every top-level vdev in the pool is a mirror. Consider this a one-way trip as well.
The TrueNAS UI provides no real visibility into the special vdevs other than that they are there and healthy.
Avoid overfilling the special vdev. When it is full, new small blocks and metadata silently spill over to the data vdevs, defeating the purpose.
The special vdev space appears as additional pool space, so also take care to not overfill the pool.
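The per-vdev view of zpool list is the easiest way to keep an eye on both of those warnings (pool name assumed):

```shell
# CAP is reported per vdev -- watch the special mirror's row so it
# never approaches full, and the pool totals for the same reason.
zpool list -v tank
```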
System config:
Dell T330, 64GB RAM
LSI 3008 SAS connected to the T330’s internal SAS bays