Today’s TrueNAS Tech Talk brings the (tentative!) release schedule for TrueNAS 26, from BETA1 to the .0 release. Find out when you’ll be able to get your hands on the next generation of TrueNAS software, along with some of the key features it will include. Kris and Chris also dig into one of those features: the new dataset tiering functionality coming in TrueNAS 26. All this and eight viewer questions ahead today.
Is data tiering Enterprise-only?
Yes.
“One of the key features coming is the ability for you to have - on Enterprise installations of TrueNAS - hybrid flash and spinning pools. And then tiering of datasets (or shares) in between them.”
I have my own question re: Tiering.
I’m guessing this is a TN-only feature and doesn’t make use of any OpenZFS capabilities. How does this work if a hypothetical TN Enterprise customer makes use of tiering and then wants to export their pool to a non-TN system?
Does the pool remain compatible with OpenZFS sans-TN? Are there caveats/preparations that should be taken into consideration?
The tiering step is an interesting evolution from the present practice of either maintaining two pools or using an sVDEV and forcing certain datasets into the SSD portion. Visualizing pool “fill” by tier, etc. will be a step up from where we are today, since sVDEVs do not enjoy any GUI “fill meter” widgets, AFAIK.
The adoption of Zvol sVDEVs is also cool, though I personally prefer mirrors for my use case.
What has me confused, however, is why any CE user would participate in testing a feature that will only be available to Enterprise customers (i.e. tiering). Any pool we create with Enterprise features is presumably lost once we upgrade to 26.0 or beyond? Or will CE grandfather in pools created under the 26.x beta?
I don’t see how the CE user base here can do extensive tiering tests, gather experience, etc. if there is no expectation of this feature being available to us for any longer than the beta testing period. As in, why would I want to set up a pool, fill it up with data, play with it, etc. if I know that I will have to rebuild the pool in a matter of months?
Maybe other folk have just more free time, more NAS, etc to play with than me?
I’m also confused why an Enterprise customer would want to adopt a TN-only feature rather than stick to standard solutions that are platform-, OS-, and app-agnostic. God forbid something happens to iXsystems; our pools are fine because they run on anything that can handle OpenZFS 2.3+. In contrast, create a tiered pool that can only be handled by TN and you have vendor lock-in.
From a business continuity POV, I do not see the use case. I’d implement zVOL sVDEVs instead and force datasets that need to be fast onto the sVDEV. That leaves the admin with a pool that is portable and which is arguably not as difficult to administer as setting Tier quotas (or whatever the implementation will be) by dataset.
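For reference, the usual way to force a dataset onto the sVDEV today is to set special_small_blocks equal to the dataset’s recordsize, so every data block qualifies as “small” and lands on the special vdev. A dry-run sketch with hypothetical names (pool “tank”, dataset “tank/fastshare”); the commands are printed rather than executed, since no live pool is assumed here:

```shell
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# With special_small_blocks equal to the recordsize, every data block in the
# dataset qualifies for the special (SSD) vdev, effectively pinning it there.
run zfs set recordsize=128K tank/fastshare
run zfs set special_small_blocks=128K tank/fastshare
```

One caveat with this approach: once the special vdev fills up, new writes spill back to the regular HDD vdevs, so capacity on the sVDEV still needs to be watched.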
But presumably I am missing something and tiered storage has been requested by enterprise customers, otherwise it’s unlikely that the team at iXsystems would be working on it.
Tiering implementation will be a standard ZFS pool… importable by other ZFS systems. No risk to users.
Some parts of Tiering will be available in the Community Edition. Other parts for managing Tiering are an Enterprise feature.
In future we’d like to enable some Enterprise features via TrueNAS Connect… but we are still in early days there.
OK, apologies, but that was unclear to me from the YouTube video.
Now might be a good time to start documenting just how the CE and Enterprise editions will deviate from one another re: tiering.
If Connect adds features not found in the CE edition, that would be good to document also. I.e. a standard good, better, best comparison table that ticks off the differences.
Agreed… that is work in progress. TrueNAS 26 hasn’t been BETA released yet… full release is still months away. T3 provides previews…
Dataset Tiering is an interesting one. Shame CE isn’t getting it as such, but I kind of get that. Wouldn’t be against being able to pay a license or sub to have enterprise features on roll-your-own hardware though, especially given iX still doesn’t offer redundant OS drives in the R-series (that’s M-series HA and above territory only).
On to the interesting part. I don’t believe iX/TrueNAS is coding anything special at the storage engine level to make this work. This is middleware work, wrapping existing OpenZFS 2.4 capabilities into a GUI-driven workflow with dataset-level policy controls. The zfs features/options are all already there and any CE user could do this manually.
Here’s a rough breakdown of what’s under the hood:
The ZFS special vdev allocation engine does the tiering. It handles block placement decisions between your NVMe special vdev and your HDD data vdevs based on the policies you set.
special_small_blocks is your placement policy control. Set this per-dataset/zvol and any blocks at or below that size threshold get routed to the special vdev (NVMe), larger blocks stay on the HDD pool. This is the main lever for controlling what lives on flash vs spinning disk.
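A minimal sketch of that lever, assuming a hypothetical pool “tank” with dataset “tank/projects” (commands are printed, not executed, since they need a live pool with a special vdev):

```shell
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Blocks of 64K or smaller written to this dataset are allocated on the
# special (NVMe) vdev; larger blocks go to the regular HDD vdevs.
run zfs set special_small_blocks=64K tank/projects

# Confirm the value and whether it is set locally or inherited.
run zfs get -o name,value,source special_small_blocks tank/projects
```

Note the setting only affects blocks written after it is set; existing blocks stay where they were allocated, which is where the relocation step below comes in.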
zfs rewrite handles block relocation in-place after a policy change. It rewrites existing blocks without touching metadata, timestamps, or filenames.
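Assuming the zfs rewrite command added in recent OpenZFS releases, the relocation step after a policy change might look like this (hypothetical dataset and mountpoint; dry-run, since it needs a live pool):

```shell
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Raise the threshold so more blocks qualify for the special vdev...
run zfs set special_small_blocks=128K tank/projects

# ...then rewrite existing file data in place so it is re-allocated under
# the new policy; -r recurses into directories.
run zfs rewrite -r /mnt/tank/projects
```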
zfs snapshot + zfs send | zfs receive is the migration path for moving a full dataset or zvol between different pools. The dataset will be briefly unavailable during the final cutover, so stop any VMs using the zvol before confirming. Presumably this is the workflow TrueNAS 26 will orchestrate behind the GUI migration feature.
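If that guess is right, the cross-pool migration could be sketched roughly like this (hypothetical pools “tank” and “flash”, zvol “vmstore”; dry-run, no live pools assumed):

```shell
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Initial full copy while the source stays online.
run zfs snapshot tank/vmstore@tier-1
run sh -c "zfs send tank/vmstore@tier-1 | zfs recv flash/vmstore"

# Cutover: stop consumers (VMs), send the small incremental delta,
# then point clients at the new location.
run zfs snapshot tank/vmstore@tier-2
run sh -c "zfs send -i @tier-1 tank/vmstore@tier-2 | zfs recv flash/vmstore"
```

The two-pass send keeps the unavailability window down to the size of the final incremental delta rather than the whole dataset.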
TrueNAS 26 Tiering in theory wraps all of that in a GUI with per-dataset tier visibility, migrate and rewrite controls, and a progress log for the underlying operations.
The concept images below illustrate my vision for the interface, showing NVMe and HDD pool utilization, IOPS and throughput for each tier, dataset level tier assignments, special_small_blocks configuration, and capacity tracking, all presented within a single unified view.
