It’s been quite a journey and a great learning experience. I started with FreeNAS 9-something, and the platform has served me well over the years. However, things change, and I’ve now decided to move on from TrueNAS to a simple, vanilla Debian setup. I wanted to share why, and how, in case it’s useful to others.
Why?
A few key reasons led me to this decision.
Product direction: iX’s vision for TrueNAS has become hard to follow. There have been many sudden changes in underlying technologies and no clear long-term roadmap – certainly not one demonstrably built around higher-level customer needs and market gaps. The community edition is being used as a testing ground, which makes me uneasy for something as critical as secure data storage.
Stability: iX now prioritises the six-month release cadence and new features over stability. 25.04 introduced half-cooked, breaking changes that were reversed shortly thereafter, and that is just the most recent example. It’s a choice of approach and obviously iX’s to make, but I don’t agree with it for a NAS product I want to be able to rely on; it doesn’t instil confidence, and it sits particularly badly in combination with the point directly above.
Engineering approach: this is more of a subjective point, but IMO some parts of the system feel unnecessarily complex, yet still fall short on the basics. Examples off the top of my head: auto-expansion and disk partitioning, and IPv6 support. In both cases, it has been difficult for iX to explain how the product is intended to behave, how it actually behaves, and why it behaves that way. Neither is a complex problem to solve at its core. The IPv6 stack in Debian, for example, is mature and solid, so adding config management on top shouldn’t be this complex. Yet TrueNAS’s implementation seems extensive without delivering the expected outcomes – a case of “reinventing the wheel” in a way that detracts from rather than adds to the overall product experience.
Communication: Kudos to iX for supporting this community and for the active participation of staff. That said, beyond immediate problem-solving, communication often feels defensive or corporate. Constructive, two-way dialogue is limited, and several recent threads have been closed when things got contentious instead of being explored further.
Taken together, these concerns prompted me to look elsewhere. YMMV.
What did I install instead?
I wanted to explore rolling my own from vanilla Debian and wasn’t sure how much functionality I would have to give up – but I was pleasantly surprised. It took about a day to get everything up and running (half of that spent wrangling Samba ACLs).
Base o/s: Debian Trixie (very soon to become the next stable after Bookworm). Key packages (all from standard and official Debian repos):
OpenZFS 2.3.2 (currently)
ZED (ZFS Event Daemon) for ZFS event/error reporting (e.g. scrub problems and pool status)
Samba for SMB sharing
Default in-kernel NFS server
LIO for iSCSI targets (in-kernel iSCSI server)
Sanoid and Syncoid for automatic ZFS snapshots and replication to a remote backup server – very easy to set up (see the config sketch below this list)
smartmontools for S.M.A.R.T. monitoring and reporting, plus scheduling of short and long tests (example below the list as well)
Prometheus node exporter on NAS, and Prometheus/Grafana running elsewhere for recording and visualisation of stats
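To show how little the Sanoid/Syncoid piece takes, here’s a minimal sketch. The dataset name (tank/data), the retention values, and the remote host/pool are all placeholders – adjust to your own layout:

```
# /etc/sanoid/sanoid.conf – dataset name and retention values are placeholders
[tank/data]
	use_template = production
	recursive = yes

[template_production]
	frequently = 0
	hourly = 24
	daily = 30
	monthly = 3
	autosnap = yes
	autoprune = yes
```

The Debian sanoid package runs sanoid via a systemd timer, so snapshots and pruning happen on their own. Replication is then a single syncoid invocation from cron (hypothetical remote host and pool, and it assumes SSH key auth is already set up):

```
# root crontab – nightly push to a remote backup box (host/pool are placeholders)
0 2 * * * /usr/sbin/syncoid --recursive tank/data backupuser@backuphost:backuppool/data
```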
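And the smartmontools test scheduling is a single directive in smartd.conf – the device, test times and mail address below are only examples:

```
# /etc/smartd.conf – short self-test daily at 02:00, long self-test Saturdays at 03:00,
# with mail on problems (device and address are placeholders)
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
```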
What I gave up:
The TrueNAS management GUI
ZFS NFSv4 ACLs in Samba (I opted for standard Unix permissions instead to keep it simple)
This forum for issues, troubleshooting, etc.
What I gained:
Full control over the environment and feature roadmap (i.e. what I decide myself to install/include)
Fully working and rock solid IPv6 stack
Barebones performance – partly through enabling already-available features in underlying packages (e.g. SMB server-side copy for macOS – see the smb.conf sketch after this list), partly through a much more lightweight setup with only the features I need, and no overhead for the GUI, middleware, stats reporting, virtualisation etc. that I don’t use
Debian’s legendary stability and well-trodden upgrade path. Trixie is already quite current, and I can use backports for newer ZFS if needed before the next stable release (in roughly two years).
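For reference, the SMB server-side copy for macOS mentioned above is stock Samba behaviour via the vfs_fruit module – no TrueNAS magic involved. A minimal smb.conf sketch (share name and path are placeholders, and it reflects the plain Unix-permissions approach above rather than NFSv4 ACLs):

```
# /etc/samba/smb.conf – minimal sketch, share name/path are placeholders
[global]
   # vfs_fruit provides the macOS niceties, including server-side copy (fruit:copyfile)
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:copyfile = yes

[data]
   path = /tank/data
   read only = no
   # plain Unix permissions, no NFSv4 ACLs
   create mask = 0664
   directory mask = 0775
```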
I’m very comfortable with the switch so far. I’m not here to diminish the work iX does; TrueNAS is a great product in many ways. But I hope the team takes recent feedback (especially around 25.04, release cadence, software quality, and communication) seriously. There’s real opportunity here to evolve into something even better.
Thanks again to the community, this forum has been great over the years.
Even a fast cadence that puts unfinished functionality in front of users can be fine-ish with the right communication and by running old and new functionality in parallel.
If the incus stuff wasn’t ready for prime-time? Fine - if you’re committed to a fast cadence, roll it out in parallel with the old stuff, let the experimenters be first movers, etc. Or let people lag behind with updates (and thread the needle with apps…).
Rolling out a new function with a new framework? That’s fundamentally a statement that you’ve vetted the functionality and the framework, and confirmed that they align with your roadmap. There might be rough edges - especially on a fast/fixed release cadence - but the fundamentals of “we evaluated this technology, it’s suitable, and we’re building around it” shouldn’t change unless externalities surrounding the choice change.
Incus is at least as good as it was 6 months ago. The engineering fundamentals that went into choosing it haven’t really changed, as far as I can tell. If it was good 6 months ago, and fine 3 months ago, it’s also still fine moving forward. If it wasn’t fine 3 months ago… what are you even doing?
Good points, let us know how it all works. And any modifications you made to make it work better.
There is a “Conservative” version. Just because we download the Community Edition does not mean we have to install the latest. This is the page that defines what iX / TrueNAS considers each release / version suitable for:
This is a balancing act. We get several requests (roughly monthly) to install later drivers (GPU, NIC, Thunderbolt, etc.). Yet the old FreeBSD-based FreeNAS / TrueNAS answer would have been, “pound salt, you’ll get it when you get it, based on FreeBSD support!”.
With Linux, people say, but it’s Linux with better hardware and software support than FreeBSD!!! You MUST support my new do-hicky that does wizbang features and is CRITICAL to my personal use of TrueNAS SCALE / CE! And I DON’T CARE that this new do-hicky is not ever used by Enterprise Data Center users. Just put it in my Community Edition, and NOW.
You probably know this already. Just writing it down so others can understand how we got here.
There is a difference between supporting new drivers as hardware is released and making fundamental changes to workflows, VMs, apps, and the like.
It’s likely why we see a much slower approach to making wholesale changes on the storage side of the business. Granted, the storage side is also arguably a much more mature codebase to start with, but…
Management knows where its bread is getting buttered.
They know they have to pay a lot more attention to getting the core file server functionality right and it shows in the slower, more planned, evolution over there.
Fair point, but somewhat unsustainable if version dot-two of release N+1 deprecates release N from six months ago. With this policy, TrueNAS is effectively pushing users onto a six-month upgrade cycle, if only at .1 or .2 rather than .0, out of caution.
One of the toughest decisions a sysadmin has to make is when to update. Not only does the process bring risk (and so does not updating), but in an enterprise environment it also means a lot of digital paperwork: going through RFCs and comms plans for end users. Yeah, believe it or not, we can’t just randomly reboot a server whenever we want.
A huge amount of work goes on behind the scenes testing the new version and making sure all the integrations with your infrastructure still work. Having to do this every six months is a pain, especially if the update isn’t providing any significant benefit.
The ability to limit this upheaval as much as possible would, I’m sure, be warmly received by all. This brings us to the idea of LTS versions, whereby only major bug fixes and security patches are applied to a given version for X number of years. Pretty much like CORE is now – probably the best and most stable version of TN we have had and will ever see.
Updating a stable release to hot-fix a specific vulnerability or bug carries much less risk than a major upgrade to a new version, as the latter can and often does introduce many variables.
iX has not yet managed to develop a structured and consistent approach to this. That shows clearly in the conflicting messages from different people and channels over time (as you exemplify), combined with a lack of forward guidance beyond tidbits via podcasts and whatnot. There is an opportunity here to evolve into something much better. Suggestions have been made in other threads, with comparisons to other projects as good examples. I hope iX will take that on board and develop their model.
That page is their plausible deniability for when a release blows up: “Oh, we didn’t recommend anyone actually use that release we just pushed out – even though our podcast, and our release announcement, and our CTO on Reddit, and everything else we’re pushing out explicitly did.”
Not much tuning required tbh, most packages come with sensible defaults. It’s definitely a lighter setup – active memory (i.e. the memory used by running processes) according to vmstat is in the order of hundreds of megabytes, meaning more headroom is available for ARC and other caches (particularly with no Python middlewared eating cpu and ram). Still, it does everything I needed it to do: regular snapshots (and pruning of old ones) and external backups (zfs send/receive), S.M.A.R.T. tests and reporting, ZFS scrubs with reporting, and sharing via SMB, NFS and iSCSI. Prometheus node exporter with its default ZFS collector, together with Grafana, provides very rich stats/reporting on everything from network, disk load and temperatures, ram and cpu pressure, ARC stats, etc. Oh, and it boots in seconds.
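For the scrub/error reporting piece specifically, ZED only needs a mail address in its config. A minimal sketch – the address is a placeholder and it assumes a working local mail command/MTA:

```
# /etc/zfs/zed.d/zed.rc – placeholder address, requires a working mail(1)/MTA
ZED_EMAIL_ADDR="admin@example.com"
ZED_EMAIL_PROG="mail"
ZED_NOTIFY_INTERVAL_SECS=3600
# 1 = also notify when a scrub finishes cleanly, not only on problems
ZED_NOTIFY_VERBOSE=1
```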
In summary - it’s uncomplicated, lightweight, fast, future-proof and very stable. I don’t see myself going anywhere else.
Which probably also marks my last post here. Au revoir!
Well said and well put! I think you said the quiet part out loud – what a lot of us have been thinking for some time. I too share your pain and echo your sentiments, especially your fourth bullet point.
Probably iX does some patching and optimization, but the real advantage IMHO is (1) the community and (2) the web UI.
While I have been a Linux user for 25 years and a Debian user for a bit less, I find accessing a web UI via phone easier for management than SSH. I also look for ease of use sometimes.
If you can live without the web UI, then good for you.
Nice! I was also weighing up a custom build of my own, and I’ve landed on Proxmox. I’m currently getting services set up side by side in Proxmox. Too many changes and rollbacks in TNS/CE. I need something at least stable and well documented, which Debian and Proxmox are. So win-win.
My main reason for leaving is the constant churn in apps and virtualization. I remember when jailmaker was the new hotness and the way things were going to go. That died because iX said incus was going to be the new hotness. Now incus is dead, so LXC is the new hotness? Who knows.
I have a life and don’t have time to be migrating from one system to the next like a drone. I have a family, job, and other activities in life.
Rolling my own and getting my services working the way I need is the way to go for me. I will have everything set up in a CI/CD way, so if something does break, it’s just a matter of pulling in all my configs and rolling out again.
I’ve used TN in the enterprise and it’s a great product in that use case. I would use them again, as they have stellar paid support and install services.
I think the service I will miss the most from TNCE is the Cloud Sync functionality in the TrueNAS UI. Simple cloud backup. I’m investigating duplicati with rclone so I can decrypt my backups in B2. That is something I’ll figure out as I explore this migration.
Yeah, that was one option I was looking at. As long as it will let me back up directories, since my Proxmox ZFS data pool will hold containers but also application data, etc. I haven’t fiddled with PBS before.
I know you didn’t reply to me, but thought I’d chime in since I recently POC’d cockpit as well and I just didn’t like all the dependencies it pulled in. It’s got a ways to go. I tried 45Drives Houston as well, but it’s a nightmare to manage and get working in Debian.
I’ve been using PBS for a bit less than 2 weeks now. Before that, I used a simple backup job to an SMB share on TrueNAS.
Here are some numbers:
Daily backups that were preserved for 2 (or was it 1?) weeks; they took ~200GB.
Auto snapshots (preserved for 2 months) of the datasets took an additional ~1.1TB of space.
Now my PBS datastore takes 60GB, and snapshots take less than 1GB. It’s not a fair comparison, but I doubt that in the next 2 months they will come even close to 10GB.
The takeaway – an incremental/deduplicated backup solution is a must-have if you are using ZFS snapshots for your VM-backup datasets. 60GB and 200GB are not much different to me. But 60GB and 1.3TB definitely are.
Oh, and I use hourly backups (for most of the VMs/containers) now, because I can.
I like how Proxmox integrates snapshots with ZFS. It was nice to zfs list -t snapshot and see them there. It does this with btrfs as well, which is nice. For now, I need to script something to automate that since it’s not in the UI yet (rough sketch below). Hopefully that will come in the nearish future.
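In case it helps, here’s a rough sketch of what that script could look like – the dataset name, snapshot prefix and retention are placeholders, and something like sanoid (mentioned upthread) would do the same with less scripting:

```sh
#!/bin/sh
# Hypothetical helper until the Proxmox UI grows scheduled ZFS snapshots.
DATASET="rpool/data"   # placeholder dataset
KEEP=14                # number of auto snapshots to retain

# take a dated snapshot
zfs snapshot "${DATASET}@auto-$(date +%Y%m%d-%H%M)"

# prune everything except the newest $KEEP auto snapshots
zfs list -H -t snapshot -o name -s creation -d 1 "${DATASET}" \
  | grep '@auto-' \
  | head -n -"${KEEP}" \
  | xargs -r -n 1 zfs destroy
```

Run from cron or a systemd timer, that keeps a rolling window of snapshots for the dataset.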
So in regard to backups of my VMs: all my configs, as well as app data, are in local datasets. I don’t back up my VMs other than via snapshots, rolling back a dataset locally if needed. Data is the only thing I push to B2.
I will have to test to see if PBS has this flexibility. Sounds nice by your description though!