Does anyone else have a love/hate relationship with TrueNAS?

(This is gonna be a long one, pretty much just me ranting about various issues I’ve had with TrueNAS over the past 5-7 years and how using Arch with OpenZFS and Docker is way easier in the end, even though administering ZFS via the CLI with more than a handful of disks is a headache. I wasn’t sure whether to place this here, since it’s directly related to TrueNAS, or in General Discussion, since it’s a rant/opinion post.)

I’ve been a long-time follower/user of TrueNAS since the days of FreeNAS; I first started using it during the early 9.x releases when I discovered ZFS (after years of software RAID). No matter how much time passes, TrueNAS seems good in theory, and each major or minor update draws me back in, but in practice it’s just headache after headache.

I’m a long-time Linux user (started in '05; I’m currently a Linux System Engineer) and knew about the BSDs but had never really used them, so I figured I’d give FreeNAS a shot after using Arch on my server for a few years. I loved the ease of use of a UI-driven system, but there were a lot of things I didn’t like: the UI was fugly; apps were sorely lacking; and even though you could set up a jail and install packages from the ports collection, they were usually many months out of date. If you did manage to find something current, you were occasionally left in dependency hell, which meant compiling everything from source for hours on end (I wanted a BSD jail, not Gentoo! hahaha).

When v10 was being beta tested I ran it a lot to help find bugs (and if you used it, you knew there were A LOT), but the instability was frustrating. Jails were out of the question due to previous experiences/nightmares; bhyve eventually became an option, but it was severely lacking in performance and features compared to KVM, and at the time, when I used it to run an Arch VM, it would regularly freeze after about 2 days. The devs were never able to figure it out, and I had no issues anywhere else, even running a VM in Arch, so it was TrueNAS/FreeBSD specific. I finally got frustrated and went back to Arch w/OpenZFS, and everything was great…except for managing 20+ disks via the CLI.
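(For context on the CLI headache: with a couple dozen disks, even the initial pool creation turns into something like the sketch below, and every replace/expand operation is similarly verbose. Device paths and layout here are made up for illustration.)

```shell
# Hypothetical 12-of-24 disks shown: two 6-wide raidz2 vdevs in one pool.
zpool create -o ashift=12 -m /mnt/storage storage \
  raidz2 /dev/disk/by-id/ata-DISK01 /dev/disk/by-id/ata-DISK02 \
         /dev/disk/by-id/ata-DISK03 /dev/disk/by-id/ata-DISK04 \
         /dev/disk/by-id/ata-DISK05 /dev/disk/by-id/ata-DISK06 \
  raidz2 /dev/disk/by-id/ata-DISK07 /dev/disk/by-id/ata-DISK08 \
         /dev/disk/by-id/ata-DISK09 /dev/disk/by-id/ata-DISK10 \
         /dev/disk/by-id/ata-DISK11 /dev/disk/by-id/ata-DISK12
# ...and so on for the remaining vdevs, spares, and log/cache devices.
```

Totally manageable, but it is exactly the kind of thing a storage UI is supposed to save you from typing.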

After v10 was scrapped and it was announced that v11 would be taking its place, along with a(nother) UI refresh, I followed that closely, but I had largely the same issues and once again went back to Arch. Once SCALE was announced I thought my prayers had finally been answered. I followed the development closely and beta tested it a lot…but still ran into many sources of frustration. KVM was usable but clunky, since they decided to use pure QEMU instead of utilizing libvirt, so you were stuck using the web UI. Containers were finally supported…but only via Kubernetes, which is definitely overkill for a lot of home users, and once you finally got used to the K8s way of doing things…stuff would constantly break. I remember the joy of setting up 20 containers over the course of 2-3 hours (the container UI really sucked back then) only to discover that an update (to either the container image itself or TrueNAS) had completely borked the containers (or possibly K8s itself), requiring me to reconfigure them by hand since there was no easy way to simply redeploy them en masse. If they weren’t all FUBAR, usually a few were, just enough to be a huge annoyance. This happened pretty frequently, at least 2-3x a month. Tired of doing this for multiple months with seemingly no end in sight, I jumped ship back to Arch w/OpenZFS. By that point I was heavily invested in Docker and docker-compose, since I was jumping back and forth between TrueNAS and Arch pretty often, and setting everything up natively in Arch was becoming just as much of a nightmare as TrueNAS was. I think I tried out SCALE a few months later, but it was still a pain.

I continued to run Arch on my server for about a year or two with no huge issues. I found a closed-source, paid ($60/license/year) ZFS module for Cockpit that a few people are working on, and used that for a while, which made administration easier…but their development progress is about as slow as molasses in winter. It took about 9 months for them to add support for creating pools in the UI; prior to that it was mostly informational. It was only in February or March of this year that it actually became useful for creating pools. About a week or two ago I saw that SCALE v24 had been released with a lot of huge fixes and additions, so I loaded it up in a VM, messed with it for a while, and thought “yeah, this looks good, let’s give it a try”, fully intending to keep Arch around because I would most likely be jumping ship again…

I just spent the past 4 days attempting to set everything up…and it’s back to Arch I go, because I have made zero progress. Granted, about 2 of those days were wasted attempting to get Cosmos Cloud fully working in a VM (it’s a NAS management web UI that runs entirely out of a Docker container; I was also planning on setting it up on a Pi for a friend, so I wanted to get experience with it) as a quick backup for when K8s inevitably screws up, but I digress… After scaling my VM from 1 core and 2 GiB of RAM to 40 cores and 32 GiB (I have a Threadripper 2970WX [24 cores, 48 threads] and 128 GB of DDR4 ECC, so no lack of resources), usage spiked from occasionally around 30-40% CPU to 100% on all cores, all the time. There must’ve been a memory leak somewhere, because RAM usage would quickly climb until full. At first I thought it was rclone or Docker consuming all the cycles and RAM, but even with those disabled and after a reboot, RAM was still maxed out; CPU usage would stay low until I started Docker, and then it would be pegged at 100%. Disk access was also extremely slow (backlogged), even though I had 3 ZVOLs on a Samsung 970 Evo Plus NVMe drive with the storage driver set to VirtIO, which should have performed better than using NFS for temp data downloaded from Usenet. This happened in both a Debian 12 VM and an Arch VM.

Not wanting to waste any more time on something that was only going to be a fallback solution, and would perform worse than running containers with direct access to the datasets, I decided to give K8s another try, since a lot of the issues had supposedly been fixed. I will admit that while the UI has improved drastically, it’s still extremely cumbersome to get an app exposed to the internet via a reverse proxy. In fact, I never even got that far! I spent about 3 hours trying to get ONE app to work, using the TrueCharts guide, and I couldn’t even get the initial SSL cert generation working! I changed the UI ports (which is a general annoyance in and of itself; this shouldn’t be necessary just to run TrueNAS and a webserver/reverse proxy on the standard ports. I don’t have to do this in Arch if I want to run Cockpit and Caddy from the same NIC/IP, so why do I have to do it in TrueNAS? Also, why can’t I have two NICs in the same subnet? I can in Arch with no issues…), installed Traefik and set it up as documented by TrueCharts, then attempted to install ClusterIssuer…but it complained about /etc/rancher/k3s/k3s.yaml being group-readable (I didn’t change anything) and about namespaces not existing…even though I was following what the guide says!

Also, the one pool which I didn’t create in TrueNAS had minor issues upon being imported: it mounted itself at /mnt/mnt/storage instead of /mnt/storage like I had it in Arch. And apparently, once I created my 4th pool, the UI completely forgot that the storage pool (my first pool) existed; neither the widget on the homepage nor the Storage section showed it at all, even though zpool status showed that it was imported and mounted. The Disks list did show that it existed and that the drives were part of that pool, though.
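(For anyone hitting the same double-/mnt issue: if I understand the altroot behavior right, TrueNAS imports pools with an altroot of /mnt, so a dataset whose mountpoint property is already /mnt/storage ends up at /mnt/mnt/storage. Pool/dataset names below are from my setup; adjust accordingly.)

```shell
# Check what the pool thinks its mountpoint is:
zfs get mountpoint storage
# If it reports /mnt/storage, the import altroot (/mnt) gets prepended,
# landing you at /mnt/mnt/storage. Resetting the property fixes it:
zfs set mountpoint=/storage storage   # altroot /mnt + /storage => /mnt/storage
```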

I think it’s time to finally accept that TrueNAS will never be in the state I want it to be in, and I’ll just have to pay and wait for Poolsman (the Cockpit ZFS module) to reach parity with TrueNAS’ features (it’s about 75% of the way there). I’m a sucker for a nice UI, which is why I always keep coming back to TrueNAS, but everything is extremely cumbersome when it shouldn’t be. For the sake of a nice UI that I’ll use maybe once a month, I bash my head against the wall for hours a day, and the time it takes just isn’t worth it. I’ve heard many people say that TrueNAS works great as purely a storage OS (so, CORE or SCALE without the apps set up), and I have to agree, but I don’t really have the space and money to run one server for storage and one for everything else.

Even after Arch is installed, configured, and has my pools imported, I can set up 20+ containers and secure them with SSL certs in about 5 minutes or less. If I saved the previous config data, there’s nothing else I need to do except maybe change a few mount points. In this case, I’ll have to reconfigure everything, because apparently the Disks section didn’t let me know that one of the NVMe drives I was about to add to a pool contained an EXT4 filesystem (I have 8 NVMe drives, 7 of which are usually used for ZFS).
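(That’s the whole appeal of the compose workflow for me: the only machine-specific bits are the bind mounts, so a rebuild is mostly copy the YAML, fix the paths, `docker compose up -d`. A hypothetical fragment to show what I mean; the image is real, the host paths are my own layout:)

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    volumes:
      # Only these host paths need to change between machines/pools:
      - /mnt/storage/media:/media:ro
      - /mnt/storage/appdata/jellyfin:/config
```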

After writing a freaking essay, I think I’m finally done ranting.


I would suggest investigating TrueNAS sandboxes with Jailmaker for running docker compose essentially on the bare metal.

Best of both worlds. TrueNAS for Storage. Debian/Ubuntu etc for Docker on top of that storage, without virtualization.

And Dockge is a sweet docker compose UI.
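(Dockge itself deploys as a single compose stack. A minimal sketch; the image tag, port, and `DOCKGE_STACKS_DIR` convention below are from my reading of the Dockge README, so double-check against it before use:)

```yaml
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - "5001:5001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
      # Directory where your compose stacks live; must match the env var below
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
```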


Thanks for the ~~rant~~ sharing of your experiences.


Arch is a rolling-release distro on steroids. It’s not uncommon to have the ZFS kernel module for 2.2.0 loaded in memory while your constant updates land you on zfs-utils 2.2.4. That mismatch can cause problems with certain features and commands.

Furthermore, OpenZFS’s Linux kernel support doesn’t track with the latest-and-greatest kernel updates from upstream Linux. (Which Arch faithfully wants to grab at every tiny, insignificant point release.) For example, right now OpenZFS supports up to Linux kernel 6.8. There’s going to be a delay after kernel 6.9 is released before OpenZFS supports it. (This is why I’m sticking to LTS kernels on my Arch system.)

Arch Linux also necessitates more reboots than TrueNAS. They need to chill out with regard to kernel updates. No one needs to immediately go from 6.6.24 → .25 → .26 → .27 → .28 every time one is available from upstream. It’s not the end of the world just because you’re still on x.24 and upstream released x.28.
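(If you suspect this mismatch on your own box, `zfs version` shows both sides at once. Illustrative only, since it obviously needs ZFS installed; the version strings below are examples, not output I’m promising:)

```shell
# Userland vs. loaded kernel-module version in one shot:
zfs version
# Example output:
#   zfs-2.2.4-1        <- userland (zfs-utils) from the repos
#   zfs-kmod-2.2.0-1   <- module actually loaded in the running kernel
# If those two differ, reload the module or reboot before trusting
# zfs/zpool commands that touch newer features.
```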

But I understand your frustrations with TrueNAS (and now SCALE). We’ll have to see how things settle.

Thanks, I’ll give it a look and will probably mess around with it in a VM to see if it’s feasible.

I’ve been running Arch w/OpenZFS for years, so I’ve learned to use linux-lts with DKMS in order to avoid this mess haha. Too many times I’ve run an update where the mainline kernel was ahead of the ZFS modules and I couldn’t import my pools. I’ve finally learned my lesson lol
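(For anyone wanting to copy this setup, it’s roughly the following. Note the ZFS packages come from the AUR or the archzfs repo, not the official Arch repos, so exact package names may vary:)

```shell
# LTS kernel + headers (DKMS needs the headers to build the module):
pacman -S linux-lts linux-lts-headers
# ZFS via DKMS so the module is rebuilt automatically on kernel updates
# (zfs-dkms / zfs-utils are AUR/archzfs packages):
#   yay -S zfs-dkms zfs-utils
# Optionally hold kernel updates back until you've checked OpenZFS
# compatibility, via /etc/pacman.conf:
#   IgnorePkg = linux-lts linux-lts-headers
```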

I update every week or two, pretty much whenever I think about it. This is just my home server, so a few minutes of downtime while I reboot isn’t the end of the world :slight_smile:

If I’m being honest, my main issue was getting rsync to work easily. Or should I say, setting up SSH pairing?

E.g., TrueNAS-to-TrueNAS rsync. You’re supposed to SSH-pair the two systems, then set up an rsync task in the TrueNAS UI, but the pairing fails for me at times.

Or just setting up the SSH pairing first so I could then try out ZFS replication, but I got stuck at the SSH pairing setup.

I’m still learning, so maybe I overlooked something. I’ll try again when I have time.
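(FWIW, the manual equivalent of what the UI pairing wizard does is plain SSH key auth, and doing it by hand once makes the wizard much easier to debug. Hostname and paths below are made up; only the key generation is real:)

```shell
#!/bin/sh
# Generate a dedicated passphrase-less key pair for replication/rsync.
KEYDIR="${TMPDIR:-/tmp}/rsync-pairing-demo"
mkdir -p "$KEYDIR"
rm -f "$KEYDIR/replication_key" "$KEYDIR/replication_key.pub"
ssh-keygen -q -t ed25519 -N "" -f "$KEYDIR/replication_key"

# On the real NAS you would then install the public key on the destination
# box (hypothetical host "backup-nas"):
#   ssh-copy-id -i "$KEYDIR/replication_key.pub" root@backup-nas
# ...and point the rsync task (or zfs send | ssh) at that key:
#   rsync -az -e "ssh -i $KEYDIR/replication_key" \
#     /mnt/tank/data/ root@backup-nas:/mnt/tank/data/

# Show the public key you'd paste into the remote's authorized_keys:
cat "$KEYDIR/replication_key.pub"
```

If the manual pairing works but the wizard doesn’t, at least you know the problem is in the UI plumbing and not your network or keys.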

But I had a much quicker and simpler setup when I was using QNAP Hybrid Backup Sync. It gets a lot of flak since it has its issues now and then depending on the release, but I feel more comfortable using it.

So if I had to summarize, it’s just a matter of getting used to the TrueNAS way of doing things and learning how to set this up.

But I thought that setting up SSH using the TrueNAS assistant would make it easier to pair with another TrueNAS box. At one point I even got that to work, but now it doesn’t, so figuring that out has been a constant frustration for me :cry:

The other issue I had was how to handle ACLs. For example, when I set up Docker, it worked best when I created the dataset as Generic. But I also want SMB access to it, and whenever I try adding SMB, I just keep running into permission issues with my Docker containers. So I’m not sure what to do with that; I couldn’t find much guidance on this :sweat:
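(Not a full answer, but one approach that’s worked for people in this spot: keep the dataset on POSIX ACLs and grant both identities explicit entries. This assumes the dataset really is using POSIX ACLs (`acltype=posixacl`); SMB-preset datasets on SCALE use NFSv4 ACLs and need the UI’s ACL editor instead. The path, group name, and the uid 568 (SCALE’s `apps` user, as I understand it) are all things to verify for your own setup:)

```shell
# Grant the apps user and your SMB group rwx on existing files...
setfacl -R -m u:568:rwX,g:smbusers:rwX /mnt/tank/appdata
# ...and matching default ACLs so newly created files inherit them:
setfacl -Rd -m u:568:rwX,g:smbusers:rwX /mnt/tank/appdata
```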

20+ containers is a lot.

I haven’t had major issues with k3s. I haven’t tried TrueCharts at all; that’s a 3rd-party effort, and from the outside it looks like it’s been struggling a bit to keep up with TrueNAS changes, partially maybe because of dev attitude on the 3rd-party side. I haven’t used it; I just see the occasional rant by one of the devs about how TrueNAS is being mean :rofl:. That said, I’m sure its users are quite happy with it.

You may need to take this very slowly: TrueNAS SCALE without your data, focusing step by step on your k3s custom apps, and when that all works how you want it and you have the setup documented, then think about moving production over.

24.04 needs another release for the UI to work. Wait for that.

When I moved from CORE to SCALE I had to massage my pool on the command line, recreate one entirely, and set up SMB from scratch. I only had three apps, and two of those are now running: one is Plex, one is a custom FoundryVTT. FoundryVTT needs a workaround with privileged mode and hostname to retain its license. The third, a custom music player, I’ll get around to eventually. FoundryVTT has TLS, but not via a reverse proxy on TrueNAS; it’s a reverse proxy on Cloudflare. If I needed multiple apps running on 443, I’d need to tackle TrueCharts and Traefik, or reproduce something like it with my own chart.

Appreciate the rant/feedback. I would also recommend you look at the jails/sandboxes as a way of utilizing the UI, but also having that Arch power user flexibility you need. But that said, some good feedback here and I expect you’ll find subsequent updates of TrueNAS continue to sand off those rough edges you’ve experienced in the past.


/walks by whistling with my 34…

It really is unfortunate that iX and the TrueCharts folks can’t work together here. Not only is the TC app catalog much larger (~700 vs. ~100), not only is it kept much more up to date, not only are its configuration defaults much more sensible (e.g., the official Pi-hole app doesn’t listen on port 53 but rather some other port, which is just dumb), but their charts are also much more featureful. Ingress is always my go-to example, because it’s something I’ve been asking for, for eight years; TC integrates this even better than I’d thought to ask for. Authentication? SSO? Yep, those are there as well.

In short, IMO (though for a number of objective reasons, some of which I’ve just mentioned), TrueCharts is a much better app catalog than the official and community apps combined are or are ever likely to be. So it’d be much better for everyone concerned[1] if they could work together. I don’t know where the issue is between them[2], but I wish they’d sort it out.

But to the title question, no, I wouldn’t describe my relationship with TrueNAS as “love/hate”–I have had my frustrations with it over the ~15 years I’ve been using FreeNAS/TrueNAS, some with the product itself and some with the business behind it, but they’ve generally been minor, and really TrueNAS is the only game in town if you want F/OSS and ZFS (I guess napp-it is an option, but it seems you hardly hear it mentioned).

  1. Yes, including iX–apps are really the only reason to use SCALE over CORE (other than that CORE is just on extended life support), so a robust apps catalog is a major benefit to iX ↩︎

  2. Nor do I know that the appearance of animosity between them isn’t exaggerated, but I’ve witnessed iX devs being way too quick to blame TrueCharts in problem tickets–no, folks, it isn’t TrueCharts that’s causing Dragonfish to take 20 minutes to import my pools ↩︎


If the word “True” was not in their name, would they (TrueCharts) still be seen as this complementary project to TrueNAS SCALE?

They present themselves as compatible across different platforms, TrueNAS SCALE being one of them.

In the earlier days of SCALE, I had this image of TrueCharts being to SCALE as what the “Community Plugins catalog” was to Core. But honestly, they seem like a separate entity that can exist without SCALE; nor do they want their project to be dictated by changes in SCALE.


Note: I specifically disclaim any sort of insider knowledge about anything I say below. I don’t have any direct knowledge about any conversations between iX and TrueCharts or any of their respective representatives. The below represents my opinion based on my experiences and observations of public information.

First, I don’t think we can just set the name to the side, because that’s what they decided to call themselves, and it was for a reason–and that reason was (to my second point) that they started as, and still mostly are, an apps catalog for SCALE. Yes, they support other environments now, but I’d wager the large majority of their app deployments are on SCALE. Even today, the first thing they say about themselves on their site is, "With TrueCharts, users can quickly and easily deploy a wide range of apps and services on their TrueNAS SCALE systems." (emphasis added).

So while I don’t know about “complementary project” (I think that might carry other connotations I wouldn’t agree with), I do think they are, and would be seen as regardless of their name, primarily an app catalog for SCALE.

I’ll admit I don’t have any basis for this conclusion other than my own gut, but I don’t think so–if SCALE represents the large majority of their users as I believe it does, they wouldn’t have the users (or therefore, presumably, the funding) without it.

I think you’re correct here, and I think this highlights an inconsistency or tension in their objectives. On the one hand, they want to be free to do their own thing in their own way; on the other (if my assumptions are correct), they have to play nicely enough with SCALE that SCALE users can use their apps without too much hassle, because otherwise most of their user base disappears.

I believe iX has a similar tension: on the one hand, they’d probably like to be able to tell TC to pound sand; on the other, TC provides a much larger, more featureful, and better-maintained app catalog than they do. And since apps are the only real reason to use SCALE[1] (at least until iX gives CORE the coup de grâce), that app catalog is going to draw users to SCALE.

Which is why I believe it’s in the best interests of both parties to work together.

  1. Yes, Linux–for those who see that as an advantage. Yes, newer hardware support. Yes, UI improvements–though there’s no inherent reason they can’t backport them to CORE; they just choose not to. The thing that SCALE does (well), and CORE doesn’t (well), is apps/plugins. ↩︎


A lot in general, or a lot to break? I currently have 34 in Arch! I could condense that down to the high 20s or 30, because at least two stacks are using their own databases and Caddy instances. Currently only using about 6.5 GB of RAM (excluding the ARC). I’m running the whole Usenet suite of apps (6 containers); Caddy as my reverse proxy; MariaDB, Postgres, Mongo, and their management front-ends; Netdata; Paperless-ngx (a stack of 6 containers); Portainer and Dockge for management; Seafile (a stack of 6 containers); Plex, Jellyfin (considering switching back to Plex), a headless Kodi container for use with MariaDB, and stats containers for both Plex and Jellyfin; netboot.xyz for PXE booting (just messing around with it); wg-easy (for WireGuard; never really been successful with it, but I keep trying!); UniFi Controller to manage my Ubiquiti WAP and switch; Watchtower to automatically update the containers; and finally, Organizr to make them all available from one web UI… so just a few lol

Back when SCALE first came out, using TrueCharts was kinda necessary, since the official selection was pretty limited. Regarding the devs’ attitude (or at least the main guy’s; I forget his name on here)…he’s a bit of a dick and he knows it. I’ve told him to get off his high horse before hahaha

They apparently now recommend against using the integrated UI and instead suggest using FluxCD for deployments via Helm charts, due to frequent past breakage. I decided to forgo that, because it would have been yet another hurdle to overcome.

That would only really be possible for me if I virtualized TrueNAS, which just adds unnecessary overhead. I had considered installing Proxmox about two weeks ago and virtualizing both TrueNAS and Arch in order to minimize downtime, but decided against it since it was too much work (install Proxmox, pass through my HBA to a newly created Arch VM, set everything up, import the pools, set up the containers, install TrueNAS, etc…) and I wanted this to be “quick”. We see how that turned out :rofl:

@kris Thanks! I gave Stux’s tutorial a watch, and it does seem like the best-of-both-worlds approach, although a bit labor intensive. I’m not gonna tackle that right now, since I’ve spent the past 2 days/15 hours or so setting up everything in Arch and don’t wanna make myself jump off the balcony of my apartment :rofl: Once I finally get everything Docker-related configured and backed up properly (my biggest weakness), it should be a lot less painful…hopefully haha. It would be amazing if you guys could integrate that jail manager Stux uses in the video into SCALE somehow, even just a simple “click here to clone the repo to one of your pools, the rest is on you!”. I don’t think I’ll ever give up on attempting to utilize TrueNAS; I think the past 6 or so years have proven that hahaha. I just gotta find a happy medium.

@dan I have the exact same number of containers in Arch haha. I agree that it would be great if TN and TC could work together, since, as you stated, the TC catalog is about 7x larger and generally has more sane defaults. I think one of the biggest issues is that one of the devs (I forget his name, and mentioned this in my reply to Yorrick) is a bit full of himself, wants everything done his way, thinks his ideas are the best, and isn’t willing to compromise. Meanwhile, the TN devs are just like “I don’t have time for this shit, I have more important things to deal with” hahaha. I definitely got into a few arguments with him back when SCALE first came out. He’s kind of like how Linus Torvalds was before his anger management issues became a point of contention lol

If you’re using TrueNAS in an enterprise environment, like you said, it’s pretty much the only game in town if you don’t wanna do everything from the CLI (I had never heard of napp-it, but it looks like it’s an EU product and I’m in the US). Poolsman (the Cockpit plugin I mentioned originally) is the other option, but it’s only really good for basic management right now, and it’s $50/year/machine for a personal license (not sure how much an enterprise license costs).

Not really sure why that matters; software is software. But it doesn’t seem to have anywhere near the user base in the homelab community; there only seem to be a handful of YouTube videos about it, and those are (at least) several years old. So it’s been around for a long time, it seems to be under active development/maintenance (from their website, the most recent release was just a couple of weeks ago), and it does ZFS, but nobody seems to use it. Maybe it’s really only their commercial product?

But if FreeBSD is a niche OS in this world, what about Solaris? And maybe that’s why it seems nobody uses it?


I come from openmediavault, where everything just works. Trying TrueNAS SCALE has been disappointing: several problems for me, and no Docker. So I guess no :heart: from me.

I hope this doesn’t sound snarky, because that isn’t my intent, but if everything just works with OMV, why are/were you trying TrueNAS?

Just out of curiosity, because TrueNAS is more popular than OMV. I like to test it out myself and compare it to what I already know. Not sure I understand why it’s at the top of that list.



napp-it is just “middleware”/a bunch of scripts for Solaris/Solarish OSs; it’s developed by a German (so only the OS part is US, btw). Concerning performance and resources, it actually beats TrueNAS (over here, that is).

The only reason I switched over to TrueNAS was the license, when Oracle went mad with Solaris. I didn’t know much about OpenSolaris/Illumos/OmniOS in those days.

Today, if the only alternative was TN Scale, I’d choose OmniOS I guess.


Except that iX now has a possible plan B: adopt Jailmaker, make ONE official plug-in that installs docker-compose in a “sandbox” at the click of a button in the GUI, optionally with Portainer, Dockge, or yet another container manager, and pull the plug on k3s, Helm charts, and catalogs.
And if there are people who really want Kubernetes on TrueNAS, a second official plug-in to install full k8s in another sandbox should take care of it.
Users will be happy to copy-paste any Docker template they can find on the net, and I suspect iX would save developer time by maintaining Jailmaker and a grand total of one or two plug-ins rather than a catalog of one hundred Helm charts.


…and completely give up on the idea of plugins/apps entirely. Which was the entire reason for SCALE to exist in the first place[1].

I’d really like to know where the idea came from that SCALE was all about letting people run Docker on their NAS, because nothing from iX ever said so. Exposing anything like Docker has been, at most, an afterthought. Is it just that people can’t get over The Release That Must Not Be Named?

I don’t doubt there are. But there are a lot of people who don’t give a damn about the underlying technology; they want a point-and-click installation that works, runs reliably, and updates reliably, for whatever software they want to run. Copying and pasting a docker-compose.yml file isn’t the same (especially since you almost always need to edit it a bit).

What is this weird parallel universe where everything in TrueNAS SCALE is about Docker?

  1. OK, a big part of the reason; clustering was another big part, but that’s since gone away entirely ↩︎


Another reason was automatic “SSD massages”: they exploited a feature in Linux that incessantly swaps to disk, giving your boot drive a nice workout. This lets users rule out a faulty or dying SSD.

Now they want to take that feature away from us as well.