None of this worked; it was still slow and freezing after 30 minutes, killing the NAS. So I went one step further: I set up an Ubuntu VM on TrueNAS, mounted the storage over NFS, installed Docker, and ran the same qBittorrent image.
Same issue. htop shows no high CPU/RAM/storage usage, and the same goes for htop inside the container. Nothing special. The logs show nothing, and there are no TrueNAS UI warnings.
So after a few days, desperate, I reinstalled Cobia, redeployed the image, and… it just runs: great performance, great UI responsiveness.
This is 100% a Dragonfish OS issue. I suspect it is storage related, since qBittorrent is an I/O-heavy app and it slows to a crawl. Pretty bad for a NAS OS… All my other apps run fine.
So here is my warning: if you run qBittorrent, don’t upgrade; you won’t be able to fix it.
I would file a ticket, but with no logs, good luck. The way to reproduce it is to deploy the above images, load some torrents, and see how it runs. Do the same on Cobia and compare.
I’m on Dragonfish and my qBittorrent app from TrueCharts runs just fine at max download speed. I just tried with a Manjaro Linux ISO… I currently have no VPN set up, so I can’t test whether it’s maybe VPN related…
IMO, warning users that all versions of qBittorrent don’t work (per the title of the post) when only his particular version doesn’t work is kind of exaggerated. Hence my comment that other versions work on Dragonfish.
Is your system swapping since Dragonfish with qBittorrent?
Since Dragonfish, the ZFS ARC is no longer limited to 50% of memory like before; for me this resulted in heavy swap usage after 24 hours of uptime. I had to cap the ZFS cache at 50% of my total RAM, like before the upgrade, to make everything work as expected again: no more swapping, no more unresponsive WebUI, etc.
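If it helps anyone, here is a minimal sketch of how that cap can be applied, assuming a stock SCALE install where OpenZFS exposes the zfs_arc_max module parameter under /sys/module/zfs/parameters (run as root):

```python
#!/usr/bin/env python3
# Cap the ZFS ARC at 50% of physical RAM by writing to the
# zfs_arc_max module parameter (OpenZFS on Linux, run as root).

ARC_MAX_PATH = "/sys/module/zfs/parameters/zfs_arc_max"

def total_ram_bytes() -> int:
    # MemTotal in /proc/meminfo is reported in kiB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemTotal not found in /proc/meminfo")

limit = total_ram_bytes() // 2
with open(ARC_MAX_PATH, "w") as f:
    f.write(str(limit))
print(f"zfs_arc_max set to {limit} bytes ({limit / 2**30:.1f} GiB)")
```

Note that this does not survive a reboot, so you’d want to run it as a post-init script (System Settings → Advanced → Init/Shutdown Scripts).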
In your qBittorrent web UI, go to Help → About → Software Used
For reference, mine is on TrueNAS Core, within a FreeBSD jail based on 13.2-RELEASE.
What is the version of libtorrent-rasterbar for your App?
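If the WebUI is too sluggish to click through, the same information is exposed by qBittorrent’s Web API through the /api/v2/app/buildInfo endpoint. A quick sketch, assuming the WebUI is reachable on localhost:8080 with “Bypass authentication for clients on localhost” enabled (host and port are placeholders):

```python
import json
import urllib.request

# buildInfo reports the library versions qBittorrent was compiled against,
# including libtorrent-rasterbar.
url = "http://localhost:8080/api/v2/app/buildInfo"
with urllib.request.urlopen(url) as resp:
    info = json.load(resp)

print("libtorrent:", info["libtorrent"])  # 1.2.x vs. 2.x is the detail that matters here
```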
qBittorrent packaged for FreeBSD uses libtorrent version 1.2.x by default.
Linux distros and container maintainers (TrueCharts, iXsystems, etc.) likely compile it against libtorrent 2.x, which was known (and still is known?) to cause issues with ZFS/ARC.
If so, you might be able to submit a request to TrueCharts or iXsystems (or whomever) to include a version of the qBittorrent app compiled against libtorrent 1.2.x.
EDIT: Fun fact. Even upstream qBittorrent offers their application built for libtorrent 1.2.x as the default download. If you want it with libtorrent 2.x? You have to purposefully choose the lt20 binary from the download options.
EDIT 2: One thing you can try in the meantime is to set a maximum RAM usage for your qBittorrent instance, which is a parameter used by libtorrent 2.x.
Go to Tools → Options → Advanced, then set “Physical memory usage limit” to a sane value. (This option is ignored if you’re using libtorrent 1.2.x.) By capping libtorrent’s memory usage, you might be able to (temporarily) postpone your issue with heavy swapping and system slowdowns.
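If it’s easier to apply from a script, the setting can also be pushed over the Web API. Another sketch, with the same localhost:8080 and localhost-auth-bypass assumptions as above; as far as I can tell, the WebUI field maps to the memory_working_set_limit preference, with the value in MiB:

```python
import json
import urllib.parse
import urllib.request

# "Physical memory usage limit" appears to map to the
# memory_working_set_limit preference (value in MiB);
# it is ignored when qBittorrent is built against libtorrent 1.2.x.
prefs = {"memory_working_set_limit": 1024}  # 1 GiB; pick a sane value for your box

data = urllib.parse.urlencode({"json": json.dumps(prefs)}).encode()
req = urllib.request.Request(
    "http://localhost:8080/api/v2/app/setPreferences",  # host/port are placeholders
    data=data,
)
urllib.request.urlopen(req)
```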
As for aggressively swapping memory to disk? This was one of my concerns with “tweaking” SCALE’s parameters (Linux) to allow for a higher ARC ceiling. I remember iXsystems claiming there were no issues in their testing. But I’m just enjoying life on TrueNAS Core (FreeBSD) while it’s still supported.
qBittorrent from TrueCharts seems to ship with libtorrent 1.2.x by default, which is my case, so the issue is not there for me.
Do I really need a lot of ZFS cache? It doesn’t seem so to me: since I restored the 50% cap on the ZFS cache, I’ve had no issues at all.
But yes, if I understand correctly, the ZFS cache should release RAM in response to system demand, so I don’t understand why the system swaps when the ARC is not limited to 50% of RAM… In fact, on CORE this worked correctly without tweaking anything.
Exactly the same for me. Since Dragonfish, TrueNAS swaps about 1.4 GB within 24 hours of uptime, resulting in an unresponsive WebUI. Like you, I solved this by setting the ARC limit to half my total RAM…
But for me it’s not a big deal if I can’t use maximum memory for the ZFS cache. I just don’t understand why iX pushed this new ARC behavior when we see this in the real world.
Because one of ZFS’s greatest features, and one end-users notice day-to-day, is the ARC: sheer performance gains, whether for data or metadata, and less stress on the storage devices.
FreeBSD’s memory management works seamlessly with the ZFS ARC.
Linux, on the other hand? Not so much. Hence why it is limited to 50% of available physical RAM: to prevent exactly the kind of issues you are experiencing. (That default is set by upstream OpenZFS on Linux.)
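To see where that ceiling actually sits on a given box, here is a small sketch reading the ARC kstats, assuming OpenZFS on Linux exposing /proc/spl/kstat/zfs/arcstats:

```python
#!/usr/bin/env python3
# Print the current ARC size and ceiling from the OpenZFS kstats
# (each data row of arcstats is "name  type  data").

def arcstat(name: str) -> int:
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == name:
                return int(fields[-1])
    raise KeyError(name)

size, ceiling = arcstat("size"), arcstat("c_max")
print(f"ARC size: {size / 2**30:.1f} GiB of a {ceiling / 2**30:.1f} GiB ceiling (c_max)")
```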
But it’s such a shame that the ARC is arbitrarily limited to 50% of RAM, when it could shine without any restrictions (as evidenced by FreeBSD). What’s the solution for Linux? Tweak a parameter to allow the ARC to use as much memory as it needs, just like on FreeBSD!
And now you see that at the end of the day, it’s still inferior to ZFS on FreeBSD.
But don’t worry. SCALE’s got “Apps” and K3s and TrueCharts. And don’t forget, Linux is “super cool”.
But yes, I love Linux in many respects. OK, ZFS seems not to work as well as on FreeBSD with regard to the ARC. But like I said, it’s not a big deal for my use case.
Every system has its pros and cons. The Linux kernel didn’t become shit just because of this ZFS behavior, man… And FreeBSD didn’t become shit just because it doesn’t have K3s or whatever. It’s just a matter of needs, use case, personal preferences, etc.