TrueNAS SCALE 24.10-BETA.1 is Now Available!

Yeah, I Googled how to re-enable apt, read the warnings, and proceeded anyway. No chance it could backfire on me spectacularly, right? :slight_smile:

I'm just stubborn. Plex works fine without a GPU (in fact, I'm leaving it running because the GF has Plex Pass and doesn't want to deal with Jellyfin). I just really like the FOSS aspect and wanted to get it working.

I actually spent most of this morning figuring out how to pass the NVIDIA GPU through to a Docker container, so I wouldn't have to use the TrueNAS app, and finally got THAT working. So now I'm back to only having Dockge installed in my apps.
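For anyone attempting the same thing: GPU access for a plain Docker/Compose container (outside the TrueNAS apps system) is typically granted through the NVIDIA container runtime. Here is a hedged sketch of the Compose stanza involved; the service name and image are placeholders, and it assumes the NVIDIA driver and the NVIDIA Container Toolkit are already installed on the host:

```yaml
services:
  jellyfin:                      # placeholder; any NVENC/CUDA-capable image
    image: jellyfin/jellyfin
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1           # or pin a specific card via device_ids
              capabilities: [gpu]
```

With that reservation in place, `nvidia-smi` run inside the container should list the GPU if the host-side toolkit is working.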

Yes, a GPU widget, at least for NVIDIA, was planned, as far as I can remember from the post on the old forum.

1 Like

Did you run into any issues with the following when attempting to install the nvidia drivers?

Error! Bad return status for module build on kernel: 6.6.44-production+truenas (x86_64)
If so, how did you solve it?

The only error I encountered was in the TrueNAS GUI. After running:

midclt call -job docker.update '{"nvidia": true}'

TrueNAS attempted to download the NVIDIA drivers automatically, but it failed because I had already been doing heavily ill-advised things like running apt commands as root. When I checked the log, it showed the apt command that had failed. I manually copied and pasted it into the shell, and it succeeded.

This is the only error I still get when running the commands above:

**E:** Sub-process /usr/bin/dpkg returned an error code (1)
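(For what it's worth, when dpkg exits with status 1 like this, the usual first-aid steps on any Debian-based system are the two commands below. This is generic Debian advice, not TrueNAS-specific, and running apt against the boot pool remains unsupported.)

```
dpkg --configure -a   # finish configuring any half-installed packages
apt-get -f install    # attempt to repair broken dependencies
```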

Interesting. It might be that you had a package installed that I don't, as I'm getting an error when installing the NVIDIA drivers. I did, however, get the drivers to install via the docker.update command. The only issue is that I can't install clinfo, and Jellyfin is still not able to use the GPU even though it shows the option in the GUI. I suppose this is part of the beta experience =D

Probably. I went down a Google wormhole and tried three or four other things from now-archived TrueNAS forum posts before finally landing on the solution above. It's a miracle my system still runs. :rofl:

How can apps be added/contributed to the community train?

By submitting a pull request on GitHub.

2 Likes

Hi guys! First and foremost, thanks for working on the Docker integration; it's going pretty well. I am migrating a couple of Docker Compose files that create a host-path volume for each application. I noticed that this does not create a dataset but a normal directory (which isn't ideal); the directories aren't even displayed in the Datasets section of the UI. I found something interesting in the Docker documentation: change the underlying storage driver of Docker from overlay2 to zfs, and it will automatically create datasets instead of ordinary folders.
Link to the doc:
Update driver of docker engine
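For context, the setting that doc describes is a one-line storage-driver entry in /etc/docker/daemon.json:

```json
{
  "storage-driver": "zfs"
}
```

The Docker daemon has to be restarted for a driver change to take effect, and images or containers created under the previous driver become invisible to it (and, as noted in the replies, TrueNAS regenerates this file, so hand edits don't stick).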

I assume you haven't tested it?
We'd have to verify that it migrates from previous releases.

There are significant problems with having Docker dynamically generate datasets. In fact, these problems were part of the reason why overlayfs support was added to ZFS. I honestly don't foresee any situation where we'll go back on that; it was a terribly unmanageable mess of datasets and races.

5 Likes

Yes, I couldn't test it fully. You need to "stop" the Docker service (I stopped it by unsetting the pool), then go to the path where the daemon.json file is and edit it to use zfs, but when you restart the Docker service (by setting a pool again), the file gets overwritten.

1 Like

Installed EE and it's working just fine. Zero issues. No problem replicating to a 24.04.2 system either.

Now that the Docker saga is over, I will start with new-to-me-on-my-NAS apps like Pi-hole, which I have experience with on other platforms, before branching out to new stuff and rebuilding the SSL distribution system.

Besides that, being able to reconfigure the dashboard is a very nice touch.

1 Like

One curiosity I encountered is the following CRITICAL error re: the boot pool that I have a hard time interpreting:
Screenshot 2024-09-16 at 09.06.37
What did I do wrong, how did I trigger this issue, and how do I fix it?

I did not encounter this error in Dragonfish.

I'm absolutely stoked for the vdev expansion feature. I've literally been lamenting the upfront cost of a decent-sized storage solution for home use. I noticed that the data wouldn't be stored as efficiently when adding drives one at a time as funds allow, but the docs say the lost headroom is recovered over time as data is rewritten, or that you can recover the lost storage manually. My use case is mostly Plex and movies, so it's largely a write-once, read-many situation. Does simply reading the data fix the lost headroom, or what does the manual option to recover it look like? Is it essentially like resilvering? Thanks in advance for this noob's question.

IIRC, there is no built-in method to "re-balance" pool contents such that all data and parity are redistributed from the existing pool drives to the newly added one. Instead, I believe new writes are directed at the new drive until it is as full as the rest of the pool, after which data is once more broadly distributed.

This is why data centers and the like add whole vdevs (blocks of drives) rather than individual drives to existing pools. Alternatively, they replicate the pool, destroy it, reformat the drives in a new group, and then recreate the pool from backup onto the larger set of drives.

One approach to rebalancing files is to use the rebalance script that I link to in the sVDEV resource page. It basically takes a file, copies it, verifies the new copy is good, and then deletes the old file. I used it to force small files onto my newly added sVDEV. The end result is likely good enough, even if destroying the pool and rebuilding it from the ground up is likely a more performant approach.
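The core copy/verify/delete pass such a script performs can be sketched in a few lines of shell. This is my own minimal sketch (the function name is made up, and the actual linked script is more elaborate); the idea is that copying a file forces ZFS to allocate fresh blocks under the pool's current vdev layout, and the verified copy then replaces the original:

```shell
set -eu

# Rewrite one file in place: copy, verify, then swap the copy in.
rebalance_file() {
  f=$1
  tmp="${f}.rebalance.tmp"
  cp -p "$f" "$tmp"            # new copy allocates fresh blocks
  if cmp -s "$f" "$tmp"; then  # verify the copy is byte-identical
    mv "$tmp" "$f"             # replace original; old blocks are freed
  else
    rm -f "$tmp"               # keep the original if verification fails
    return 1
  fi
}
```

Run over a whole media tree with something like `find /mnt/tank/media -type f`, this gradually rewrites everything onto the expanded layout.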

1 Like

Just rebuilt one of my systems from Core 13.3 to EE. On Core, I was never able to sustain a write speed over 3.6 Gb/s when I dump VMs; with EE we are sitting at 8.

The other thing I'm noticing is free memory (I have 1 TB of RAM in my systems): on this system it's sitting at 60 GiB, whereas my Core system sits at 295 GiB.

So as I put some load on this system, I'll give some updates next week, but so far, for an NFS storage server, it's looking pretty damn good!

1 Like

It should be pretty easy to "rewrite" a media dataset.

Conceptually, all that needs to be done is moving all the media from one dataset to another.

(Which can be done easily with mv *.)
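One caveat worth noting: mv only rewrites the data when it crosses a dataset boundary. Datasets are separate filesystems, so mv between them copies the blocks and then unlinks the source, whereas an mv within a single dataset is just a rename and rewrites nothing. A sketch with made-up paths:

```
# illustrative paths only; the destination must be a different dataset
mv /mnt/tank/media/* /mnt/tank/media-rewritten/
```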

An alternative is to use the rebalance script.

1 Like

IMO, there really should be a way to rebalance data in situ from within the GUI. That is to say, a complete solution would have a method of removing this inefficiency and the increased risk and performance degradation that come with it.

Some TrueNAS history.

Angelfish started with the docker ZFS driver. It was slow and buggy and created thousands of unnecessary snapshots.

Bluefin switched to overlayfs and resolved most of the issues.

Catalog apps automatically get their own dataset (a good reason to use them).

Apps deployed out of the catalog need manual configuration.
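For a non-catalog app, that manual configuration can be as simple as creating the dataset yourself before pointing the compose host path at it (the pool and app names below are made-up examples):

```
# hypothetical pool/dataset names; run before deploying the stack
zfs create tank/apps/myapp
```

The compose file's host-path volume then points at /mnt/tank/apps/myapp instead of a plain directory.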

2 Likes