Yeah, I Googled how to re-enable apt, read the warnings, and proceeded anyway. No chance it could backfire on me spectacularly, right?
I'm just stubborn. Plex works fine without a GPU (in fact, I'm leaving it running because the GF has Plex Pass and doesn't want to deal with Jellyfin). I just really like the FOSS aspect and wanted to get it working.
I actually spent most of this morning figuring out how to pass the NVIDIA GPU through to a Docker container so I wouldn't have to use the TrueNAS app, and finally got THAT working. So now I'm back to only having Dockge installed in my apps.
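For anyone trying the same thing, a minimal sanity check looks something like the block below. This is the standard NVIDIA container toolkit test, not anything TrueNAS-specific, and it assumes the host drivers and nvidia-container-toolkit are already in place:

```sh
# Confirm the container runtime can see the GPU before wiring it into a
# Jellyfin/Plex compose stack. Assumes host NVIDIA drivers and the
# nvidia-container-toolkit are already installed.
docker run --rm --gpus all ubuntu nvidia-smi

# In a compose file (e.g. one managed through Dockge), the rough equivalent is
# a deploy.resources.reservations.devices entry with driver: nvidia and
# capabilities: [gpu] on the service that needs the card.
```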
TrueNAS attempted to download the Nvidia drivers automatically, but it failed because I had already been doing heavily ill-advised things like running apt commands as root. When I checked the log, it showed the apt command that failed. I manually copied and pasted it into shell and it succeeded.
This is the only error I still get when running the commands above:
**E:** Sub-process /usr/bin/dpkg returned an error code (1)
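For what it's worth, the generic dpkg recovery steps usually suggested for that message are below; nothing TrueNAS-specific, and touching apt on SCALE is unsupported in the first place:

```sh
# Standard apt/dpkg cleanup for "Sub-process /usr/bin/dpkg returned an error code (1)".
# Not a TrueNAS fix -- running apt on SCALE at all is unsupported.
dpkg --configure -a      # finish configuring any half-installed packages
apt-get install -f       # let apt repair broken or missing dependencies
```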
Interesting, it might be that you had a package installed that I don't, as I'm getting an error when installing the NVIDIA drivers. I did, however, get the drivers to install via the docker update command. The only issue is that I can't install clinfo, and Jellyfin is still not able to use the GPU even though it shows the option in the GUI. Suppose this is part of the beta experience =D
Probably. I went down a Google wormhole and tried three or four other things from now-archived TrueNAS forum posts before finally landing on the solution above. It's a miracle my system still runs.
Hi guys! First and foremost, thanks for working on the Docker integration; it's going pretty well. I am migrating a couple of Docker Compose files that should create a host-path volume for each application. I noticed that it does not create a dataset but an ordinary directory (which isn't ideal). I found something interesting in the Docker documentation: change the underlying driver of the Docker engine to zfs instead of overlay2, and it will automatically create datasets instead of plain folders, which aren't even displayed in the Datasets section of the UI.
Link to the doc: Update driver of docker engine
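For context, the change that doc describes boils down to editing the engine's daemon.json; a rough sketch (standard Docker paths and keys, shown here purely for illustration):

```sh
# Roughly what the linked doc describes: switch the engine's storage driver
# to zfs in daemon.json, then restart the daemon. Illustrative only -- a real
# edit would merge into the existing file rather than overwrite it, and as
# noted below, SCALE rewrites this file when the Docker service restarts.
cat <<'EOF' > /etc/docker/daemon.json
{
  "storage-driver": "zfs"
}
EOF
systemctl restart docker
```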
There are significant problems with having Docker dynamically generate datasets. In fact, these problems were part of the reason why overlayfs support was added to ZFS. I honestly don't foresee any situation where we'll go back on that; it was a terribly unmanageable mess of datasets and races.
Yes, I couldn't test it because you need to "stop" the Docker service (I stopped it by unsetting the pool). Then I went to the path where the daemon.json file lives and edited it to use zfs, but when you restart the Docker service (by setting a pool again) the file gets overwritten.
Installed EE and it's working just fine. Zero issues. No problem replicating to a 24.04.2 system either.
Now that the Docker saga is over, I will start with new-to-me-on-my-NAS apps like Pi-hole, which I have experience with on other platforms, before branching out to new stuff and rebuilding the SSL distribution system.
Besides that, being able to reconfig the dashboard is a very nice touch.
I'm absolutely stoked for the vdev expansion feature. I've literally been lamenting the upfront cost of a decent-sized storage solution for home use. I noticed that the data wouldn't be stored as efficiently if I add drives one at a time as funds allow, but it says that the lost headroom would be recovered over time as data is rewritten, or you can manually recover this lost storage. My use case for this is mostly Plex and movies, so it's mostly a write-once, read-many situation. Does simply reading the data fix this lost headroom, or what does the manual option to recover it look like? Is it essentially like resilvering? Thanks in advance for this noob's question.
IIRC, there is no built-in method to "re-balance" pool contents such that all data and parity are redistributed from the existing pool drives to the newly added one. Instead, I believe the new drive gets hammered with new writes until it is as full as the rest of the pool, and only then is data broadly distributed again.
This is why data centers and so on just add VDEVs (or blocks of drives) rather than individual drives to extant pools. Or destroy the pool after replicating it, reformat the drives in a new group, then recreate the pool from backup onto the larger set of drives.
One approach to rebalancing files is the rebalance script that I link to in the sVDEV resource page. It basically takes a file, copies it, verifies the new copy is good, and then deletes the old file (see the sketch below). I used it to force small files onto my newly added sVDEV. The end result is likely good enough, even if destroying the pool and rebuilding it from the ground up would likely perform better.
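Not the actual script from that resource page, just a sketch of the copy/verify/replace idea behind it, for a single (hypothetical) file:

```sh
# Copy -> verify -> replace, the core of the rebalance approach described above.
# The path is made up; a real run loops over an entire dataset.
src="/mnt/tank/media/example-movie.mkv"   # hypothetical file
tmp="${src}.rebalance"

cp -a "$src" "$tmp"              # rewriting the data lays it out across the current vdevs
if cmp -s "$src" "$tmp"; then    # byte-for-byte check that the new copy is good
    mv "$tmp" "$src"             # swap the rewritten copy in for the original
else
    rm -f "$tmp"                 # keep the original if verification fails
    echo "verification failed: $src" >&2
fi
```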
Just rebuilt one of my systems from Core 13.3 to EE. When dumping VMs, I've never been able to sustain a write speed over 3.6gb; with EE we are sitting at 8.
The other thing I'm noticing is that free memory on the EE system (I have 1 TB of RAM in my systems) is sitting at 60 GiB, whereas my Core system sits at 295 GiB.
So as I put some load on this system I'll give some updates next week, but so far for an NFS storage server it's looking pretty damn good!
IMO, there really should be a way to rebalance data in situ from within the GUI; that is to say, a complete solution would include a method of removing this inefficiency and reducing the added risk and performance degradation.