NVIDIA Kernel Module Change in TrueNAS 25.10 - What This Means for You

It’s unfortunate that you aren’t able to replicate my issue. I have tried updating the BIOS to the latest version and tried a number of BIOS settings, all without success.

For reference, I have been following the discussions below regarding this. They don’t match my issue exactly, but they were worth a try.

Just shots in the dark, but I assume you tried:

  • REBAR on/off
  • 4G decoding on/off
  • CSM enabled/disabled

I’m seeing these referenced in the context of NVIDIA tickets and forum threads, where a particular combination of BIOS patch level and these features will prevent the open/GSP firmware from loading.

Yes, I have tried turning those settings on and off, but it doesn’t have any effect.

When attempting to update the UEFI firmware in Windows, I used the 591.74 driver, which worked just fine. In the past, with Docker on plain Debian 12 and older cards, I have had similar issues with NVIDIA cards coming up, where using certain driver versions helped.

I ended up getting my PNY RTX 5060 Ti detected by nvidia-smi, and I am now able to use Frigate with the GPU. I needed to generate my own nvidia.raw file compatible with TrueNAS 25.10.1, including the kernel, middleware, and many other components.

I found that following the guide “Build Nvidia vGPU Driver extensions (systemd-sysext)” from HomeLabProject really helped, but I did need to change extensions.py to force it to download and configure the 580.105.08 driver, which is the same version used in the development build for TrueNAS SCALE 26.

Below are the exact steps I followed. Since I changed the default bundled driver, I expect this to be unsupported and done at my own risk. It does, however, show that updating the driver fixed my card and made it usable with TrueNAS. The good news is that I made a backup copy of the original nvidia.raw file (driver 570.172.08), so reverting is straightforward: just follow the steps below with the backup file as the one being copied.

I ran the build steps in a separate Debian 12 VM, not on TrueNAS.

sudo apt update

sudo apt install build-essential debootstrap git python3-pip python3-venv squashfs-tools unzip libjson-perl rsync libarchive-tools

mkdir ~/nvidia_build
cd ~/nvidia_build
git clone -b TS-25.10.1 https://github.com/truenas/scale-build.git
cd scale-build
export TRUENAS_TRAIN="TrueNAS-SCALE-Goldeye"
export TRUENAS_VERSION="25.10.1"
export PATH=$PATH:/usr/sbin:/sbin

nano scale_build/extensions.py

Find and modify the download_nvidia_driver function (around line 200-300, depending on version), and replace the entire function with the following, which adapts it to download the standard open driver:

    def download_nvidia_driver(self):
        # Pin the driver to 580.105.08 instead of the bundled default
        version = "580.105.08"
        prefix = "https://us.download.nvidia.com/XFree86/Linux-x86_64"
        filename = f"NVIDIA-Linux-x86_64-{version}.run"
        result = f"{self.chroot}/{filename}"
        # Download into the chroot; -c resumes a partial download on re-runs
        self.run([
            "wget", "-c", "-O", f"/{filename}", f"{prefix}/{version}/{filename}"
        ])
        os.chmod(result, 0o755)
        return result
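Before kicking off the hour-long build, it can be worth sanity-checking that the URL this pattern composes actually exists. A minimal sketch (same version and URL pattern as the function above; swap VERSION for other releases):

```shell
# Compose the driver URL the same way the modified function does
VERSION="580.105.08"
PREFIX="https://us.download.nvidia.com/XFree86/Linux-x86_64"
FILENAME="NVIDIA-Linux-x86_64-${VERSION}.run"
URL="${PREFIX}/${VERSION}/${FILENAME}"
echo "$URL"
# --spider checks the file exists without downloading the ~300 MB .run
command -v wget >/dev/null && wget --spider "$URL" 2>/dev/null \
  && echo "driver found" || echo "could not verify (offline?)"
```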

Then run the following. This took me about an hour to complete.

make checkout
make packages
make update

mkdir -p ./tmpfile/rootfs
sudo mount ./tmp/update/rootfs.squashfs ./tmpfile/rootfs

ls -al ./tmpfile/rootfs/usr/share/truenas/sysext-extensions/nvidia.raw
sudo cp ~/nvidia_build/scale-build/tmpfile/rootfs/usr/share/truenas/sysext-extensions/nvidia.raw ~/nvidia_580.105.08.raw

sudo umount ./tmpfile/rootfs
rmdir ./tmpfile/rootfs

Enable SSH on TrueNAS and perform the following.

Download your nvidia.raw file and upload it to your target TrueNAS system over SSH.

  1. If you checked Install NVIDIA Drivers on the settings panel, unmerge the extensions first:

sudo systemd-sysext unmerge

  2. Make the /usr dataset writable:

sudo zfs set readonly=off "$(zfs list -H -o name /usr)"

  3. Back up the old file and copy in the new one:

sudo mv /usr/share/truenas/sysext-extensions/nvidia.raw /usr/share/truenas/sysext-extensions/nvidia.bak

sudo cp ~/nvidia_580.105.08.raw /usr/share/truenas/sysext-extensions/nvidia.raw

  4. Set the /usr dataset back to read-only:

sudo zfs set readonly=on "$(zfs list -H -o name /usr)"

  5. After you’ve copied the file, merge the extensions again:

sudo systemd-sysext merge

Reboot TrueNAS

So I also just installed a 5060 Ti model and finally got the built-in drivers to work! At least I think they are working. I am using them with the Immich app and had to check the box in app settings to install drivers; then in the Immich app settings I was able to select this card!

Curious why you had to install different drivers? Or maybe this is something different?

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.172.08             Driver Version: 570.172.08     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+

@amoeller @banshee28 can you both give me the full output of nvidia-smi -q in a DM/text file? Go ahead and redact the Serial Number field as I don’t need or want that. :slight_smile:

Sure thing. Just sent you a DM.

Thanks to both of you. From what I can tell, the only material hardware-level difference is the VBIOS version:

  • @amoeller has 98.06.4E.00.17, which didn’t work with the inbox 570 driver.
  • @banshee28 has 98.06.39.00.E3, which did work.

I’d be rather annoyed if NVIDIA released a hardware/VBIOS revision that’s incompatible with earlier drivers, but I wouldn’t be surprised.

With my mate Claude, I intend to try this next week to see if I can get MIG enabled :slight_smile: thanks!

Oh, third-party vendors do that all the time; see, for example, the MIG thread on the NVIDIA developer forums.

I’m currently running TrueNAS 25.04 with an NVIDIA Quadro P2200 (Pascal) installed.

For your impact assessment around 25.10, I want to add a simple data point: there are still active TrueNAS users running Pascal Quadro cards in real systems.

The fact that older NVIDIA generations are still present in deployed systems seems worth tracking in its own right. Support for those cards is also, at least in part, a matter of which driver branches TrueNAS chooses to make available, rather than simply whether the hardware still exists in the field.

Also, if GPU model or generation is not currently included in the anonymous hardware telemetry, it might be worth considering adding that. It would probably give you a clearer picture of how many systems are still running older NVIDIA generations such as Pascal, and help inform future support decisions.

I would also like to explicitly request that TrueNAS consider making multiple NVIDIA driver branches available, so that systems with older but still actively used GPU generations such as Pascal are not excluded unnecessarily.

It was summed up in a rejected Feature Request

This is frustrating, IMO.

While I understand the change, the feature request, as I understand it, was for an option to select which driver to use. The closure summary does not address such an option, only the change that broke old/legacy hardware; therefore, it does not address the feature request.

IMO, this feature request should still stand: an option for different/multiple drivers.

The problem is that a community change request may not have the cost/benefit ratio to make it viable to support, even if it is a good idea.

You always have the option of enabling dev mode and installing any driver you want, NVIDIA or otherwise. If NVIDIA isn’t supporting older cards, I get why iX is also dropping official support. Adding a GUI to select older drivers would be official support. If iX then adds a warning “older drivers not supported, use at own risk,” that would be little different from enabling dev mode and installing them yourself.

The drivers are shipped in the TrueNAS installer/ISO, so each additional driver would add 400-500 MB of storage or require multiple ISOs. It’s not something they use on the Enterprise side, so it would basically be goodwill to the community to add support for hardware that’s already EOL.

While there are constructive discussions on your and @Fleshmauler’s points regarding said feature request, those should all be part of the feature request itself. However, IMO the feature request was closed because of related actions, not any of your points, and no clear reason was stated for the closure.

My need went well beyond what @zzzhouuu did (thanks, it was essential!). With that invaluable breadcrumb trail and my mate Claude, I got a solution that:

  1. uses GitHub workflows to build the TrueNAS sysext environment (using free runners)
  2. auto-checks for new NVIDIA open drivers and runs the workflow
  3. lets me run a workflow for any TrueNAS build or NVIDIA driver version using a UI
  4. injects displaymodeselector into the sysext at install time (not included in the package, as its license agreement prevents distribution)
  5. defines MIGs at each boot using a systemd service in the sysext (WIP)
  6. assigns one compute MIG UUID to all containers with UUIDs configured (WIP)
  7. auto re-applies the sysext package on TrueNAS update (WIP)

And I got this far after 72 hours…
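For anyone curious, items 2 and 3 of that list boil down to the workflow triggers. Roughly like this (a sketch, not my actual workflow; the name, cron cadence, and defaults are illustrative):

```yaml
# Sketch of a GitHub Actions trigger block: a weekly scheduled run to
# check for new NVIDIA open drivers, plus a manual UI trigger with inputs.
name: build-nvidia-sysext
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly check for a new driver release
  workflow_dispatch:
    inputs:
      truenas_version:
        description: "TrueNAS release to build against"
        default: "25.10.1"
      driver_version:
        description: "NVIDIA open driver version"
        default: "580.105.08"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... then build the sysext image as in the steps earlier in the thread
```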

Yes, this is a Blackwell RTX PRO 6000 workstation card with 4 MIGs (2 compute MIGs, 2 graphics MIGs).

@HoneyBadger, please can iXsystems consider including 580.x in the fall release, so I don’t have to maintain the GitHub repo I just had to create? These are the open drivers, so they should work just fine for my needs. :slight_smile: Oh, and please include all the normal user-mode tools except displaymodeselector, which I know you can’t include (and do include the persistence daemon).

truenas_admin@truenas1 ~ 17:45:13 $ sudo nvidia-smi
Sat Mar  7 17:45:15 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.126.18             Driver Version: 580.126.18     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 6000 Blac...    Off |   00000000:0F:00.0 Off |                  Off |
| 30%   34C    P1             81W /  600W |    2681MiB /  97887MiB |     N/A      Default |
|                                         |                        |              Enabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| MIG devices:                                                                            |
+------------------+----------------------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |              Shared Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |                Shared BAR1-Usage | SM     Unc| CE ENC  DEC  OFA  JPG |
|                  |                                  |        ECC|                       |
|==================+==================================+===========+=======================|
|  0    3   0   0  |              64MiB / 24192MiB    | 46    N/A |  1   1    1    0    1 |
|                  |               0MiB /  8327MiB    |           |                       |
+------------------+----------------------------------+-----------+-----------------------+
|  0    4   0   1  |              64MiB / 24192MiB    | 46    N/A |  1   1    1    0    1 |
|                  |               0MiB /  8327MiB    |           |                       |
+------------------+----------------------------------+-----------+-----------------------+
|  0    5   0   2  |            2489MiB / 24192MiB    | 46    N/A |  1   1    1    0    1 |
|                  |               0MiB /  8327MiB    |           |                       |
+------------------+----------------------------------+-----------+-----------------------+
|  0    6   0   3  |              64MiB / 24192MiB    | 46    N/A |  1   1    1    0    1 |
|                  |               0MiB /  8327MiB    |           |                       |
+------------------+----------------------------------+-----------+-----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0    5    0            33175      C   frigate.detector.onnx                   586MiB |
|    0    5    0            33177      C   frigate.embeddings_manager              470MiB |
|    0    5    0            34614      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          188MiB |
|    0    5    0            34640      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          186MiB |
|    0    5    0            34650      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          186MiB |
|    0    5    0            34654      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          188MiB |
|    0    5    0            34659      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          190MiB |
|    0    5    0            34664      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          190MiB |
|    0    5    0            34669      C   /usr/lib/ffmpeg/7.0/bin/ffmpeg          190MiB |
+-----------------------------------------------------------------------------------------+

truenas_admin@truenas1 ~ 17:23:04 $ nvidia-smi -L
GPU 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition (UUID: GPU-a1d956ee-5f28-8140-014f-234454549cfa)
  MIG 1g.24gb     Device  0: (UUID: MIG-f54d3804-de1c-5d53-b2df-189722b5c186)
  MIG 1g.24gb     Device  1: (UUID: MIG-dfe0cf87-9354-5326-a6a7-187f0c7ab4d3)
  MIG 1g.24gb     Device  2: (UUID: MIG-a0496dd0-fa2c-5546-953c-f6b5a63a8a1e)
  MIG 1g.24gb     Device  3: (UUID: MIG-87144e2e-17bb-5983-95c6-b84fa3d63613)

truenas_admin@truenas1 ~ 17:45:15 $ sudo nvidia-smi mig -lgi
+---------------------------------------------------------+
| GPU instances:                                          |
| GPU   Name               Profile  Instance   Placement  |
|                            ID       ID       Start:Size |
|=========================================================|
|   0  MIG 1g.24gb           14        5          6:3     |
+---------------------------------------------------------+
|   0  MIG 1g.24gb           14        6          9:3     |
+---------------------------------------------------------+
|   0  MIG 1g.24gb+gfx       47        3          0:3     |
+---------------------------------------------------------+
|   0  MIG 1g.24gb+gfx       47        4          3:3     |
+---------------------------------------------------------+

This method works to assign a MIG instance to a Docker container.
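For reference, pinning a container to a specific MIG instance via the NVIDIA container toolkit can be sketched like this (a hedged example; the UUID is one of mine from the nvidia-smi -L output above, and the image/service names are illustrative — substitute your own):

```yaml
# docker-compose sketch: expose exactly one MIG device to a container.
# Requires the NVIDIA container runtime on the host.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    runtime: nvidia
    environment:
      # MIG UUID from `nvidia-smi -L`; only this device is visible inside
      - NVIDIA_VISIBLE_DEVICES=MIG-a0496dd0-fa2c-5546-953c-f6b5a63a8a1e
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
```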

And now I have a sysext build system… Hailo-8 driver next :slight_smile:

And I’m moving my TrueNAS out of Proxmox to bare metal.

Congratulations on solving the problem! However, it seems GitHub’s free tier isn’t sufficient for compiling TrueNAS, right? I run all my compilation tasks locally, even though the repository provides configuration files for GitHub Actions.

Let’s switch to DM :slight_smile: but yes, it’s possible. It requires careful work, acceptance that multiple runs might be needed to cache all the package builds, and fixing a couple of bugs in the makefile. The only real blocker is that three storage packages don’t seem compilable in the VM, but creating dummy .deb packages for those three fixes that (they are not needed for sysext creation).

My solution was predicated on getting the runners working, and I like banging my head against the wall until the wall goes ouch :wink: