NVIDIA compatible driver test for TrueNAS 25.10 “Goldeye”

TrueNAS 25.10 now uses the NVIDIA open GPU kernel modules with the 570.172.08 driver. This enables TrueNAS to make use of NVIDIA Blackwell GPUs - the RTX 50-series and RTX PRO Blackwell cards - which many users have requested support for.

The NVIDIA 50-series Blackwell cards require the new open GPU kernel modules, but several of NVIDIA's older GPU generations - including Maxwell, Pascal, and Volta - lack the GPU System Processor (GSP) on their silicon that the open modules depend on, and so will no longer function. This includes the GTX 700-series, GTX 900-series, and GTX 10-series, the Quadro M-series and P-series, and the Tesla M-series and P-series cards.
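
If you're not sure which generation your card belongs to, the PCI vendor:device ID is the quickest thing to check against NVIDIA's support lists. A minimal sketch that lists NVIDIA display devices from sysfs (standard Linux layout, nothing TrueNAS-specific):

    #!/usr/bin/env python3
    # List NVIDIA display devices by PCI vendor:device ID so you can check
    # your card against the affected generations listed above.
    from pathlib import Path

    NVIDIA_VENDOR = "0x10de"

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        if (dev / "vendor").read_text().strip() != NVIDIA_VENDOR:
            continue
        # Class 0x03xxxx = display controller; skips NVIDIA audio/USB functions
        if not (dev / "class").read_text().strip().startswith("0x03"):
            continue
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}  10de:{device[2:]}")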

I modified the official repository’s scale-build to switch back to the proprietary driver in order to support these soon-to-be-deprecated cards, and uploaded the compiled driver to truenas-nvidia-drivers.

Anyone running TrueNAS 25.10 beta with an NVIDIA GPU is welcome to test this driver and share feedback on how it performs.

My machine has a Tesla P4 installed. It handles real-time transcoding in Plex and Jellyfin, runs Immich's transcoding and machine-learning features (smart search, face recognition), and feeds Beszel's monitoring (power draw, utilization, VRAM), all at the same time, and everything runs smoothly.
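
If you want to sanity-check the same metrics after installing the driver, nvidia-smi can be queried directly; a small sketch using standard nvidia-smi query fields (nothing specific to this build):

    #!/usr/bin/env python3
    # Spot-check the metrics mentioned above (power draw, utilization, VRAM)
    # straight from nvidia-smi.
    import subprocess

    FIELDS = "name,power.draw,utilization.gpu,memory.used,memory.total"
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    )
    for line in result.stdout.strip().splitlines():
        print(line)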

Related thread: nvidia-kernel-module-change-in-truenas-25-10-what-this-means-for-you

4 Likes

It would be interesting to see if the same driver also supports the newer 50-series cards. Any testing would be appreciated.

If I understood correctly: no.
The new cards need the new driver.

It won’t, and never will by NVIDIA’s design:

truenas kernel: NVRM: The NVIDIA GPU 0000:d8:00.0 (PCI ID: 10de:2d04)
NVRM: installed in this system requires use of the NVIDIA open kernel modules.

So while this module will allow everything from the GTX 750 to the RTX 4090 to function, the buck unfortunately stops there.
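
If you want to double-check which flavor a given box is actually running, the driver banner is usually enough. A rough sketch, assuming the open module announces itself as an "Open Kernel Module" in /proc/driver/nvidia/version:

    #!/usr/bin/env python3
    # Report whether the loaded NVIDIA kernel module is the open or the
    # proprietary flavor, based on the driver banner (assumption: the open
    # module's banner contains "Open Kernel Module").
    from pathlib import Path

    banner_file = Path("/proc/driver/nvidia/version")
    if not banner_file.exists():
        print("No NVIDIA kernel module loaded.")
    else:
        banner = banner_file.read_text().splitlines()[0]
        print(banner)
        print("Flavor:", "open" if "Open Kernel Module" in banner else "proprietary")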

It also means that anyone mixing a 50-series with a 10-series card has to decide which one to isolate and which to run on the host. (And no, "load both" isn't an option - you can only bind a single kmod at a time.)
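
For the isolation side, TrueNAS has a GPU isolation option in the UI; roughly speaking it comes down to binding the passed-through card to vfio-pci instead of the NVIDIA driver. A sketch of the underlying sysfs mechanism, with the PCI address as a placeholder:

    #!/usr/bin/env python3
    # Bind one GPU to vfio-pci via driver_override so the host NVIDIA driver
    # leaves it alone. Assumes the vfio-pci module is already loaded.
    from pathlib import Path

    PCI_ADDR = "0000:0a:00.0"  # placeholder; substitute your card's address
    dev = Path("/sys/bus/pci/devices") / PCI_ADDR

    (dev / "driver_override").write_text("vfio-pci")
    if (dev / "driver").exists():                       # unbind current driver
        (dev / "driver" / "unbind").write_text(PCI_ADDR)
    Path("/sys/bus/pci/drivers_probe").write_text(PCI_ADDR)  # rebind per override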

2 Likes

On a software update… I assume you would have to manually create an update file for each version and then use that?

I assume you will verify that with RC.1…

Just to confirm, this currently works for me with my 1080 Ti on Goldeye 25.10 Beta 1.

Fewer steps than using a Google Coral.

Yes. I don’t plan to replace the P4 in the short term; in the meantime I’ll recompile after each system update and verify that the driver still works on the new version.

3 Likes

Yeah, that’s basically it. I just use a local Gitea Action to pull the official GitHub release, build and package it, and generate a .update file. I’ve been running custom drivers this way for over a full major release with zero manual steps after updates.

Most folks probably stick with the official ISO and load a community-built NVIDIA sysext file, but that still means a bunch of steps after every update (unmerge, unlock /usr, overwrite the NVIDIA files, merge again, reboot or reload the kernel module). If the official release supported picking custom sysext files at boot, the community would only need to handle packaging, and the whole process would be much simpler.
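
Those manual steps can at least be scripted. A rough sketch of the flow (not my actual pipeline): the systemd-sysext merge/unmerge calls are standard, but the image paths are placeholders and the "unlock /usr" step is left as a comment because it depends on the install:

    #!/usr/bin/env python3
    # Rough automation of the manual sysext swap: unmerge, drop in the new
    # NVIDIA image, merge again, then reboot to pick up the new module.
    import shutil
    import subprocess
    from pathlib import Path

    NEW_IMAGE = Path("/root/nvidia-drivers.raw")  # placeholder path
    EXT_DIR = Path("/var/lib/extensions")         # placeholder; TrueNAS may keep
                                                  # its NVIDIA image under /usr,
                                                  # hence the "unlock /usr" step

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("systemd-sysext", "unmerge")              # drop the current overlay
    # ...make /usr writable here if your install requires it (site-specific)...
    shutil.copy2(NEW_IMAGE, EXT_DIR / NEW_IMAGE.name)
    run("systemd-sysext", "merge")                # re-apply extensions
    print("Reboot (or reload the nvidia modules) to load the new driver.")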

1 Like

Could that extend to allow a single release to run with different sysexts, and different personalities, depending on user preference?

You are welcome to make a feature request.

I will warn that my QA lead will complain that it’s untestable… unless you can think of a clever approach. We do worry about less skilled users hitting issues and then wasting time, both the community’s and ours. Any safety belt that reduces this is attractive.

Yeah, it’s definitely hard to define a fully safe/stable way, since sysext contents are basically uncontrolled and very environment-dependent. My idea was just to let each boot menu entry have its own custom sysext config, and not carry it over after upgrades — that way there’s no system/kernel mismatch.

As for the “less skilled users” concern, I think the simplest way is to keep it API-only with a clear “no support” warning.

Custom sysexts would, IMO, require an equivalent to “developer mode” - you’re basically dropping an overlay filesystem on top of TrueNAS, with the requisite dangers of that.

1 Like

Fair enough.

The issue is that the user base is going to split in four:

  1. No GPU on my storage.
  2. Use an Intel/AMD GPU.
  3. Use an “old” Nvidia GPU (transcoding).
  4. Use a new Nvidia GPU (local AI?).

1 and 2 do not care either way. 3 and 4, as I understand, cannot be served simultaneously.
Short to medium-term, the path of least overall annoyance to users is probably to forego the new driver and keep the old one. Medium to long term, the old cards will fade away, so the new driver will have to prevail. Providing TrueNAS with an official choice between two possible sysexts increases the burden of testing and validating—for little benefit on the Enterprise side of things.
Pick the lesser evil…

2 Likes

Working on RC.1

Hi, thank you very much for your work. Would it be possible for you to share the compiled file with the community, so that we can perform the update manually by selecting your update file? Thanks a lot for your contribution!

So long as it’s not Battlemage. As per the thread currently just above this one, B580s don’t work yet and might not until 2026.

Good grief, these transcoding GPUs in TrueNAS are like mating giant pandas…such a narrow window of acceptable conditions to work in!

I’m afraid of breaking everything (isn’t there already a prebuilt update file?)

Yes, there is.

You can manually replace the driver extension package, or, if you prefer convenience, upgrade using the update file.