Latest Nvidia drivers

570.153.02 should be merged now, which does support the 5060Ti, but the statement about using nightlies in production - that largely being “don’t” - remains.
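If you're unsure which driver version your SCALE host currently has, a quick sketch (either source may be absent if no NVIDIA driver is loaded):

```shell
# Report the version of the nvidia kernel module, if one is installed
modinfo nvidia 2>/dev/null | grep '^version'

# Alternatively, the loaded driver reports itself here
cat /proc/driver/nvidia/version 2>/dev/null
```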

@TrueUser I don’t believe containerization will sufficiently isolate the GPU to let you load a different driver stack; possibly if it were isolated in the host and then later claimed in the instance, but I don’t believe so. A VM with the GPU isolated from the host will work, though.


God DAMN that is annoying. I just bought a new system with the express plan of running a combined local AI/TrueNAS system.

Makes you wonder if iXsystems should move their release dates to a month after Nvidia’s so this sort of thing doesn’t happen - either that, or users light a fire under Nvidia to get things done quicker.

I’ve now got $6K+ worth of gear which is going to rot for want of drivers.

Depending on your risk tolerance, you could enable dev mode and update the drivers yourself.

It depends on what you mean by risk tolerance…
If it means risking the data on the NAS, then no way; if it means risking the data in a VM instance if it dies, that’s no big deal as it’s disposable.

Enabling dev mode on your system does not immediately break anything, it just removes a number of safeguards. Previous versions of the NVIDIA driver install script would promptly break the TrueNAS middleware if installed with default settings.

In your case I’d look at setting up a VM (or multiple) with the unsupported GPUs isolated to them, and running your AI/LLM loads in there. 25.10 is definitely still in a very early state, and I wouldn’t recommend it for daily driving by anyone other than those developing against TrueNAS directly, as opposed to “developing things ON TrueNAS.”


Sorry if it is a dumb question, but:

Currently I have a very simple server running the Immich and Plex apps without any issues, and I want to add a GPU (Nvidia GTX 960, which does not require legacy drivers) with passthrough for VMs (specifically a Win10 VM). I don’t need the GPU for the Immich or Plex apps, only for the VMs.

I am trying to follow what you wrote but I am new to this and having some issues running the Nvidia driver install. Could you please help? (I know it is an old thread, but I hope you might be able to help me)

I am getting the following error:

Temporary directory /tmp is not executable - use the --tmpdir option to specify a different one.

The Nvidia driver I am trying to install is NVIDIA-Linux-x86_64-580.76.05

If I can get the driver to work, I hope I can then pass it through to the VM via the web UI.

Thank you

If you only need the GPU for VMs, then do not install any driver at the host level as it won’t be used.

Use the System → Advanced menu to isolate the GPU/PCIe device, and assign it to the virtual machine. Inside the Windows 10 machine, you can then install whichever driver you choose.
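Before isolating anything, it can help to confirm which PCI address belongs to which GPU, so you pick the right device in the dropdown. A sketch using standard `lspci` (no driver needed for this):

```shell
# List every display controller with numeric IDs - the integrated GPU
# and any discrete cards will each show their PCI address on the left
lspci -nn | grep -Ei 'vga|3d|display'

# Narrow to NVIDIA devices only, using NVIDIA's PCI vendor ID 10de
lspci -nn -d 10de:
```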

Hey HoneyBadger,

Thank you for your reply,

The problem is, even though the GPU is physically installed in the server, nvidia-smi in the TrueNAS shell cannot see it (command not found), and when I try to choose it in Isolated GPUs under Advanced Settings, it shows “Unknown 0000:02:00.0 slot”; if I try to choose that, it gives me an error:

[EINVAL] gpu_settings.isolated_gpu_pci_ids: 0000:02:00.0 GPU pci slot(s) are not available or a GPU is not configured.

and more info says this about the error:

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 211, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1529, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1460, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/system_advanced/gpu.py", line 44, in update_gpu_pci_ids
    verrors.check()
  File "/usr/lib/python3/dist-packages/middlewared/service_exception.py", line 72, in check
    raise self
middlewared.service_exception.ValidationErrors: [EINVAL] gpu_settings.isolated_gpu_pci_ids: 0000:02:00.0 GPU pci slot(s) are not available or a GPU is not configured.

That’s why I thought I need TrueNAS to see the GPU properly first so that it can pass through.

PS: The install NVidia drivers tickbox is not selected.

Edit:
I also just realized, checking lspci, that 02:00.0 is the integrated graphics?!
Wait, am I having this issue because TrueNAS takes the GTX960 as the primary and releases the integrated GPU for passthrough instead?! WHAT??

I am pretty sure in the BIOS settings I already set the integrated graphics as the primary output device, as suggested; should I switch it back to Auto?

Thank you

Try this, which I wrote in my original post:

mount -o remount,exec /tmp
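For context: that error means /tmp is mounted noexec, so the installer can’t run from it. Besides the remount, the installer’s own error message points at a second option - telling it to unpack somewhere else via `--tmpdir`. A sketch (the directory path here is illustrative):

```shell
# Option A: remount /tmp with exec so the installer can run from it
mount -o remount,exec /tmp

# Option B: point the installer at an exec-capable directory instead,
# as the "--tmpdir" hint in the error message suggests
mkdir -p /root/nvidia-tmp
sh ./NVIDIA-Linux-x86_64-580.76.05.run --tmpdir=/root/nvidia-tmp
```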

Thanks for the reply iamtienng,

I originally tried that, and afterwards I tried to run:
sh ./NVIDIA-Linux-x86_64-580.76.05.run

but this gives the error in my first reply.

I am not even sure if I need the driver, as HoneyBadger suggests I don’t need it for a VM at all.
In your case you are giving the GPU to an app, whereas in my case I am trying to pass it to a VM.

I might be wrong about this, but I think apps are part of TrueNAS, where the GPU is in a way used/handled by TrueNAS, whereas with passthrough for VMs it is not.

I am really confused, as I can’t get the GPU to appear in the passthrough/isolation dropdown even though lspci recognizes it.

Please let me know if you know anything else that I can try.
Thank you

A full output of the system specifications - possibly in another thread? - might be the best way forward here, but the GTX960 does not require new drivers to run with Apps, and should not need any driver on the host at all to work in a VM. lspci -k should show the GTX960 using the vfio driver, which is what’s required for VM passthrough.

Hello,
Thanks again for your reply!

Here is my post from a few weeks ago:

I made a few updates to it so the last comment from me is how it currently is and I am open to try any suggestions.
I hope you can help me, please let me know if you have any other commands to run and see the output.

Thank you

Hi.

I’m in the same boat as the OP. I have a 5060 purchased for transcoding. I already have a P2000 in there, and I wanted to try out the 9th-gen NVDEC to see if it had more luck with HEVC playback.

It looks like, based on what others wrote, I’d need to wait until October, assuming it’s a stable build. I’d rather not rush to a nightly for something with content on it.

04:00.0 VGA compatible controller: NVIDIA Corporation GB206 [GeForce RTX 5060] (rev a1)