Electric Eel NVIDIA GPU passthrough support

When installing apps on the nightly build, GPU passthrough shows this.

Is NVIDIA GPU passthrough supported or planned? And are there any workarounds?

If your GPU is Nvidia, please see my post for solutions.

@MG-LSJ

This one, right?

So we need to manually install NVIDIA drivers?

Were you able to run Immich? Or do we need to wait for the maintainers to update it?

Yes, I also couldn't run Immich, and the developer told me I needed to wait.

@MG-LSJ

My Immich won't seem to run with an Intel iGPU. It was happy with NVIDIA.

I was first running passthrough and had Docker running through there for testing. Once I felt ready, I unticked it, but it took 2 restarts for nvidia-smi to start working. I noticed on the nightly from last week I had the 560 driver installed; today it's back to the 550.

Plex sees it and transcodes, Ollama works quickly once it loads (I think the CPU is starting to age out), and I tried ComfyUI, which I got working in the VM, but I'm hitting a size limit since my boot drive is so small. Getting the "var/lib/docker/buildkit/containerdmeta.db no space" error.
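For anyone hitting the same no-space error, the standard Docker commands to check and reclaim BuildKit space are:

docker system df        # shows how much space images, containers, volumes, and build cache use
docker builder prune    # removes the BuildKit build cache (asks for confirmation first)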

**Just realized this was in regards to apps (app store), not in general. I haven't used apps in a while; I was using jailmaker, then started switching over to native Docker a few weeks ago here and there. I was referencing isolated GPU passthrough under Advanced settings, for VMs.


I struggled getting Electric Eel to recognize my Quadro P400, since it requires legacy drivers. Then, I stumbled upon this post.

After following the directions there, Nvidia passthrough became available as an option when installing Jellyfin, and hardware transcoding is tested and working. Note that you may also have to run this command from the solution linked above in addition to making sure nvidia-smi is up and running:

midclt call -job docker.update '{"nvidia": true}'
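Roughly, the sequence on the TrueNAS host shell is to confirm the driver first and then flip the middleware flag (the second command is the one from the linked solution):

nvidia-smi                                          # should list the GPU; if not, sort the driver out first
midclt call -job docker.update '{"nvidia": true}'   # tells the middleware to enable NVIDIA for apps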

I'm pretty new to pretty much all of this (TrueNAS, PC building, etc.), but if anyone has questions I'll try to help.


Is there a newer post available on how to install the NVIDIA drivers? I can't remember how to enable aptitude in TrueNAS either.

I have an RTX 4060 for Plex transcoding, and I was told in the past that NVIDIA passthrough "just works", but apparently that's not the case.

Can I go back to Dragonfish or is it fine to stay on Electric Eel? Do I have to reinstall drivers each time I bump the OS version?

Yes:

🙂

Small correction: apt is short for Advanced Package Tool.
aptitude is a text-based (ncurses) front-end to apt but is, if I remember correctly, not actively maintained anymore.


I found that Jellyfin would not work with NVIDIA GPUs, even if the "Install NVIDIA Drivers" option is checked.

[AVHWDeviceContext @ 0x55a7b2ebbec0] Cannot load libcuda.so.1
[AVHWDeviceContext @ 0x55a7b2ebbec0] Could not dynamically load CUDA

To get it working I had to adapt the capabilities:

deploy:
  resources: {"reservations": {"devices": [{"capabilities": ["compute", "utility", "video"], "device_ids": ["all"], "driver": "nvidia"}]}}

With that added to the compose file, transcoding is now working with NVIDIA GPUs.

Stupid question, but where do you edit the YAML file exactly?

If you mean where the YAML is located, you can find it within the hidden apps dataset:

/mnt/.ix-apps/app_configs/<>/versions/<>/templates/rendered/docker-compose.yaml

Since any changes are going to be overwritten, I suggest making a new YAML file and using something like Portainer, or perhaps the 'custom app' function directly.
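As a rough starting point, a custom app or Portainer stack for Jellyfin with the NVIDIA reservation could look something like this (image tag, port, and host paths are placeholders to adjust for your system):

services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - 8096:8096
    volumes:
      - /mnt/tank/apps/jellyfin/config:/config   # placeholder host path
      - /mnt/tank/media:/media:ro                # placeholder host path
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]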


Not an expert here 🙂

Tried creating a new stack in Portainer with the default docker-compose from Jellyfin as a base and changed the specific deploy properties.
After shutting down the original Jellyfin container, the custom container starts normally, but when I change the quality settings of a stream to a lower quality I see the CPU usage jump up significantly.
So that's not working…

One thing to notice: the original Jellyfin container can run "nvidia-smi" but it never shows a running process.
Same goes for the custom Jellyfin container.

Will try to create a custom app now…

FYI: My default docker compose file looks like this:
resources: {"limits": {"cpus": "4", "memory": "8192M"}, "reservations": {"devices": [{"capabilities": ["gpu"], "device_ids": ["GPU-<<long ID>>"], "driver": "nvidia"}]}}

Assuming you have set Nvidia NVENC under transcoding, the best bet to know what Jellyfin is doing would be to look in the transcode logs.
Also, nvidia-smi should be run on the host; I would run the command in the TrueNAS shell to see if the transcode is active (the process is jellyfin-ffmpeg).
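For example, from the TrueNAS host shell (not inside the container):

nvidia-smi              # confirms the driver is loaded; active GPU processes are listed at the bottom
nvidia-smi pmon -c 1    # one sample of per-process GPU usage; look for jellyfin-ffmpeg while a transcode is running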

You may also have to add the Jellyfin user (root) to the video group.

I mean, the app version of Jellyfin not showing a running process with nvidia-smi might be a problem. One thing you could try with your compose is adding an environment variable:

  - NVIDIA_VISIBLE_DEVICES=all

Not sure you need to get that granular with your deploy command. This is what's working for me:

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities:
            - gpu

I am not sure if this also applies to Jellyfin, but I had the exact same issue with the Plex app.

The problem is that the TrueNAS middleware fails to detect the UUID of the nvidia GPU, and so it does not get passed to the container.

The 'fix' is to set up Plex (and most likely this applies to Jellyfin as well) inside Portainer, and then pass the UUID to the container.

See this for how to get the UUID: TrueNAS 24.10-RC.2 is Now Available! - #92 by Chris_Holzer
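In the Portainer stack, passing the UUID explicitly would look roughly like this (placeholder UUID; use the one nvidia-smi -L reports on your host):

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids:
            - "GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # placeholder, replace with your GPU's UUID
          capabilities:
            - gpu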

I took rambro1stbud's suggestion, dumped my old Ollama container, and created a new one with the following compose file, taking care to include the appropriate GPU environment variables and deployment resources:

name: ollama-project
services:

  ollama:
    container_name: ollama
    restart: unless-stopped
    image: ollama/ollama:latest
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - "/mnt/storage/windows_share/Apps/Ollama:/root/.ollama"
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    healthcheck:
      test: ollama list || exit 1
      interval: 10s
      timeout: 30s
      retries: 5
      start_period: 10s
    networks:
      - ollama_network

  ollama-models-pull:
    container_name: ollama-models-pull
    image: curlimages/curl:latest
    command: >
      http://ollama:11434/api/pull -d '{"name":"llama3.1"}'
    depends_on:
      ollama:
        condition: service_healthy
    networks:
      - ollama_network

networks:
  ollama_network:
    driver: bridge

Sadly, this didn't work, as the poor response of the LLM model demonstrated that the GPU resources are not being used at all.
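One quick sanity check before digging further is to confirm the container itself can see the GPU, using the container name from the compose above:

docker exec -it ollama nvidia-smi    # should list the Tesla P4; if not, the runtime/device reservation isn't taking effect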

I was able to get the UUID of the installed Nvidia Tesla P4 with nvidia-smi -L:
GPU 0: Tesla P4 (UUID: GPU-7d073f23-6ec9-13d5-ea9b-52bcebf1f0a9)

But I don't know what to do with it manually. Will this problem be fixed soon? Is there a workaround? Or will the fix show up in the next RC kit?

Thanks!

Tried this in your compose file?

- NVIDIA_VISIBLE_DEVICES=GPU-7d073f23-6ec9-13d5-ea9b-52bcebf1f0a9
- NVIDIA_DRIVER_CAPABILITIES=all

Chris,

Thanks very much for the tip! I have not tried this, but will the first chance I get and will let you and the rest of the thread know.

Thanks again!

-Rodney