Electric Eel Plex and Nvidia

Moved to the RC today and overall so far so good, with one hiccup: my GPU, while seen and assignable, is not used for hardware transcoding since the move to the RC.
The ix Plex app was cleanly migrated to Docker and Plex starts up just as expected. I then installed the NVIDIA driver and edited the Plex app to select my NVIDIA T600, restarted the app, and even restarted the NAS. I confirmed that the Plex config was still set to use hardware-assisted transcoding.

Any thoughts on how to start troubleshooting? IMO, the fact that the card was selectable inside the Plex app config after the driver install tells me the driver was properly installed and available… I also selected the Plex Pass image.

I’m experiencing the same issue. I tried unchecking and rechecking the “Install NVIDIA driver” setting, and the system successfully installed the driver. In the system shell, running nvidia-smi gives a proper response, and the Plex app recognizes my NVIDIA A2000, allowing me to select it. However, transcoding doesn’t work, and the GPU doesn’t appear in the transcoder settings. When I access the Plex container shell and run nvidia-smi, I get an error saying the command is not found. It was working great in the Beta build.

Any help or advice is appreciated.
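
For reference, here is how I have been checking whether the NVIDIA runtime is actually wired into the Plex container from the TrueNAS shell (the container name is a placeholder; use whatever docker ps reports for the Plex app on your box):

sudo docker ps --format '{{.Names}}\t{{.Image}}'                                      # find the Plex app's container name
sudo docker inspect <plex-container> --format '{{json .HostConfig.DeviceRequests}}'
# a working GPU assignment should show a device request with "Driver": "nvidia";
# an empty or null result means the app never asked Docker for the GPU at all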

Have you considered installing Dockge and running Plex in a docker container? Here’s a compose.yml you can build off of.

services:
  plex:
    image: plexinc/pms-docker:beta
    container_name: plex
    network_mode: host
    environment:
      - PUID=568
      - PGID=568
      - TZ=$YOUR_TZ
      - VERSION=docker
      - PLEX_CLAIM=$YOUR_CLAIM_TOKEN
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    volumes:
      - /mnt/pool/path/to/plex:/config
      - /mnt/pool/path/to/TV:/tv
      - /mnt/pool/path/to/Movies:/movies
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
    restart: unless-stopped
networks: {}
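
Once the stack is up (from the Dockge UI or with docker compose up -d in the stack's directory), a quick sanity check is to exec into the container; with the device reservation and utility capability above, the runtime injects nvidia-smi into the container:

sudo docker exec -it plex nvidia-smi   # your GPU should be listed from inside the container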

I know that’s a band-aid solution, but I’m not sure what’s wrong with your setup. I’ve tested the app versions of Plex and Jellyfin, and checking the box for my NVIDIA card (Quadro P400) during install results in transcoding working just fine.

Got the same problem. Has anyone figured out how to enable hardware transcoding without transferring Plex to Dockge?

I have an i3-N305 mini-PC running OMV on Debian where the vast majority of my containerized apps run with Docker Compose, so I am familiar with the approach. I just like to have a few apps on my TrueNAS system, and with so much spare CPU plus the NVIDIA card it was a natural fit for Plex. I even have compose files that I used before, though they don’t assign the video card in quite the same way, so thank you for that, I appreciate it.

I might first try just creating another instance of Plex alongside this one, and if it can hardware transcode, maybe I just copy the Plex database from one to the other …
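
If that pans out, something along these lines (with both apps stopped, and my real host paths in place of these placeholder paths) should carry the database and metadata across:

sudo rsync -a /mnt/pool/apps/plex-old/ /mnt/pool/apps/plex-new/
# on the official image the database and metadata live under
# "Library/Application Support/Plex Media Server" inside that config path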

I will also have to look up Dockge and see why it is required, as I thought SCALE EE included all the Docker support by default.

Thank you for the feedback.

I can also run nvidia-smi in the shell and it correctly sees my card:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120                Driver Version: 550.120        CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA T600                    Off |   00000000:65:00.0 Off |                  N/A |
| 44%   48C    P0             N/A /   41W |       1MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

I created a new Plex install and it does indeed use the NVIDIA card, so it appears to be a migration problem. I will try to equalize the settings between the fresh and the migrated instance to see if I can determine the issue.
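
My plan is to compare what Docker actually received for each container, since whatever the app form does ends up rendered into a compose file anyway (container names below are placeholders; docker ps shows the real names for the two apps):

sudo docker inspect <migrated-plex> > /tmp/plex-migrated.json
sudo docker inspect <fresh-plex>    > /tmp/plex-fresh.json
diff /tmp/plex-migrated.json /tmp/plex-fresh.json | grep -iE 'nvidia|gpu'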

I didn’t find a setting that made the difference. Since a new instance of Plex worked correctly, I just decided to trash the app and replace it. I deleted the Plex app, re-created it with exactly the same settings, and it now uses the hardware correctly for transcoding. Since I use host path mapping I didn’t lose anything.
Solved

Plex on Dockge with:

  1. Transcoding
  2. Ramdisk
  3. Tautulli

I prefer the LinuxServer.io images as they have consistent images for nearly all apps needed in a home lab.

networks:
  media:
    name: media
services:
  plex:
    container_name: plex
    image: lscr.io/linuxserver/plex:1.41.0
    hostname: plex
    ports:
      - 32400:32400
    restart: unless-stopped
    runtime: nvidia
    environment:
      - PUID=568
      - PGID=4000
      - TZ=America/Chicago
      - VERSION=docker
      - NVIDIA_VISIBLE_DEVICES=all
    networks:
      - media
    volumes:
      - /pathonhost:/config
      - /pathonhost:/mnt/storage
      - type: tmpfs
        target: /tmptranscode
        tmpfs:
          size: 10000000000
  tautulli:
    container_name: tautulli
    image: lscr.io/linuxserver/tautulli:2.14.5
    hostname: tautulli
    ports:
      - 8181:8181
    restart: unless-stopped
    environment:
      - PUID=568
      - PGID=568
      - TZ=America/Chicago
    networks:
      - media
    volumes:
      - /pathonhost:/config
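
Note that Plex itself still needs its transcoder temporary directory pointed at the ramdisk (Settings > Transcoder in the Plex web UI, set it to /tmptranscode). You can confirm the tmpfs is mounted with something like:

sudo docker exec plex df -h /tmptranscode   # should report a ~10 GB tmpfs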

I did what you suggested and reinstalled a fresh Plex app - it worked, transcoding worked.

But after Plex updated to the newest version, the problem came back.

Has anyone encountered a similar problem?

Following this since I have the same issue with Jellyfin. My migrated instance can’t seem to use the GPU, but if I install a new instance of Jellyfin, transcoding works perfectly.

I tried re-doing my Plex install as a fresh one (using host paths), and while it sees the video card, according to Tautulli and TrueNAS CPU usage it is still using the CPU to transcode and not my video card.

Plex shows the video card correctly, but the TrueNAS Plex edit config still shows the video card as a non-NVIDIA card:

GPU Configuration
  Passthrough available (non-NVIDIA) GPUs
  Select NVIDIA GPU(s)
    Unknown
    Use this GPU

Plex shows GP104GL [Tesla P4]

I wonder if the NVIDIA driver that TrueNAS installs is too new, so it doesn’t detect the video card because the card is old.

This all worked just fine before upgrading to Electric Eel RC 2
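
One way I can think of to sanity-check that from the TrueNAS shell (standard commands, nothing specific to my setup):

lspci -nn | grep -i nvidia                                  # is the card visible on the PCIe bus?
nvidia-smi --query-gpu=name,driver_version --format=csv     # does the installed driver actually bind it?
# if the card shows in lspci but not in nvidia-smi, the installed driver branch has likely dropped it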

Also, on my old install, when I tried to update I got this error:
[EFAULT] Failed to render compose templates: base_v1_1_4.utils.TemplateException: Expected [uuid] to be set for GPU in slot [0000:85:00.0] in [nvidia_gpu_selection]

Installing the NVIDIA driver in Electric Eel is now a separate prerequisite step, so make sure you don’t skip that. New driver branches do drop old cards, so that may be an issue too; you can confirm the installed version with nvidia-smi and search to see whether your card is still supported.
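
The [uuid] part of that error also suggests the app’s saved GPU selection no longer matches what the middleware sees for that slot. nvidia-smi can list the UUIDs the driver currently reports (standard flag), which at least tells you the driver side is healthy:

nvidia-smi -L
# e.g.  GPU 0: NVIDIA Tesla P4 (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)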

I had not seen anything in the RC2 GUI on how to install NVIDIA drivers… I thought external package installs were against the TrueNAS philosophy.

OK, so weird thing. I remoted in to have a look today; a friend is watching something, and lo and behold it is hardware transcoding. I checked with nvidia-smi as well and it shows it working with Plex.

Interesting, as it did not work a day or so ago, but now it is working…?
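
For anyone else watching for it, keeping nvidia-smi open during playback makes it obvious:

watch -n 2 nvidia-smi   # a Plex Transcoder process and non-zero GPU-Util should appear while a stream is transcoding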

I moved to RC2 and had to perform a similar dance to make it work.
- Updated to RC2 and Plex “works”, but with no NVIDIA support (hardware transcoding stopped working and no NVIDIA card was selected in the app)
- In an attempt to solve the issue, I added the NVIDIA drivers again - still no go
- I tried to add the NVIDIA card to the app and it failed with:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 488, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 535, in __run_body
    rv = await self.middleware.run_in_thread(self.method, *args)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1363, in run_in_thread
    return await self.run_in_executor(io_thread_pool_executor, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1360, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 268, in nf
    rv = func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 55, in nf
    res = f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/crud.py", line 273, in do_update
    app = self.update_internal(job, app, data, trigger_compose=app['state'] != 'STOPPED')
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/crud.py", line 303, in update_internal
    update_app_config(app_name, app['version'], new_values, custom_app=app['custom_app'])
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/ix_apps/lifecycle.py", line 59, in update_app_config
    render_compose_templates(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/ix_apps/lifecycle.py", line 50, in render_compose_templates
    raise CallError(f'Failed to render compose templates: {cp.stderr}')
middlewared.service_exception.CallError: [EFAULT] Failed to render compose templates: base_v1_1_4.utils.TemplateException: Expected [uuid] to be set for GPU in slot [0000:65:00.0] in [nvidia_gpu_selection]

- I tried to remove the NVIDIA drivers (so I could reinstall them): I cleared the checkbox without error, but then had no option to reinstall them (despite a reboot). This didn’t actually remove the drivers… they are still there, confirmed with nvidia-smi at the console.
- I rebuilt a new instance of the Plex app with the same settings (except I left the user as 568) - works OK, including NVIDIA
- I changed the user to my Plex ID (3010) - works, including NVIDIA