Jellyfin Server Config Reset on Updates (TrueNAS Jellyfin App)

Sounds like a good time to encourage people to set up Tiered Snapshots before you need them :wink:

BTW, FWIW, I wouldn’t use tmpfs for transcodes.

Oh, why would you recommend against using tmpfs in the way I’ve initialised it for transcoding? I thought that with an excess of free RAM (20 GB for me) that can be exclusively dedicated to Jellyfin, it would speed up transcodes significantly compared to using the SSD?

I suppose there’d be more space on the SSD to use, but then it comes down to a smaller but much faster RAM buffer that’s usable for (simultaneous) transcodes vs. a larger buffer on the SSD with slower read/write speeds.

Definitely could be missing something here though! :slight_smile:

Transcodes can be quite large. If you have multiple users…

And the bottleneck should be the transcoding computation, not async writing to the SSD.

Very fair. I did get stuck transcoding even with one user last night (due to a bitrate mismatch, fixed by resetting the quality to “Auto”; it had been set to pull below the desired bitrate and was thus converting on the fly?), with I think 10–20 GB of transcoding tmpfs storage, which I thought would be enough for one user. But CPU was whirring at 50% (I think it capped itself somehow), and that may have been the actual bottleneck.

How would you alternatively recommend setting up transcoding storage within my aforementioned docker compose file (for completeness)? Creating an additional “transcodes” dataset within juni-fin and then setting it as the transcodes directory in compose before re-deploying/updating the container?

Maybe a somewhat dumb question, but on future updates/redeployments like these, my new config settings won’t be overwritten, just the image component of the compose file gets updated, right?

Sorry for all the questions! Really appreciate your time :smiling_face:

IIRC, Jellyfin uses half the cores assigned to it. I give it all the cores.

How would you alternatively recommend setting up transcoding storage within my aforementioned docker compose file (for completeness)? Creating an additional “transcodes” dataset within juni-fin and then setting it as the transcodes directory in compose before re-deploying/updating the container?

Yes. Just don’t bother snapshotting it.
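
Something like this would do it, as a minimal sketch: the “transcodes” dataset path below is hypothetical, and the bind mount replaces the tmpfs entry while the rest of the service stays the same (Jellyfin’s Transcode Path in the UI then points at /transcode).

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    # ...devices, environment, ports, mem_limit, etc. as in your existing compose file...
    volumes:
      - /mnt/rei/configs/juni-fin/config:/config
      - /mnt/rei/configs/juni-fin/cache:/cache
      - /mnt/tank/data:/data
      # hypothetical dedicated dataset for transcodes; no need to snapshot it
      - /mnt/rei/configs/juni-fin/transcodes:/transcode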

Maybe a somewhat dumb question, but on future updates/redeployments like these, my new config settings won’t be overwritten, just the image component of the compose file gets updated, right?

Right.

Turns out I hadn’t set up the transcode directory properly; see my updated docker compose file below. Note I also had to update the Transcode Path to /transcode within the Jellyfin UI to ensure it actually used it (and the RAM).

The TL;DR:

  • Fixed transcode path by mapping /transcode to tmpfs (18GB RAM) and setting it in Jellyfin UI.
  • Confirmed RAM-based transcoding works via logs, df -h, and Glances.
  • Using cpus: 12 and mem_limit: 20192m to allocate all 12 threads and ~20 GB of RAM.
  • Transcodes auto-delete on playback stop → looking for a way to retain them for a period of time??
  • CPU temps are high despite low reported usage. Jellyfin reports 100% of allocated vCPUs, but TrueNAS system-wide shows ~10% actual CPU usage.

The fixed, tmpfs-based setup:

services:
  jellyfin:
    cpus: 12
    devices:
      - /dev/dri:/dev/dri
    environment:
      JELLYFIN_FFMPEG_OPTIONS: '-transcodepath /transcode'
      PGID: '568'
      PUID: '568'
      TZ: Australia/Adelaide
      UMASK: '002'
    image: lscr.io/linuxserver/jellyfin:latest
    mem_limit: 20192m
    ports:
      - '8096:8096'
    restart: unless-stopped
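    # in-RAM transcode scratch space; tmpfs usage counts toward the container's memory, hence the size cap below mem_limit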
    tmpfs:
      - /transcode:rw,size=18012m
    volumes:
      - /mnt/rei/configs/juni-fin/config:/config
      - /mnt/rei/configs/juni-fin/cache:/cache
      - /mnt/tank/data:/data

Testing whether tmpfs was writing to RAM:

Monitoring (via Glances):

[Glances screenshot after about 5 min of playback transcoding]

[Glances screenshot after about 10 min of playback transcoding, at which point the job had finished and CPU was back to 0%, with the video stored in RAM]

[Screenshot: the video being played]

Tested by playing a transcoded stream:

  • RAM usage was up, all with CPU at 0%
  • running df -h /transcode inside the container showed the mount steadily filling
  • the ffmpeg logs confirmed it was indeed writing to /transcode
  • Glances showed RAM steadily being consumed by the transcode

Whilst I can see the value in storing transcodes in RAM (it seems I can fit quite a few movies in tmpfs before it fills up, especially since they’re purged once playback ends), CPU performance and network speeds are likely the real bottlenecks, and RAM storage doesn’t help much with either. Also, because transcodes are deleted as soon as the player is closed, restarting playback restarts the transcoding process. → Is there a setting to retain transcodes for a while?

However…

The bit I’m concerned about now is how high my CPU temps are compared to the reported usage of the actual device itself: Jellyfin shows 100% usage of its allocated resources (12 vCPUs, see the docker compose above), but TrueNAS shows only ~10% total CPU usage (the CPU has 6 cores / 12 threads).

TrueNAS CPU Usage & Temps:

[TrueNAS screenshot: CPU usage and temperatures]

Glances/in-container usage:

[Glances screenshot: in-container CPU usage]

Cooling is admittedly not stellar (just the stock CPU cooler and thermal paste in a somewhat SFF case, a Jonsbo N4, which I regret…), but I’m unsure why, even after passing it 12 threads, it’s not (effectively?) utilising more of the CPU’s total power here…

Apologies for the long post! Trying to document all the findings for any other lost souls :).

You need the Docker mod for Intel:

DOCKER_MODS=linuxserver/mods:jellyfin-opencl-intel

Interesting… I thought that mod was just for tone mapping, which isn’t enabled or needed as I’m not converting between HDR and SDR; I just want to use the transcoding feature. (“Tone-mapping can transform the dynamic range of a video from HDR to SDR while maintaining image details and colors, which are very important information for representing the original scene. Currently works only with 10bit HDR10, HLG and DoVi videos. This requires the corresponding GPGPU runtime.”) But once again, I may be missing something here.

Regardless, I’ve installed it anyway and verified it’s working via the logs and the TrueNAS CLI tests from the docs… but the behaviour hasn’t changed. Jellyfin still reports 100% usage of its allocated vCPUs, but TrueNAS system-wide shows ~10% actual CPU usage, despite having allocated all 12 threads to Jellyfin in docker-compose and setting Transcode Thread Count in the Jellyfin WebUI to MAX.

services:
  jellyfin:
    cpus: 12
    devices:
      - /dev/dri:/dev/dri
    environment:
      - JELLYFIN_FFMPEG_OPTIONS=-transcodepath /transcode
      - PGID=568
      - PUID=568
      - TZ=Australia/Adelaide
      - UMASK=002
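      # Docker mod suggested above; installs the Intel OpenCL/GPGPU runtime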
      - DOCKER_MODS=linuxserver/mods:jellyfin-opencl-intel
    image: lscr.io/linuxserver/jellyfin:latest
    mem_limit: 20192m
    ports:
      - '8096:8096'
    restart: unless-stopped
    tmpfs:
      - /transcode:rw,size=18012m
    volumes:
      - /mnt/rei/configs/juni-fin/config:/config
      - /mnt/rei/configs/juni-fin/cache:/cache
      - /mnt/tank/data:/data

Hmmmmmm… :rofl: