I wanted to share a working pattern for running a mixed-hardware Tdarr cluster coordinated entirely from a TrueNAS SCALE host using the SCALE app.
Environment

- TrueNAS SCALE as the control plane (Tdarr Server + Node via the SCALE app)
- Mixed nodes:
  - Intel iGPU (Linux / Windows) using QSV
  - NVIDIA RTX 4070 using NVENC
  - macOS nodes using VideoToolbox
- Media storage hosted on TrueNAS (ZFS datasets)
Problem

In a mixed environment, Tdarr will happily try to run the wrong encoder on the wrong node unless you explicitly gate it. This caused:

- NVENC being attempted on macOS
- QSV logic behaving inconsistently across platforms
- Failed jobs and wasted cycles
Solution

Use Check Node Hardware Encoder as a first-class decision point in the flow and branch explicitly by encoder capability:

- hevc_qsv → Boosh QSV plugin
- hevc_nvenc → DOOM NVENC plugin
- hevc_videotoolbox → Boosh plugin (macOS automatically falls back to VideoToolbox)
The key insight is that the Boosh QSV plugin:

- Uses QSV on Linux/Windows
- Transparently uses VideoToolbox on macOS
- Ignores encoder selection on macOS by design (which is the correct behavior)
This lets a single flow safely route work across heterogeneous nodes without invalid encoder usage.
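For illustration, here is a minimal sketch of the routing decision the flow makes. This is not Tdarr's actual implementation — the real decisions happen inside the Check Node Hardware Encoder flow node and the Boosh/DOOM plugins — and the helper name and the `libx265` software fallback are my own assumptions for the sketch:

```python
def pick_hevc_encoder(platform: str, available: set[str]) -> str:
    """Hypothetical helper mirroring the flow's branching.

    platform  -- "linux", "windows", or "darwin"
    available -- HEVC hardware encoders this node's ffmpeg build exposes
    """
    # macOS nodes always end up on VideoToolbox, regardless of which branch
    # selected them (mirrors the Boosh plugin ignoring encoder selection
    # on macOS by design).
    if platform == "darwin":
        return "hevc_videotoolbox"
    if "hevc_nvenc" in available:   # NVIDIA node -> DOOM NVENC branch
        return "hevc_nvenc"
    if "hevc_qsv" in available:     # Intel iGPU node -> Boosh QSV branch
        return "hevc_qsv"
    # No hardware encoder detected: software fallback (assumption, not
    # part of the flow described above) rather than failing the job.
    return "libx265"

# The RTX 4070 node:
print(pick_hevc_encoder("linux", {"hevc_nvenc"}))          # -> hevc_nvenc
# A macOS node, whatever the branch nominally picked:
print(pick_hevc_encoder("darwin", {"hevc_videotoolbox"}))  # -> hevc_videotoolbox
```

The point of gating on platform first is exactly what prevents the "NVENC on macOS" failure mode described above.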
Results

- Clean branching across all nodes
- No invalid encoder attempts
- NVENC on the 4070 handles multiple concurrent jobs effortlessly
- macOS nodes quietly contribute without special-casing
- Tdarr coordinated entirely from TrueNAS SCALE without external orchestration
Artifacts
I saved the full flow JSON and a README here for anyone who wants to reuse or adapt it:
https://gist.github.com/Rocketplanner83
This has been running reliably and is scaling better than expected. Posting in case it helps anyone else running Tdarr under SCALE with mixed hardware.