Guide to installing Transmission with PIA and port forwarding on TrueNAS 24.10, ElectricEel and later

This is a guide for TrueNAS 24.10+ (ElectricEel and later) for setting up the Transmission BitTorrent client with the VPN provider Private Internet Access (PIA), using port forwarding.

Some people have said that the app ‘dockge’ is needed to do things like this, but I haven’t found that to be the case.

Go to Apps. In the upper right click Discover Apps, then the 3-dot menu on the right, then Install via YAML.

Enter an overall name. I used trans-glue, because the apps involved are transmission and gluetun.

It may be best to copy the code below into a text file, then edit as needed. Under Custom Config, paste it. You can also edit after pasting.

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1001
      - PGID=1001
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=*****
      - OPENVPN_PASSWORD=*****
      - UPDATER_PERIOD=24h
      - PORT_FORWARD_ONLY=true
      - VPN_PORT_FORWARDING=on
      - SERVER_HOSTNAMES=ca-vancouver.privacy.network,mexico.privacy.network,panama.privacy.network,ca-toronto.privacy.network
    ports:
      - 9091:9091/tcp # WebUI Portal: Transmission; probably don't need this (default?)
    volumes:
      - /mnt/Ark/Media/Watch/transmission_config/gluetun_config:/gluetun   # External storage for Gluetun config
      - /mnt/Ark/Media/Watch/transmission_config/gluetun_config:/tmp/gluetun  # External storage for forwarded_port file
    restart: unless-stopped

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    network_mode: "service:gluetun"
    environment:
      - TZ=America/Los_Angeles
      - PUID=1001
      - PGID=1001
    volumes:
      - /mnt/Ark/Media/Watch/transmission_config:/config
      - /mnt/Ark/Media/Downloads:/downloads
      - /mnt/Ark/Media/Watch:/watch

Explanation of the code:

gluetun

This container sets up the VPN connection using your preferred provider. The code above sets it up for port forwarding. Edit PUID and PGID to match your user and group IDs on your TrueNAS server. Edit OPENVPN_USER and OPENVPN_PASSWORD with your VPN credentials.
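To find your numeric IDs, you can run `id` in the TrueNAS shell (here youruser is a placeholder for your actual username):

id youruser
# uid=1001(youruser) gid=1001(youruser) groups=1001(youruser),...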

transmission

This container runs the Transmission BitTorrent client. `network_mode: "service:gluetun"` ensures that Transmission's network traffic goes through the Gluetun VPN. Edit PUID and PGID to match your user and group IDs, as for gluetun. The port for web access to the user interface is 9091; in my case, on my local network, I go to http://192.168.0.102:9091/transmission/web/ (substitute your server's IP). Don't set up ports in the transmission section; let gluetun handle them.

volumes section

For each container, the 'volumes' section maps a folder outside the container (on your TrueNAS pool) to a path inside the container, so the container's files are easy to access. Create these outside folders first. Each 'volumes' entry has the form `- [path outside the container]:[path inside the container]`. Edit the path left of the colon to match your desired external folder.
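For example, to create the external folders used in my compose and give them to the PUID/PGID user (a sketch with my paths and IDs; substitute your own):

sudo mkdir -p /mnt/Ark/Media/Watch/transmission_config/gluetun_config
sudo mkdir -p /mnt/Ark/Media/Downloads
sudo chown -R 1001:1001 /mnt/Ark/Media/Watch /mnt/Ark/Media/Downloads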

Port forwarding

Find the forwarded port in the gluetun log or in the external config's forwarded_port file, then put it in the transmission interface where it says 'Peer listening port' or 'Incoming port', depending on whether you're using the built-in web interface or something like Transmission Remote GUI. It seems that gluetun, when starting, finds the old port record and reuses it if possible. That helps a lot! If the interface shows that the port is open, you're good to go!
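For example, from the TrueNAS shell (using my external config path from the volumes section above):

cat /mnt/Ark/Media/Watch/transmission_config/gluetun_config/forwarded_port
# or hunt for it in the gluetun log:
sudo docker logs gluetun 2>&1 | grep -i forward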

Configure transmission

There are lots of settings in Transmission. I think it's easiest to configure most of them in the web interface (hamburger menu goes to some settings directly, then 'Edit preferences' for the rest). They get saved to the file 'settings.json' in your external transmission config folder (see code above) when the container is stopped. But some settings are only in settings.json. To edit that, make sure you stop trans-glue first. Then edit, start it, and the settings should stick.
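For example, after stopping the app in the TrueNAS UI (path from the volumes section above; use whatever editor you like):

sudo nano /mnt/Ark/Media/Watch/transmission_config/settings.json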

Verify VPN

To verify that transmission will stop communicating if the VPN drops ('killswitch'), execute on the server `[sudo] docker pause gluetun`. Transmission traffic should slowly grind to a halt. To resume, `[sudo] docker unpause gluetun`. Another thing to check is the IP address shown publicly by the transmission container. Enter the transmission shell and type `curl ifconfig.me`. The response should not be the IP assigned to you by your internet provider; it should be that of the VPN server.
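The same IP check can be done in one line from the TrueNAS shell, without entering the container interactively:

sudo docker exec transmission curl -s ifconfig.me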

Nice! I think people were using dockge/portainer in sandboxes from Stux’s guide for 24.04 installs. If people want to manage those in EE, they still can. But like you, just pasting yaml into the Scale app UI means 1 less thing I have to manage!

Couple things that I’d add:

Add a `depends_on: gluetun` in the transmission section.

Link to Gluetun’s project page on different VPN providers:

And a checker to make sure torrents are going through the tunnel:

http://checkmyip.torrentprivacy.com/

And Gluetun also has an HTTP proxy mode, and the Arrs have settings to use it. So you can have your media apps for searching/downloading Linux ISOs behind the VPN as well :wink:
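If you want to try the HTTP proxy, something like this added to the gluetun section should enable it (a sketch based on Gluetun's HTTPPROXY option; 8888 is its default proxy port):

    environment:
      - HTTPPROXY=on
    ports:
      - 8888:8888/tcp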


Those links look quite useful, thanks. I’m not sure how you could use the checkmyip site inside a container. The easiest seems to be curl ifconfig.me, as mentioned near the end of my post.

Regarding depends_on, the gluetun github site says it is unnecessary:

Thanks for this! Have you tried having the GlueTUN container on its own and then running other containers through it without being in a single YAML file? I am about to try that this weekend.

Welcome @JohnCahill , I’m glad the post was useful. No, I have not tried what you described, since I only have Transmission that needs Gluetun. But I think it’s probably doable. Please post back if you are able to do it successfully.

So far no go. I changed service to container, but TrueNAS apparently likes to create its own container names, such as ix-gluetun-gluetun-1, and when I try to reference that, the container is unknown.

I followed your setup with a single YAML; it works, but now I'm struggling with permissions. I had no such problems when I ran Transmission from the catalog.

Just a follow up and trying not to hijack this thread.

  1. You can name a custom app in the YAML code; otherwise TrueNAS will create its own based on the name you put into the custom app name field. In your code, use container_name: gluetun for gluetun.
  2. Adding a separate custom app for transmission does not appear to work if you are trying to attach it to the gluetun container using network_mode: container:gluetun in your YAML. I cannot explain why, because the error log is so vague; I read the error as if TrueNAS interprets this as an additional setting it is not quite sure how to handle.
  3. If I use the single stack as @Glorious1 has provided, it works great, but I am having permission problems with Transmission writing to the download directory, despite being able to jump into the transmission container and touch files and make directories. So I'm still working on that.

Are you putting PUID and PGID in both containers? Make sure you’re using your own user and group ID on the server. Also, when you set up the external storage in the “volumes” section, those external directories should already exist and you should own them. Don’t make directories inside the container; you need to make the directories outside the container like a normal TrueNAS directory.

I think I figured it out. The native Transmission used a UID and GID of 0 (root). Once I ran the single YAML stack with PUID and PGID of 0 for the transmission container, it works perfectly.

I used Mullvad VPN with WireGuard; it is working great, only port forwarding is not supported.

Understood on the volume mapping; I have that straight and made sure it also works with the Arrs stack.

Hi, I'm new to this forum. This guide helped me so much and was very clear; I successfully installed the Transmission client and the VPN (which is working great!). Thank you for sharing.


Hi, I’m running ElectricEel-24.10.1 and I followed your guide, adapting the configuration file to my system, but I get an error message when I click “Save”:

Failed 'up' action for 'trans-glue' app, please check /var/log/app_lifecycle.log for more details.

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 556, in __run_body
    rv = await self.middleware.run_in_thread(self.method, *args)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1367, in run_in_thread
    return await self.run_in_executor(io_thread_pool_executor, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 268, in nf
    rv = func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 55, in nf
    res = f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/crud.py", line 185, in do_create
    return self.middleware.call_sync('app.custom.create', data, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1665, in call_sync
    return methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/custom_app.py", line 88, in create
    raise e from None
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/custom_app.py", line 78, in create
    compose_action(app_name, version, 'up', force_recreate=True, remove_orphans=True)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps/compose_utils.py", line 57, in compose_action
    raise CallError(
middlewared.service_exception.CallError: [EFAULT] Failed 'up' action for 'trans-glue' app, please check /var/log/app_lifecycle.log for more details

Do you have any idea what the problem may be?
Thanks

You would need to post that /var/log/app_lifecycle.log like the error message tells you to. Posting your compose would also be a good idea; just edit out any credentials/keys!

Thank you! But I’m new to Scale and I don’t know how to access the log or what a compose is. It would be great if you could point me in the right direction.
Edit: I found the log file using the shell, but I cannot open or copy it (permission denied).
Edit2: I figured out what a compose is :slight_smile:
Here’s mine:

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=3000
      - PGID=3000
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=
      - OPENVPN_PASSWORD=
      - UPDATER_PERIOD=24h
      - PORT_FORWARD_ONLY=true
      - VPN_PORT_FORWARDING=on
      - SERVER_REGIONS=italy,belgium
    ports:
      - 9091:9091/tcp # WebUI Portal: Transmission; probably don't need this (default?)
    volumes:
      - /mnt/nas/server/transmission/config/gluetun_config:/gluetun # External storage for Gluetun config
      - /mnt/nas/server/transmission/config/gluetun_config:/tmp/gluetun # External storage for forwarded_port file
    restart: unless-stopped

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    network_mode: "service:gluetun"
    environment:
      - TZ=GMT+1
      - PUID=3000
      - PGID=3000
    volumes:
      - /mnt/nas/server/transmission/config/transmission_config:/config
      - /mnt/nas/server/transmission:/downloads
      - /mnt/nas/server/transmission/watch:/watch

Run `sudo cat /var/log/app_lifecycle.log`

OK, it was just a silly general network problem, due to my inexperience with the Scale user interface. I installed Scale today for the first time; I come from FreeNAS but decided to take the plunge. Everything works flawlessly now, and it's a much simpler solution than what I was used to. In FreeNAS, I used OpenVPN and a series of scripts to get transmission to work with PIA. One advantage of the old setup, however, was that it was possible to automatically send the forwarded port to transmission:

transmission-remote --auth "${transUser}":"${transPass}" -p $port

Since the port appears to be written by gluetun to a file, I wonder if such automation would be possible in Scale.
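Something like this, run on the server, might work as a starting point (an untested sketch; the path comes from my volumes section above, and transmission-remote may need --auth if RPC authentication is enabled):

#!/bin/bash
# Untested sketch: poll the forwarded_port file that gluetun writes
# and push any change to Transmission's peer port.
PORT_FILE=/mnt/nas/server/transmission/config/gluetun_config/forwarded_port
last=""
while true; do
    port="$(cat "$PORT_FILE" 2>/dev/null)"
    if [ -n "$port" ] && [ "$port" != "$last" ]; then
        transmission-remote localhost:9091 -p "$port"
        last="$port"
    fi
    sleep 60
done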

When I was using them as a single compose stack, this is what I was using:

services:
  gluetun:
    cap_add:
      - NET_ADMIN
    container_name: gluetun
    environment:
      - VPN_SERVICE_PROVIDER=
      - OPENVPN_USER=
      - OPENVPN_PASSWORD=
      - SERVER_COUNTRIES=
    image: qmcgaw/gluetun:latest
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp
      - 8388:8388/tcp
      - 8388:8388/udp
      - 10095:10095
      - 6881:6881
      - 6881:6881/udp
    pull_policy: always
    restart: unless-stopped
    volumes:
      - /mnt/Array1/configbackups/gluetun:/gluetun
  qbittorrent:
    container_name: qbittorrent
    depends_on:
      - gluetun
    environment:
      - PUID=568
      - PGID=568
      - TZ=
      - WEBUI_PORT=10095
      - TORRENTING_PORT=6881
      - UMASK=022
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: service:gluetun
    restart: unless-stopped
    volumes:
      - /mnt/Array1/configbackups/qbittorrent:/config
      - /mnt/Array1/downloads/torrents:/mnt/Array1/downloads/torrents

You’ll need the webui and tcp/udp ports for transmission in your gluetun part of the compose. In my case I’m using 10095 for qbittorrent webui and 6881 for torrents.

Thank you for your reply. In my case the port number is dynamic, because it is assigned by the VPN provider at connection. gluetun writes this number to a file, and I know how to retrieve it and pass it to transmission with a script:

#!/bin/bash
# Read the port gluetun wrote to its forwarded_port file,
# then tell Transmission to use it as the peer port.
port="$( cat /downloads/config/gluetun_config/forwarded_port )"
transmission-remote -p "$port"

I tried to run this script (test.sh) at startup by setting an entrypoint in the compose:

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    entrypoint: ["/downloads/config/test.sh"]

However, the container crashes. The script works if executed manually from the container shell. As I understand it, this is standard behavior for containers, because the script ends without further instructions. Is there a way to run a script at startup in a container without killing it?

You might try this instead:

entrypoint: ["/bin/bash", "-c", "/downloads/config/test.sh && sleep infinity"]

Thanks, I tried your suggestion but the app crashes instantly. Maybe I should follow an alternative route: transmission is supposed to have a built-in script mode (scripts to be run at startup, when download is finished, etc), but so far I haven’t managed to make it work.

Just to be clear, @goerz , the path to your script must be the path inside the transmission container, and of course the script has to be there. In your volumes directive, I don’t see that you configured an external directory to link with /downloads/config inside the container. If you go into the container’s shell, can you see the script there?