Yes, it’s all there; the external directories are configured correctly, and I can run the script from the container shell.
After struggling for days, I figured out a way to do this. linuxserver.io containers have two mechanisms for running custom scripts around initialization. The first is custom scripts placed in /custom-cont-init.d. Most of my struggle was discovering that, depending on how you write the script, it will either finish before Transmission is ready (often causing a crash), or, if it waits until Transmission is ready, it will wait forever, because Transmission won’t fully launch until the script exits.
The other is custom services, placed in /custom-services.d. The big advantage for this purpose is that custom services are launched after all built-in services are up and running. The disadvantage is that the s6 system will keep relaunching any custom service that exits, so the script would repeat forever. I worked around this by putting sleep infinity at the end of the script, after successful completion, so it never exits and is never relaunched. The basic pattern is sketched just below; the details of my setup follow after that.
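To show the shape of that pattern in isolation (just a sketch, not the actual script; do_the_work is a hypothetical placeholder, and the real script with all its waits and checks appears further down):
#!/bin/bash
# s6 launches this custom service after the built-in services are up
do_the_work        # hypothetical placeholder for the one-shot task
sleep infinity     # never exit, so s6 never relaunches the service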
- Place the script in a folder in a dataset on the TrueNAS host that is not shared via SMB. This is because the folder and script must be owned by root; I found that in an SMB share, ownership kept reverting to my user. The path I used is /mnt/Ark/Unshared/custom-services.d/send-port.sh. Unshared is the dataset and custom-services.d is the folder that will be mounted in the container. (A quick ownership/mount check is sketched right after this list.)
- In the YAML, add the following volume directive: - /mnt/Ark/Unshared/custom-services.d:/custom-services.d:ro. This will mount your folder as the folder of the same name in the container. I can’t remember if I had to create /custom-services.d in the container first.
- In editing the paths in the script and the other volume directives, you need to figure out where gluetun’s forwarded_port file will appear, and where you want the script’s log to appear. My setup on the host is:
/mnt/Ark/Media/Watch/
____send-port.txt (the script log)
____gluetun_config/
________forwarded_port, etc.
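As for the root-ownership requirement in the first step, here is an illustrative check (paths and the container name are the ones used in this thread):
# On the TrueNAS host: make sure root owns the service directory and script
sudo chown -R root:root /mnt/Ark/Unshared/custom-services.d
sudo chmod 755 /mnt/Ark/Unshared/custom-services.d/send-port.sh
# From the host, confirm the container actually sees the mounted script
sudo docker exec transmission ls -l /custom-services.d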
The script is more elaborate than needed; a lot of the waits and checks were added during troubleshooting. It and the full YAML follow.
#!/bin/bash
# This script sends the forwarded VPN port from Gluetun
# to Transmission so it automatically gets used.
# Via SSH, put it in /mnt/Ark/Unshared/custom-services.d,
# which is mounted in transmission container as /custom-services.d
# That directory and this script should be owned by root.
# Improved to catch script errors.
set -euo pipefail # Safer script execution
# Paths
FORWARDED_PORT_FILE="/config/gluetun_config/forwarded_port"
LOG="/config/send-port.txt"
# Append all output to the log file
exec >> "$LOG" 2>&1
echo -e "\n$(date) Starting a new run of send-port.sh"
until pidof transmission-daemon > /dev/null; do
    echo "$(date) Waiting for Transmission process..."
    sleep 5
done
echo "Transmission is running."
until /usr/bin/curl -s http://localhost:9091/transmission/rpc | /bin/grep -q 'session-id'; do
    echo "$(date) Waiting for Transmission RPC..."
    sleep 5
done
echo "Transmission RPC is ready."
# Wait for the forwarded port file to appear
while [[ ! -f "$FORWARDED_PORT_FILE" ]]; do
    echo "$(date) Waiting for Gluetun to generate the forwarded port..."
    sleep 5
done
# Read the forwarded port
FORWARDED_PORT=$(<"$FORWARDED_PORT_FILE")
echo "Forwarded port retrieved: $FORWARDED_PORT"
# Attempt to update Transmission's port.
# Note: with set -e, a bare command substitution would abort the script on
# failure before the error branch runs, so capture the exit code explicitly.
EXIT_CODE=0
OUTPUT=$(/usr/bin/transmission-remote -p "$FORWARDED_PORT" 2>&1) || EXIT_CODE=$?
if (( EXIT_CODE == 0 )); then
    echo "Transmission port successfully updated to $FORWARDED_PORT"
else
    echo "$(date) Failed to update Transmission port. Response:"
    echo "$OUTPUT"
    exit 1
fi
# Log completion and enter big sleep so container doesn't restart me
echo "$(date): send_port.sh completed. Going to sleep indefinitely now."
sleep infinity
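Not part of the script, but a quick way to confirm the change actually took is to query Transmission from inside the container (these are standard transmission-remote flags; the exact output wording may vary by version):
transmission-remote -si | grep -i port    # session info; look for the listening/peer port
transmission-remote -pt                   # port test; should report the port as open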
services:
  gluetun:
    build:
      context: /mnt/Ark/Media/Watch/transmission_config/gluetun_config
    cap_add:
      - NET_ADMIN
    container_name: gluetun
    environment:
      - PUID=1001
      - PGID=1001
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=*
      - OPENVPN_PASSWORD=*
      - UPDATER_PERIOD=24h
      - PORT_FORWARD_ONLY=true
      - VPN_PORT_FORWARDING=on
      - >-
        SERVER_HOSTNAMES=ca-vancouver.privacy.network,mexico.privacy.network,panama.privacy.network,ca-toronto.privacy.network
      - TZ=America/Los_Angeles
    image: qmcgaw/gluetun:latest
    ports:
      - '9091:9091'
    restart: unless-stopped
    volumes:
      - /mnt/Ark/Media/Watch/transmission_config/gluetun_config:/gluetun
      - /mnt/Ark/Media/Watch/transmission_config/gluetun_config:/tmp/gluetun
      - /etc/localtime:/etc/localtime:ro
  transmission:
    container_name: transmission
    environment:
      - PUID=1001
      - PGID=1001
    image: lscr.io/linuxserver/transmission:latest
    network_mode: service:gluetun
    volumes:
      - /mnt/Ark/Media/Watch/transmission_config:/config
      - /mnt/Ark/Media/Downloads:/downloads
      - /mnt/Ark/Media/Watch:/watch
      - /mnt/Ark/Unshared/custom-services.d:/custom-services.d:rw
Thank you for the amazing work! I followed your instructions and I created the unshared dataset with the script. I can navigate to the script from the container shell, so it is mounted correctly, I think. However, the script doesn’t run and I get this warning in the log:
2025-01-30 17:11:22.269191+00:00  s6-supervise custom-svc-send-port.sh: warning: unable to spawn ./run (waiting 60 seconds): No such file or directory
Any ideas on what it means?
Thanks
EDIT: For some reason, the script can’t find the forwarded_port file, although the path is correct. I tried running my very simple script as a service in custom-services.d and it works! I just added a 60-second sleep at the beginning to be sure that Transmission is loaded and that the port file is written by Gluetun.
#!/bin/bash
sleep 60
port="$( cat /downloads/config/gluetun_config/forwarded_port )"
echo "$port"
transmission-remote -p "$port"
sleep infinity
It’s a crude solution, but until I find out why I can’t run your script it’s going to suffice.
Glad you got it to work in the end. No idea what the warning from the original script means. Was the script named custom-svc-send-port.sh, located directly in /custom-services.d?
Let me know if the solution still works after rebooting the server. I like to delete forwarded_port and piaportforward.json before testing so I can see if the web interface gets a new port.
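For reference, that test amounts to something like this on the host (illustrative only; the paths follow the first compose file above):
# Remove the old port state so Gluetun has to negotiate a fresh port
sudo rm /mnt/Ark/Media/Watch/transmission_config/gluetun_config/forwarded_port
sudo rm /mnt/Ark/Media/Watch/transmission_config/gluetun_config/piaportforward.json
sudo docker restart gluetun transmission   # or reboot the server
# Once the custom service has run again, the new port should show in the
# peer port field of the Transmission web UI preferences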
I didn’t change the name of the script; it was always send-port.sh, located in /custom-services.d.
Eventually I managed to run a modified version of your script and it works very well. It survives reboot and, when deleting piaportforward.json and forwarded_port, it detects the new port correctly and sends it to Transmission. However, I had to add an extra 60 seconds of sleep time before sending the port to Transmission, because otherwise the command would have no effect. Maybe my system is slow…
Here is the script that works for me:
#!/bin/bash
LOG="/downloads/config/send-port.txt" #change according to your needs
# Append all output to the log file
exec >> "$LOG" 2>&1
echo -e "\n$(date) Starting a new run of send-port.sh"
until pidof transmission-daemon > /dev/null; do
    echo "$(date) Waiting for Transmission process..."
    sleep 5
done
echo "Transmission is running."
until /usr/bin/curl -s http://localhost:9091/transmission/rpc | /bin/grep -q 'session-id'; do
    echo "$(date) Waiting for Transmission RPC..."
    sleep 5
done
echo "Transmission RPC is ready."
# Wait for the forwarded port file to appear
FORWARDED_PORT_FILE="/downloads/config/gluetun_config/forwarded_port" #change according to your needs
while [[ ! -f "$FORWARDED_PORT_FILE" ]]; do
    echo "$(date) Waiting for gluetun to generate the forwarded port..."
    sleep 5
done
echo "$(date) Waiting for 60 seconds..."
sleep 60
port="$( cat /downloads/config/gluetun_config/forwarded_port )"
echo "Forwarded port retrieved: $port"
transmission-remote -p "$port"
# Log completion and enter big sleep so container doesn't restart me
echo "$(date): send_port.sh completed. Going to sleep indefinitely now."
sleep infinity
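If the fixed 60-second wait ever proves too short (or wastefully long), one alternative, sketched here as a rough idea rather than tested code, is to keep re-sending the port until Transmission reports it back (this assumes the transmission-remote -si session info includes the listening port in its output):
# Re-send the port every few seconds until the session info reflects it
until transmission-remote -si 2>/dev/null | grep -q "port: $port"; do
    transmission-remote -p "$port"
    sleep 5
done
echo "$(date) Transmission confirmed port $port"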
And here is my compose:
services:
  gluetun:
    build:
      context: /mnt/nas/server/transmission/config/gluetun_config
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=3000
      - PGID=3000
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=***
      - OPENVPN_PASSWORD=***
      - UPDATER_PERIOD=24h
      - PORT_FORWARD_ONLY=true
      - VPN_PORT_FORWARDING=on
      - SERVER_HOSTNAMES=de-frankfurt.privacy.network,italy.privacy.network,swiss.privacy.network
    ports:
      - 9091:9091/tcp # WebUI Portal: Transmission, probably don't need this, default?
    volumes:
      - /mnt/nas/server/transmission/config/gluetun_config:/gluetun # External storage for Gluetun config
      - /mnt/nas/server/transmission/config/gluetun_config:/tmp/gluetun # External storage for forwarded_port file
    restart: unless-stopped
  transmission:
    image: lscr.io/linuxserver/transmission:latest
    container_name: transmission
    network_mode: "service:gluetun"
    environment:
      - TZ=GMT+1
      - PUID=3000
      - PGID=3000
    volumes:
      - /mnt/nas/server/transmission/config/transmission_config:/config
      - /mnt/nas/server/transmission:/downloads
      - /mnt/nas/server/transmission/watch:/watch
      - /mnt/nas/unshared/custom-services.d:/custom-services.d:ro
Thank you, you’ve been of tremendous help.
That’s weird. I wonder why the log was complaining about custom-svc-send-port.sh.
Hi, sorry, I know this thread is starting to get a bit old now, but I followed it, along with a little of what @Craig_L was doing with qbittorrent, as I am more familiar with that interface.
I seem to have gotten everything working and was able to download some Linux ISOs once I got the file directories and permissions all worked out. The only part I wasn’t able to follow was the testing of the killswitch. When I check curl ifconfig.me, it shows that qbit has a different IP than the rest of the machine, but I wasn’t able to run the pause commands.
truenas_admin@truenas[~]$ [sudo] docker pause gluetun
zsh: no matches found: [sudo]
truenas_admin@truenas[~]$ docker pause gluetun
permission denied while trying to connect to the Docker daemon socket
Which I believe boils down to me not having permission to connect to the Docker daemon? And the sudo command just did nothing.
Don’t type the square brackets around sudo. Sudo is a command that lets you run the following command as root.
Okay gave that a go and it just complains that the command doesn’t exist
@radiationvictom, you should paste the actual command you’re using, including the prompt, and the output from it so people can be sure exactly what you’re doing. Use the preformatted text tags. Makes it easier to help you.
sudo docker pause gluetun
Or just sudo to bash; then you can run whatever commands you want as root:
sudo bash
Thanks @Craig_L @Glorious1. I figured out my tired, half-dead brain misspelled pause, so yes, that command didn’t exist. Sorry about that. It’s all working now: pausing gluetun stops qbit from downloading or uploading, so all is working as intended. Thank you very much for your respective guides and container/docker scripts.
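For anyone else following along, the kill-switch check described above amounts to something like this (illustrative only; the container names match the compose files earlier in the thread, which is also where curl inside the torrent container comes from):
sudo docker exec transmission curl -s ifconfig.me   # should print the VPN exit IP
curl -s ifconfig.me                                 # on the host: should print your real IP
sudo docker pause gluetun                           # simulate the VPN tunnel dropping
# torrent traffic should stall completely while gluetun is paused
sudo docker unpause gluetun                         # resume; traffic should pick back up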
Hello all, I was wondering if a combination of what @Craig_L and @Glorious1 did can be adapted. I’m totally new to this environment, having spent all my time in a Windows setting, and I’m bashing my head against the keyboard trying to mix and match guides to achieve my goal.
I’m trying to set up qbittorrent with PIA integration, with automatic port forwarding. Ideally with WireGuard, but I can live with OpenVPN.
I was successfully able to get everything working with Transmission using @Glorious1’s guide (minus auto port forwarding), but the web UI is severely lacking in options/settings compared to the desktop version, so I wanted to try qbittorrent instead.
Any help is greatly appreciated, thanks!
You’d have to look into PIA & Gluetun support to see if/how it can be done.
I haven’t read the thread, so maybe this isn’t helpful, but I’m successfully running the following container to get PIA and Transmission with port forwarding. Just make sure you use a region/PoP that supports port forwarding.
haugene/transmission-openvpn
Sorry, I’m a door knob, didn’t notice TZ was timezone lol.
For some reason the code you shared isn’t installing … I tried tweaking it a little, to no avail.
I’m comparing your code to @Glorious1’s and I’m not understanding the differences.