Before updating to 25.10-RC1, I had a reliable setup where an rclone script synced data from a remote TrueNAS system over NFS. It mounted at boot, ran on a simple cron schedule, and had been rock-solid for months.
After the RC1 update, the workflow silently failed because systemd no longer handled the NFS mount or environment variables the same way. The automount script still executed, but the mount point wasn’t consistently ready by the time rclone launched.
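The obvious band-aid is to make the script verify the mount before copying anything. A minimal sketch of that guard, with /mnt/remote and the destination path standing in for my real paths:

#!/bin/sh
# Hypothetical pre-flight check for the old cron script: skip the run
# (cron retries next cycle) if the NFS share is not actually mounted yet.
if ! mountpoint -q /mnt/remote; then
    echo "NFS share not mounted yet; skipping this run" >&2
    exit 1
fi
rclone copy /mnt/remote/ /mnt/local/import/ --log-file /var/log/rclone-sync.log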
Rather than fight the race condition, I embedded the NFS mount directly into my Nextcloud Docker stack. The container now mounts the remote dataset internally, which keeps everything isolated and predictable.
Here’s the relevant section of the Compose stack:
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
    volumes:
      - /mnt/vega/appdata/nextcloud/appData:/config
      - /mnt/dagda/users/userData:/data
      - /mnt/dagda/users:/external
      - type: volume
        source: nfs_mac_ingest
        target: /mnt/mac_ingest
    ports:
      - 8887:443
    restart: unless-stopped

volumes:
  nfs_mac_ingest:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.1.7,nfsvers=4,rw"
      device: ":/mnt/turul/mac_ingest"
Now I can execute the transfer directly inside the container:
docker exec -it nextcloud rclone copy /mnt/mac_ingest/ /data/import/ --progress
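To put this back on a schedule, the same command can run from the host's crontab; cron has no TTY, so the -it flags have to go. A sketch, with the hourly schedule and log path as placeholder choices:

# Hypothetical host crontab entry: hourly ingest, non-interactive exec.
0 * * * * docker exec nextcloud rclone copy /mnt/mac_ingest/ /data/import/ >> /var/log/rclone-ingest.log 2>&1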
The result: clean, container-native transfers at around 2 Gb/s (close to line rate on the 2.5 GbE link), with no boot-order dependencies or systemd timing quirks to worry about.
Observation: This feels like a subtle systemd regression or timing change introduced in RC1; mounts that were previously reliable now come up too late for automated jobs. It's not a deal-breaker, but worth noting for anyone who relied on pre-mount scripting or cron automation.
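For anyone who would rather keep host-side automation than move the mount into Docker, systemd can express the dependency directly: a oneshot service with RequiresMountsFor=, driven by a timer instead of cron, will not start until the mount is actually up. A minimal sketch; the unit names and paths are invented for illustration, and I have not tested how well hand-written units survive SCALE upgrades:

# /etc/systemd/system/rclone-ingest.service (hypothetical)
[Unit]
Description=Copy mac_ingest once the NFS share is mounted
# Orders this unit after the mount and fails it if the mount fails
RequiresMountsFor=/mnt/mac_ingest

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone copy /mnt/mac_ingest/ /mnt/dagda/users/userData/import/

# /etc/systemd/system/rclone-ingest.timer (hypothetical)
[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target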
Environment:
- TrueNAS SCALE 25.10-RC1 (Goldeye)
- Docker Compose managed directly from the shell, with stack files edited in Code Server
- NFSv4 link between two TrueNAS systems over 2.5 GbE
Posting here in case other users or devs have run into similar timing issues since RC1.