Cron says success but script does not run

Fresh migration to Scale 25.04x from Core; the scripts worked fine in Core.

I have two similar scripts; cron reports success when they run, but the scripts don't actually do anything. If I run them from the shell, they work perfectly.

All permissions are at 770 (owner and group: read, write, and execute). I have tried running them as my user, as truenas_admin, and as root.

This is all that's in the scripts:

!/bin/bash

nohup rsync -Ppaq \
    --exclude=ZZZ_Cron_Rsync_Users_Backup.log \
    --exclude=Backup_Movies_Music_TV \
    --exclude=AAA-C-USERS-SYNCS-VERSIONS \
    --exclude=AAA-D-GAME_PARTITION-SYNCS-VERSIONS \
    --exclude=AAA-DOWNLOAD-SYNCS-VERSIONS \
    --exclude=AAA-ENCRYPTED-VOLUME-SYNCS-VERSIONS \
    --exclude=AAA-PROGRAM_FILES_-_X86-SYNCS-VERSIONS \
    --delete \
    --log-file=/mnt/Users/Joe/ZZZ_Cron_Rsync_Users_Backup.log \
    /mnt/Users/ /mnt/Backup/Backup_Of_Users/ &>/dev/null &

exit

Any clues as to why they won't run?

iX has been hardening TrueNAS, and one way they have done so is to remove execute permissions from certain file systems.

Check the execute permissions on /mnt/Backup by using the following:

zfs get exec Backup

If the result is off, then that is the answer.
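If it does come back `off`, it can be turned back on. A sketch, assuming the dataset really is named Backup as in the command above:

```shell
# Re-enable execution of files on the dataset (dataset name assumed):
sudo zfs set exec=on Backup

# Confirm the change took effect; VALUE should now read "on":
zfs get exec Backup
```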

Edit: Oops, I see a typo. The first line should be:

#!/bin/bash

Leaving off the pound sign (#) likely causes problems, since without a proper shebang the system may not know to run the script with Bash.
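A quick way to confirm the shebang survived copying is to print the script's first line; sketched here with a throwaway file (substitute your real script path):

```shell
# Write a throwaway script with a correct shebang, then check its
# first line. Replace /tmp/demo.sh with the actual script path.
printf '#!/bin/bash\necho hello\n' > /tmp/demo.sh
head -n 1 /tmp/demo.sh
# Should print exactly: #!/bin/bash
```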

Arwen:
Does it make a difference that I have to use sudo to do your command suggestion?

:~$ sudo zfs get exec  /mnt/Backup/Backup_Of_Users.sh
[sudo] password for joe: 
NAME    PROPERTY  VALUE  SOURCE
Backup  exec      on     default

As for the '#' typo, it's just that: a copy-and-paste error on my part.

Any other ideas?

Good, so both of those potential issues are ruled out.

This does not seem like the right syntax. That looks like csh or tcsh syntax, not Bash, which would use 2>/dev/null.

Edit: Never mind about that &>/dev/null syntax; I googled it, and it is valid Bash syntax, I had just never seen it before. It redirects both stdout and stderr at the same time.
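For anyone else who hasn't seen it: `&>file` is Bash shorthand for `>file 2>&1`. A small self-contained demonstration (throwaway log path):

```shell
# Run a command under bash that writes to both streams, capturing
# both with &> (Bash-only shorthand for ">file 2>&1"):
bash -c '{ echo "to stdout"; echo "to stderr" >&2; } &> /tmp/both.log'

# Both lines end up in the same file:
cat /tmp/both.log
```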

OK, for those interested:

I could not get the *.sh scripts to work via cron; all attempts and variations failed, including a cleanup by DuckAi.

My theory: since the scripts worked under Core's cron and in Scale's shell, something is wrong with how cron in Scale handles bash scripts. Since I also have a Python script that is launched by cron, I thought I would test by converting them to Python.

Solution: I used DuckAi to convert the *.sh scripts to Python.
All I had to do then was add python in front of the *.py script paths in cron, and they now work. E.g.: python /mnt/address_to_my_script.py

I have to vent: everything, and I mean everything, in this migration has been broken by SCALE: datasets, permissions, shares, scripts. If not for DuckAi I would have been left with completely broken system scripts and forced to go back to Core, even after days of recreating pools and datasets and restoring backups. Assuming I could even go back, that is, since SCALE broke everything on the way in, it's probably not backwards compatible with Core. What a f*** nightmare. Hopefully it's now over!

This didn't solve it for me (on Goldeneye). I tried the python route and still had no luck. If I click 'Run Now' on the cron job in the UI it works fine, but it just won't run on a schedule. The logs claim it ran successfully, but it doesn't!

What I was trying to achieve is for the machine to power off at a certain time (it only comes on once a day for a backup replication run). I finally got it to work by adding the job to the crontab file using the UI shell interface:

sudo nano /etc/crontab

then added the following line to power off at 13:35 every day:

35 13 * * * root /usr/sbin/poweroff
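One thing to note when editing the system-wide /etc/crontab: its entries include a user field (root above), unlike a per-user crontab edited with `crontab -e`, which omits it. A commented sketch of the same entry:

```shell
# /etc/crontab format: minute hour day-of-month month day-of-week user command
# m   h   dom  mon  dow  user  command
  35  13  *    *    *    root  /usr/sbin/poweroff
```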