SOLVED - HDD Spin Down Script - TrueNAS SCALE 25.10

After struggling with disk power management in TrueNAS SCALE 25.10, I’ve made a small script to reliably spin down HDDs after a period of inactivity.

Here’s what I’ve previously found:

  • The UI sets Spin down after X minutes and middlewared calls hdparm -S ….
  • Disks support SATA standby (-S 24 or -y works when run manually), but the timer somehow fails for values above roughly 3 minutes.
  • After boot, disks never enter standby automatically, even without any I/O.

So I created a script that:

  1. Monitors disk activity via /sys/block/<dev>/stat.
  2. Tracks per-disk IO counters in temporary files.
  3. Calculates idle time using the modification timestamp of those counters.
  4. Spins down a group only if all disks in the group are idle longer than the threshold.
  5. Handles non-existent disks gracefully.
  6. Supports runtime parameters for threshold and disk groups.
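Steps 1–3 can be sketched roughly like this. This is a simplified, hypothetical illustration of the approach, not the actual script; `THRESHOLD`, `TMPDIR`, and the helper names are placeholders:

```shell
# Simplified sketch of the idle-detection idea (not the actual script).
# Compare the current I/O counters with the last saved snapshot; the idle
# time is the age of the snapshot file, which is rewritten on any activity.
read_io() {
    local statfile="/sys/block/$1/stat"
    [ -r "$statfile" ] || return 1            # skip non-existent disks
    awk '{print $1 "_" $5}' "$statfile"       # completed reads_writes
}

is_idle() {
    local dev="$1" snap="$TMPDIR/${dev}_io" io
    io=$(read_io "$dev") || return 1
    mkdir -p "$TMPDIR"
    if [ -f "$snap" ] && [ "$(cat "$snap")" = "$io" ]; then
        # counters unchanged: idle long enough if the snapshot is old enough
        [ $(( $(date +%s) - $(stat -c %Y "$snap") )) -ge "$THRESHOLD" ]
    else
        echo "$io" > "$snap"                  # activity seen: reset the timer
        return 1
    fi
}
```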

Example usage in cron:

*/5 * * * * /root/scripts/hdd_spin_down.sh 300 "sda sdc sdd;sdb"
  • 300 = threshold in seconds
  • "sda sdc sdd;sdb" = two groups of disks, assuming sda, sdc, and sdd are in one pool and sdb is in another.
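The group argument is just semicolon-separated lists of space-separated disks. A minimal, hypothetical parsing sketch (`parse_groups` is not a function from the script):

```shell
# Hypothetical sketch: split "sda sdc sdd;sdb" into groups on ';';
# each group is itself a space-separated list of disks.
parse_groups() {
    local IFS=';' group
    for group in $1; do
        echo "group: ${group# }"       # trim one leading space, if any
    done
}
```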

Script: https://github.com/DarthJahus/TrueNAS-Scripts/blob/master/hdd_spin_down.sh

Usage: hdd_spin_down.sh [<threshold> [<disk groups>]]

Examples:

hdd_spin_down.sh 7200

hdd_spin_down.sh 1200 "sdx sdy sdz; sdi sdj"

hdd_spin_down.sh 3600 "ata-HITACHI_HUS724030ALA640_P8KBVXSY ata-HGST_HUS724030ALA640_PN1234P9H9XVGX wwn-0x5000cca22cef3bd5"

hdd_spin_down.sh 1800 "sda ata-HGST_HUS724030ALA640_PN1234P9H9XVGX wwn-0x5000cca22cef3bd5;sdb"

If you don’t want to pass parameters (either the disk groups, or both the groups and the threshold), then modify the default values in the script.

Note that you can mix the two notations, but I strongly discourage doing so.


Very nice, thank you.

Would you consider adding support for pool names, perhaps by getting a list of partition UUIDs from ‘zpool status’ then creating a disk group by looking up those UUIDs in /dev/disk/by-partuuid?

That would allow a simpler usage…
hdd_spin_down 600 'pool-usb-backups; pool-cold-storage'
…without needing to worry about finding the identifiers, or about disks getting a different /dev/sdX name over time.
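A rough, untested sketch of that idea, assuming `zpool status -P` (which prints full vdev paths) and `lsblk -no pkname` to map a partition back to its parent disk; `pool_to_disks` is a hypothetical helper, not part of the script:

```shell
# Hypothetical sketch: resolve a pool name to its member disks by listing
# the pool's vdev paths (-P prints full /dev paths) and mapping each
# partition back to its parent disk with lsblk.
pool_to_disks() {
    local path dev
    zpool status -P "$1" | awk '$1 ~ "^/dev/" {print $1}' | while read -r path; do
        dev=$(lsblk -no pkname "$path" 2>/dev/null || true)
        echo "${dev:-$(basename "$path")}"   # fall back to the path's basename
    done | sort -u
}
```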

You are brave to pick up an uphill fight against the OS. :military_medal:

The script is not necessary for CORE, and would not work since hdparm is a Linux thing, so I removed the tag.


@mt104, many have pointed out that block device names (/dev/sd*) can change after a reboot.

As an alternative, you can use the by-id path (/dev/disk/by-id/*), for example:

hdd_spin_down.sh 1800 "ata-HGST_HUS724030ALA640_PN1234P9H9XVGX wwn-0x5000cca22cef3bd5"

Regarding getting the UUID or by-id from zpool status, I’ll definitely look into it. Thank you for the suggestion.


Thank you, @etorix.
Indeed, this is for SCALE, and yes, the subject is clearly controversial :smiley:

As for CORE, using the WebUI settings works correctly (disable S.M.A.R.T., set APM to 1 and define a sleep timer).


Does this script suffer from the same problem as the other spindown script, where the disks do spin down, only to be spun up again for no apparent reason?

@Pross, I haven’t seen the other scripts, but I’ve tested this one for 20 days; no issues.
Make sure nothing is accessing your disks:

  1. App volumes: Ensure you don’t have any app data stored on these HDDs (Docker mounts).
  2. Network access: Check that nothing periodically touches the disks (SMB shares, network scans, etc.).
  3. S.M.A.R.T. checks: Disable S.M.A.R.T. and make sure no scheduled checks are running (check crontab or “Cron Jobs” in the WebUI).
  4. WebUI access: Opening “Storage Dashboard” will wake the disks. Opening “Reporting” is fine; it doesn’t spin up the drives.

With these precautions, the disks should stay spun down until actually needed.

Appreciate the AI answer, but that does not work in 25.10 for me. The disks always spin up a couple of minutes after spinning down. 24.x does not do this.

If you search the web for “github truenas spindown”, you will find the script.

I’m not interested in spindown, but I was still curious… I’m not very confident in bash scripting, so I could be totally wrong, and in that case I apologize in advance xD, but it doesn’t seem to me that the script does that.

It seems to me, rather, that the script checks the disks within each group:

...
            if [ "$state" != "standby" ]; then
                disks_idle_ok+=("$d")

but at the end it launches the command for every disk

...
    if [ ${#disks_idle_ok[@]} -gt 0 ]; then
        cmd="/usr/sbin/hdparm -y $(printf '%s ' "$(for d in "${disks_idle_ok[@]}"; do get_disk_path "$d"; done)")"
        log "All idle disks > $THRESHOLD s. Running: $cmd"

independently of the groups… am I totally dumb?


Insults in another language, that’s clever! If only I could translate that. Look, the script does not work for me. Maybe it will work for someone else, but I’d rather use something that isn’t AI-generated. Good luck with your script.


You are right about the group logic; it was broken:

If even one disk in a group was active, the script should have skipped the entire group, but it didn’t; instead, it kept adding the idle disks individually.

Thank you very much; your comment helped me fix the logic properly. It has been updated on GitHub.
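For anyone following along, the corrected behaviour boils down to something like this. This is a simplified sketch, not the exact code now on GitHub; `is_idle`, `log`, and `HDPARM` are placeholders for the script’s real helpers:

```shell
# Simplified sketch of group-wise spin-down: a group is spun down only if
# EVERY disk in it is idle; one active disk skips the whole group.
log() { echo "$*" >&2; }
is_idle() { [ "$(cat "/tmp/${1}_idle" 2>/dev/null)" = "1" ]; }  # placeholder
HDPARM="${HDPARM:-/usr/sbin/hdparm}"

spin_down_group() {
    local d
    for d in "$@"; do
        if ! is_idle "$d"; then
            log "group '$*': $d is active, skipping the whole group"
            return 1
        fi
    done
    log "group '$*': all idle, spinning down"
    "$HDPARM" -y "${@/#//dev/}"            # e.g. /dev/sda /dev/sdc /dev/sdd
}
```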


If you want to hear my 2 cents about another little thing I noticed, I would also consider being a bit more conservative here:

...
    for d in "${valid_disks[@]}"; do
        disk_statfile="$TMPDIR/${d}_io"

        io=$(read_io "$d")
        if [ $? -ne 0 ]; then
            continue
        fi
...

If something goes wrong there, I would instead lock the spin-down of the whole group and log it properly. IMHO this is more coherent with the group logic, so it’s more a style decision than an error/bug.
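For what it’s worth, that more conservative variant could look roughly like this (a hypothetical sketch; `read_io` and `log` stand in for the script’s real helpers):

```shell
# Hypothetical sketch: if reading any disk's counters fails, refuse to
# spin down the whole group instead of silently skipping that disk.
log() { echo "$*" >&2; }
read_io() { cat "/sys/block/$1/stat" 2>/dev/null; }  # placeholder reader

check_group() {
    local d
    for d in "$@"; do
        if ! read_io "$d" >/dev/null; then
            log "cannot read stats for '$d': locking spin-down of group '$*'"
            return 1
        fi
    done
}
```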

1 Like