Extremely slow scrub speeds

I have a pool of six 12 TB HGST drives, arranged as two striped RAIDZ1 vdevs of three drives each.
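
For reference, the layout below is from zpool list -v (exact flags from memory):

zpool list -v StoragePool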

NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
StoragePool                               64.1T  36.3T  27.9T        -         -    24%    56%  1.00x    ONLINE  -
  raidz1-0                                32.1T  15.8T  16.2T        -         -    18%  49.3%      -    ONLINE
    a755e11b-566a-4e0d-9e1b-ad0fe75c569b  10.7T      -      -        -         -      -      -      -    ONLINE
    7038290b-70d1-43c5-9116-052cc493b97f  10.7T      -      -        -         -      -      -      -    ONLINE
    678a9f0c-0786-4616-90f5-6852ee56d286  10.7T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                32.1T  20.4T  11.6T        -         -    30%  63.8%      -    ONLINE
    93e98116-7a8c-489d-89d9-d5a2deb600d4  10.7T      -      -        -         -      -      -      -    ONLINE
    c056dab7-7c01-43b6-a920-5356b76a64cc  10.7T      -      -        -         -      -      -      -    ONLINE
    ce6b997b-2d4f-4e88-bf78-759895aae5a0  10.7T      -      -        -         -      -      -      -    ONLINE

The issue is that scrubbing the pool is painfully slow. It starts off very fast, with speeds over 900 MB/s, but at around 26 TB scrubbed it drops to below 10 MB/s.
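
The numbers below are from watching the pool with something like this (the 5-second interval is a guess at what I used):

zpool iostat StoragePool 5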

               capacity     operations     bandwidth 
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
StoragePool  36.2T  27.9T  1.52K      0   890M      0
StoragePool  36.2T  27.9T  1.68K      0   874M      0
StoragePool  36.2T  27.9T  1.40K     35   864M   672K
StoragePool  36.2T  27.9T  1.32K    133   811M  16.8M
StoragePool  36.2T  27.9T  1.52K      0   883M      0
StoragePool  36.2T  27.9T  1.59K      0   921M      0
StoragePool  36.2T  27.9T  1.71K      0   909M      0
StoragePool  36.2T  27.9T  1.57K      0   870M      0
StoragePool  36.2T  27.9T  1.82K      0   891M      0
StoragePool  36.2T  27.9T    975    208  63.8M  20.0M
StoragePool  36.2T  27.9T   1021      0  19.6M      0
StoragePool  36.2T  27.9T    989      0  25.1M      0
StoragePool  36.2T  27.9T    947      0  22.4M      0
StoragePool  36.2T  27.9T  1.01K      0  22.0M      0
StoragePool  36.2T  27.9T    915      0  19.7M      0
StoragePool  36.2T  27.9T    620      0  17.5M      0
StoragePool  36.2T  27.9T    475      0  16.1M      0
StoragePool  36.2T  27.9T    495      0  16.5M      0
StoragePool  36.2T  27.9T    479      0  14.2M      0
StoragePool  36.2T  27.9T    484      0  13.4M      0
StoragePool  36.2T  27.9T    506      0  14.9M      0
StoragePool  36.2T  27.9T    359      0  15.7M      0
StoragePool  36.2T  27.9T    468    310  21.3M  35.7M
StoragePool  36.2T  27.9T    989      0  18.9M      0
StoragePool  36.2T  27.9T    975      0  17.9M      0
StoragePool  36.2T  27.9T   1003      0  18.7M      0
StoragePool  36.2T  27.9T    925      0  18.0M      0
StoragePool  36.2T  27.9T    695      0  17.6M      0
StoragePool  36.2T  27.9T  1.27K      0  6.67M      0
StoragePool  36.2T  27.9T    863      0  4.58M      0
StoragePool  36.2T  27.9T    647      0  4.05M      0
StoragePool  36.2T  27.9T    549      0  4.01M      0
StoragePool  36.2T  27.9T    467      0  2.40M      0
StoragePool  36.2T  27.9T    355      0  3.71M      0
StoragePool  36.2T  27.9T    813    273  4.70M  34.5M
StoragePool  36.2T  27.9T  1.91K      0  9.86M      0

Checking the per-vdev stats with iostat, it looks like raidz1-1 is seeing higher usage than raidz1-0.
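
For the per-vdev breakdown, a command along these lines works (interval is an assumption):

zpool iostat -v StoragePool 5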

The drives don't have any issues reported by SMART.
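
I checked roughly like this (the device names are placeholders for wherever your six disks sit):

for d in /dev/sd[a-f]; do smartctl -H "$d"; done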

The scrub has been running for almost 48 hours already, and it looks like it will keep going for a while. I also have the feeling that it gets slower with each scrub. Any idea how to troubleshoot this?
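
For the record, I'm tracking progress with plain zpool status, which reports the scan rate and an estimated time to go:

zpool status StoragePool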

Note: the pool was created by TrueNAS SCALE, but then imported into a plain Debian machine, because I need to run other things on it besides serving as a NAS.

Then it’s not a TrueNAS issue. Thread moved.
Check the temperatures.
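
A quick way to read them out with smartmontools (drive paths here are placeholders, adjust for your disks):

for d in /dev/sd[a-f]; do echo "$d"; smartctl -A "$d" | grep -i temperature; done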
