I recently installed TrueNAS SCALE, and a couple of days ago I tried to extend my RAIDZ1 HDD pool from 3 disks to 4. The process started, and the Storage section shows that the disk has been added to the pool, but the pool capacity didn't increase. The Jobs page shows the pool.attach job at 25%, and it hasn't progressed in almost 3 days. Is this normal behaviour, or is something wrong?
RAIDZ expansion can indeed take a long time. In essence, the expansion starts with all existing data on the old disks and all the new spare space on the new disk, and it needs to redistribute existing blocks so that the free space ends up evenly spread across all disks. At every point it must also store the mapping from old locations to new locations on disk, because it needs that information to find the data in its new location. This is a LOT of work.
(What it doesn't do is take e.g. 5 records each holding 4 blocks of data and 2 blocks of redundant parity and convert them into 4 records each holding 5 blocks of data and 2 blocks of parity, recalculating the checksums - it just moves the existing records and keeps the checksums the same. If you want to reclaim the e.g. 2 blocks of the 30 total blocks that such a conversion would free up, you need to run a rebalancing script against your existing data.)
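The core trick of those rebalancing scripts is simply rewriting each file, which makes ZFS reallocate its blocks under the new disk layout. A minimal sketch of the idea (a simplified, hypothetical version - the community scripts add checksum verification, snapshot checks and many other safeguards):

```shell
# Sketch of what a rebalancing script does at its core (simplified,
# hypothetical - real scripts add verification and safety checks).
# Copying a file and renaming the copy over the original forces ZFS to
# write the data out again, spreading it across the new disk layout.
rebalance_file() {
    f="$1"
    tmp="$f.rebalance.tmp"
    cp -p "$f" "$tmp" &&    # -p preserves mode/ownership/timestamps
    mv "$tmp" "$f"          # replace the original with the rewritten copy
}

# Example (bash): rewrite every regular file under a dataset mountpoint.
# find /mnt/pool/dataset -type f -print0 | while IFS= read -r -d '' f; do
#     rebalance_file "$f"
# done
```

Note this roughly doubles the write load on your pool and breaks block sharing with existing snapshots, so it is best run when the data is not snapshotted or the extra space usage is acceptable.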
So it can indeed take several days to expand, and it can look like it has stopped for a long time.
That said, please give us the exact models of your disks so that we can confirm they are not SMR drives - SMR drives are completely unsuitable for ZFS redundant vDevs, and if any of yours are SMR, the drives may time out or may just take a long, long time to finish.
Thank you for the answer.
I ran the command in the shell, and this is the output:
expand: expansion of raidz1-0 in progress since Mon Dec 2 14:52:19 2024
1.34T / 11.9T copied at 4.79M/s, 11.21% done, 26 days 19:12:33 to go
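Those numbers are at least self-consistent - as a quick sanity check (assuming the T and M figures are TiB and MiB), remaining data divided by the copy rate matches the reported time to go:

```shell
# Sanity check of the zpool status estimate (assuming TiB / MiB units):
# (total - copied) / rate should roughly equal the reported "to go" time.
awk 'BEGIN {
    total  = 11.9            # TiB
    copied = 1.34            # TiB
    rate   = 4.79            # MiB/s
    remaining_mib = (total - copied) * 1024 * 1024
    seconds = remaining_mib / rate
    printf "%.1f days remaining\n", seconds / 86400
}'
```

which comes out at about 26.8 days - matching the "26 days 19:12:33 to go" in the status output.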
26 days to finish - that's a long time. I can wait, because I have enough space and don't need to rush. If that's normal behaviour, I'll just wait until it's finished.
Here's my hardware:
Motherboard: Asrock b450 pro4 r2.0
CPU: AMD Ryzen 5 3600
Storage controller:
• 4 x SATA3 6.0 Gb/s connectors, supporting RAID (RAID 0, RAID 1 and RAID 10), NCQ, AHCI and Hot Plug (because I also have 2 NVMe disks connected, I can only use 2 of these SATA connectors instead of 4)
• 2 x SATA3 6.0 Gb/s connectors by ASMedia ASM1061, supporting NCQ, AHCI and Hot Plug
“command not found” points to “needs sudo to run this command”. Try:

for disk in /dev/sd?; do sudo hdparm -W "$disk"; done

Hopefully you'll be asked for the password on the first iteration only.
Changing from AHCI to RAID will not cause your NAS to blow up or bring about worldwide Armageddon.
However, if you do this you may well lose access to your pool, and if anything gets written to the drives by the RAID controller, you may not get access again when you switch back to AHCI.
Anyway, the firm recommendation for ZFS systems is that you should run in AHCI mode and NOT in RAID mode, so your current settings are correct.
Unfortunately the speed has dropped even further. Is there anything else I could check?
expand: expansion of raidz1-0 in progress since Mon Dec 2 14:52:19 2024
2.13T / 11.9T copied at 3.66M/s, 17.88% done, (copy is slow, no estimated time)
People who have reported this being slow have eventually reported that it finished successfully. My advice would be to keep waiting for quite a lot longer.
Wait for the expansion to finish - with an SMR disk in the pool it will take a LOT longer to finish than with all CMR - but it should get there in the end.
Once the expansion has completed, I would agree with @sretalla that you should replace this disk with a CMR drive by resilvering.
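For reference, the underlying ZFS step is a `zpool replace`. A sketch of the command (pool name and device paths here are placeholders - substitute your own from `zpool status`; on TrueNAS SCALE it's generally better to trigger the replacement through the web UI, which does the equivalent):

```shell
# Sketch of replacing the SMR disk once expansion completes.
# "tank" and the by-id paths are PLACEHOLDERS - use your own values.
# /dev/disk/by-id paths are preferred because sdX letters can shuffle on reboot.
POOL="tank"
OLD_DISK="/dev/disk/by-id/ata-WDC_WD60EFAX-EXAMPLE"   # the SMR drive to remove
NEW_DISK="/dev/disk/by-id/ata-WDC_WD60EFPX-EXAMPLE"   # the CMR replacement

# Build the command and print it for review rather than running it blindly:
CMD="zpool replace $POOL $OLD_DISK $NEW_DISK"
echo "$CMD"    # run this manually (as root) once the device paths are verified
```

ZFS then resilvers onto the new drive while the pool stays online, and detaches the old drive when the resilver completes.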
* WD6003FFBX - CMR Red Pro 7200rpm 256MB cache
* WD60EFAX - SMR Red 5400rpm 256MB cache
* WD60EFZX - CMR Red Plus 5640rpm 128MB cache
* WD60EFPX - CMR Red Plus 5400rpm 256MB cache
So one SMR drive. The others are somewhat mismatched in speed and cache, but hopefully Linux / ZFS will cope with that OK.
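If you ever want to script this kind of check, a hedged sketch - the model-to-technology table below just encodes the list above and is nowhere near exhaustive; for any other model you still need the manufacturer's datasheet:

```shell
# Sketch: classify the WD Red 6TB models listed above as CMR or SMR.
# The table only encodes the models from this thread - it is NOT exhaustive.
recording_tech() {
    case "$1" in
        WD6003FFBX) echo "CMR" ;;   # Red Pro, 7200rpm
        WD60EFAX)   echo "SMR" ;;   # Red - avoid for ZFS redundant vDevs
        WD60EFZX)   echo "CMR" ;;   # Red Plus, 5640rpm
        WD60EFPX)   echo "CMR" ;;   # Red Plus, 5400rpm
        *)          echo "unknown - check the manufacturer's datasheet" ;;
    esac
}
```

You can get the model strings to feed into it with something like `lsblk -d -o NAME,MODEL` or `sudo smartctl -i /dev/sdX`.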
Thank you for taking the time to double-check my disks.
I had actually checked the disks myself to make sure they were all CMR. It looks like I got wrong info for this one disk.
I will definitely replace the disk as soon as the expansion process is finished.