RaidZ1 Expansion: why so slow?

I’m going from a 3x 6TB RaidZ1 and adding a 4th 6TB drive. It’s been running for a day now and the transfer rate seems to stay at 8.25-9 MB/s. Any idea how to speed this up, or what’s keeping it from 50 MB/s? :slight_smile:

The pool’s used space was sitting at 63% before starting. Since starting I’ve stopped all apps and services, to no effect.

Check out this topic: RAIDZ expansion speedup

I did the same thing and my expansion speed increased fivefold.

Obviously this will make your NAS extremely sluggish while the expansion is in progress.

As to why, see the docs section explaining expansion.

What model drives?

Perhaps they are SMR?

All CMR.
3x 6TB IronWolfs,
and adding 1 WD Red Plus (WD60EFPX). I can’t get the IronWolfs anymore, but the WD is also CMR.

I’m currently looking into the above posts.

Expansion is a slow process to avoid hogging too much pool time, but single-digit MB/s is pretty slow even with that considered.

Let’s get the low-hanging fruit and ensure your write cache is enabled:

for disk in /dev/sd?; do hdparm -W $disk; done
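If any of them report write-caching as off, it can usually be switched back on with hdparm; a minimal example (the device name is just a placeholder, substitute the real disk):

hdparm -W1 /dev/sdX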

What’s the system configuration including storage controller?


Write caching is on for all drives.
The system is a Supermicro X11SSH-F with a Xeon E3-1245 v5 and 32GB of RAM.
All the drives are using the SATA connectors on the motherboard.

Do you have a lot of very small files or datasets/zvols with a low recordsize?

Check the output of zpool iostat -v to see what kind of load is being put on the disks.
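If you want to watch the load over time rather than a single snapshot, an interval argument works too; a quick sketch (the pool name here is just an example):

zpool iostat -v tank 5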

Recordsize is 128k. Edit: the bulk of the storage is Plex media, audio files and camera backups from cell phones.

Might be getting warmer. Am I right in thinking the two read operations are odd?
zpool iostat -v

That does look a little odd. Can we cross-reference that disk with iostat -x (without zpool, just the regular iostat)?
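For example (the interval and count here are arbitrary), a few samples of extended per-device stats to compare against the zpool view:

iostat -x 5 3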

Is it possible the parity data just happened to all land on that drive with the 128k recordsize, so TrueNAS is only reading the “data”?

Which disk is the new addition, and what would we expect to see in the iostat results? Pretty even read/write on the original disks and different on the new one, or read/write pretty even across all members?

sde is the new disk.
My understanding is that when expanding onto the new disk, that disk will mostly see writes, while the other disks in the original pool will see both reads and writes.

Just noticed your post with iostat -x is showing two different speeds on the far right: 7200 for sdb and sdc, and 5400 for sdd and sde.

Do you have two different models of the IronWolfs?

Sadly yes, my two originals were 7200 RPM drives in a mirror. All the new drives at that 6TB capacity are now 5400 RPM and have been for some time. I decided to remake the pool as a RaidZ1 maybe a year or two ago, when I needed more storage. Not ideal, I know, with differing RPM drives, but I didn’t see anyone reporting major issues when doing that. Once I made the RaidZ1 pool I could still read/write to it at 300+ MB/s.

Does anyone happen to know the command to pause/restart an expansion?
I’m looking at sdd, and trying to run a SMART test on it also seems to be taking forever.
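(If it helps, whether a long SMART self-test is actually progressing can usually be checked with smartctl; the device name below just matches the disk mentioned above, adjust as needed:

smartctl -a /dev/sdd

The “Self-test execution status” line in the SMART data section reports the remaining percentage.)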

There is this:

raidz_expand_max_reflow_bytes=0 (ulong)
For testing, pause RAID-Z expansion when the reflow amount reaches this value.

I assume setting it back to zero would release it.
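On SCALE, ZFS module tunables are generally exposed under /sys/module/zfs/parameters, so a rough sketch of pausing and resuming with this tunable might look like the following (treat it as an experiment; the docs describe it as a testing knob, not an official pause control):

# pause: expansion stops once the reflow amount passes this tiny threshold
echo 1 > /sys/module/zfs/parameters/raidz_expand_max_reflow_bytes
# resume: 0 restores the default of no limit
echo 0 > /sys/module/zfs/parameters/raidz_expand_max_reflow_bytes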