Dell R730XD with HBA330 and 6TB SAS2 drives very slow

I’m getting very poor performance from a Dell R730XD (128GB RAM, 28 cores) with 12 Dell-branded Seagate SAS2 (6Gb/s) drives behind a Dell HBA330 cross-flashed from an H330.

I’ve just started a pool expansion and it estimates close to 10 days to complete. See the iostat and status results below.

The entire server is genuine Dell, except that the HBA330 was cross-flashed from an H330. It has been running for two years without a problem, but it has always felt too slow.
Any ideas?

root@truenas1[~]# zpool iostat -v Virtual7K
                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
Virtual7K                                 49.9T  4.69T     27    510  19.8M  19.8M
  raidz2-0                                49.9T  4.69T     27    510  19.8M  19.8M
    12cf6d39-2d10-4e16-b37b-629fa78a23dc      -      -      2     46  1.99M  1.80M
    a08ce428-2eb6-4b0f-bd11-1762e923658b      -      -      2     46  1.98M  1.80M
    13b907b6-ee5b-4001-b39e-6fa4e0e4d162      -      -      2     46  1.98M  1.80M
    6cdbba00-84a9-421f-b756-f72a699180ee      -      -      2     46  1.98M  1.80M
    8243098c-ba9a-4681-a85f-8f4df83b4b6b      -      -      2     46  1.98M  1.80M
    c7cca164-faaf-4a88-86ff-2a55a43a8c06      -      -      2     46  1.99M  1.80M
    fa68b8a6-e2e5-409d-9a38-192dd9f42b2a      -      -      2     46  1.99M  1.80M
    d42c0560-7976-4ac8-8062-6d8ffb1c2e4a      -      -      2     46  1.99M  1.80M
    cf2b0fc9-08a5-4701-9500-b7bf8b92fb17      -      -      2     46  1.99M  1.80M
    9f6ebd0c-d8ff-4fbc-982d-ed24fa8a4b7b      -      -      2     46  1.98M  1.80M
    6138655e-5ba0-40c8-9a77-653a74314859      -      -      0    153     25  5.94M
----------------------------------------  -----  -----  -----  -----  -----  -----
root@truenas1[~]# zpool status Virtual7K
  pool: Virtual7K
 state: ONLINE
  scan: scrub repaired 0B in 15:56:56 with 0 errors on Sun Jan  5 15:57:03 2025
expand: expansion of raidz2-0 in progress since Fri Jan 17 20:48:31 2025
        90.9G / 49.9T copied at 65.3M/s, 0.18% done, 9 days 05:51:42 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        Virtual7K                                 ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            12cf6d39-2d10-4e16-b37b-629fa78a23dc  ONLINE       0     0     0
            a08ce428-2eb6-4b0f-bd11-1762e923658b  ONLINE       0     0     0
            13b907b6-ee5b-4001-b39e-6fa4e0e4d162  ONLINE       0     0     0
            6cdbba00-84a9-421f-b756-f72a699180ee  ONLINE       0     0     0
            8243098c-ba9a-4681-a85f-8f4df83b4b6b  ONLINE       0     0     0
            c7cca164-faaf-4a88-86ff-2a55a43a8c06  ONLINE       0     0     0
            fa68b8a6-e2e5-409d-9a38-192dd9f42b2a  ONLINE       0     0     0
            d42c0560-7976-4ac8-8062-6d8ffb1c2e4a  ONLINE       0     0     0
            cf2b0fc9-08a5-4701-9500-b7bf8b92fb17  ONLINE       0     0     0
            9f6ebd0c-d8ff-4fbc-982d-ed24fa8a4b7b  ONLINE       0     0     0
            6138655e-5ba0-40c8-9a77-653a74314859  ONLINE       0     0     0

errors: No known data errors
root@truenas1[~]#

Hey Chris, welcome to the forums.

How full is your pool?

What do you use it for, i.e. datasets for fileshares, zvols for block storage, or something else?

What did your expansion look like (going from what, to what)?

Is there much or any traffic on the system currently?

The pool is unhealthy at about 90% full, which is why I want to expand it. I’m going from 10 × 6TB SAS2 drives to 11. It’s all fileshares, no block storage, and there is currently no other traffic and no other jobs running.

… and hi! Thanks for helping

First up, I have zero practical experience with ZFS expansion, so I can’t really comment on how fast or slow it should be, but I presume it’s comparable to a resilver. What I can say is that it’s probably not a coincidence that you consider the pool slow at 90% capacity. There is no exact threshold, but as a very general rule of thumb, somewhere between 80% and 90% full, ZFS has to work much harder to find empty blocks to write to, and this often leads to slowness. Best practice is to stay below 80%, pushing toward 90% only if you dare. As I say, there is no exact percentage, but ZFS does change its allocation behaviour around these levels. Hopefully once your expansion completes you will start to see some improvement.
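As a rough sanity check, the fill level can be computed directly from the alloc/free figures in the `zpool iostat` output above (49.9T allocated, 4.69T free); `zpool list -o name,cap,frag` would report the same CAP figure, plus fragmentation, directly. A minimal shell sketch using those numbers:

```shell
# Fill percentage from the alloc/free columns of `zpool iostat -v Virtual7K`.
# The 49.9 / 4.69 values are taken from the output posted above.
alloc=49.9   # TiB allocated
free=4.69    # TiB free
awk -v a="$alloc" -v f="$free" \
    'BEGIN { printf "pool is %.1f%% full\n", 100 * a / (a + f) }'
# prints: pool is 91.4% full
```

That puts the pool comfortably past the usual 80–90% guidance, so slow writes are not surprising even before the expansion is considered.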

Pool expansion IS a long and slow process, and the time estimates are unreliable, especially at the start.
So everything looks as expected to me.
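The ETA in the `zpool status` output can be cross-checked by hand: remaining data divided by the current copy rate. This sketch assumes the 65.3M/s rate stays constant (it usually won’t, which is why the estimate drifts), using the 49.9T / 90.9G figures from the expansion line above:

```shell
# Sanity-check the expansion ETA from `zpool status`:
# (total - copied) / rate, with figures from the output posted above.
total_gib=$(awk 'BEGIN { print 49.9 * 1024 }')   # 49.9 TiB expressed in GiB
copied_gib=90.9                                  # GiB copied so far
rate_mib_s=65.3                                  # current copy rate, MiB/s
awk -v t="$total_gib" -v c="$copied_gib" -v r="$rate_mib_s" \
    'BEGIN { s = (t - c) * 1024 / r; printf "%.1f days to go\n", s / 86400 }'
# prints: 9.3 days to go
```

That lands right on the "9 days 05:51:42 to go" reported by `zpool status`, so the estimate is internally consistent; it will only shorten if the copy rate improves as the expansion progresses.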

OK, thanks for your help 🙂