Performance of expanded vs new pool

Hi everyone,

I need a bit of clarification on pool performance after expansion. Here’s the pool in question:

pool: Volume1
state: ONLINE
scan: scrub repaired 0B in 13:39:38 with 0 errors on Mon Jul 1 14:14:41 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    Volume1                                         ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/aeefa561-5d44-11e9-aa13-7824af3deb95  ONLINE       0     0     0
        gptid/f4a34f88-5d71-11e9-8e44-7824af3deb95  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        gptid/8ba55022-d3c8-11e9-840e-7824af3deb95  ONLINE       0     0     0
        gptid/96ba999b-e29c-11ee-b121-98b785012d74  ONLINE       0     0     0

errors: No known data errors

It’s a simple striped mirror pool of 8 TB drives.
I am in the process of replacing the drives in one mirror with 16 TB X18 drives, but the performance difference between my old HGST drives and the new ones gave me pause and led me to ask these noobish questions:

A) When I originally added mirror-1 alongside mirror-0 to expand usable storage, was the existing data “equalised” between the two mirrors (like water between two bottom-connected canisters), or was mirror-0 filled first, with mirror-1 written to only once mirror-0 had no more space? (Yes, this is the noob question, as I’m probably mixing in concepts from other filesystems.)

B) If I replace both drives in one of my mirrors with the new X18s, would I theoretically get sequential read performance equal to the sum of all four drives and sequential write performance equal to two drives, or does that only hold if you build a striped mirror from scratch?

So, I guess what I am really asking is: should I hold off on replacing one of the mirrors in this pool, get two more 16 TB drives instead, and set up a new pool to copy all the data to for better performance, or is it irrelevant?

I have been using TrueNAS since the days of FreeNAS 8.x and I know these questions sound very basic indeed, but I’d love to get a clear, if scalding :sweat_smile:, answer once and for all!

Thanks everyone!

A) No. ZFS will not move existing data to rebalance (but there is a script to do that).

After mirror-1 is added, new data is distributed across vdevs proportionally to the free space on each vdev. If mirror-0 was already quite full when you expanded the pool, this means that “new data” ends up almost entirely on mirror-1, and the performance of the pool is essentially that of a single vdev rather than a stripe (old data is read from mirror-0 and new data mostly from mirror-1).
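To make the proportional allocation concrete, here is a toy model of how writes skew toward the emptier vdev. This is only an illustration of the tendency described above, not ZFS's actual allocator (which weighs additional metrics); the vdev names and sizes are made up for the example:

```python
# Toy model: writes are split across vdevs in proportion to free space.
# NOT the real ZFS allocator, just an illustration of the tendency.

def distribute_writes(free_space_tb, total_to_write_tb):
    """Split a write load across vdevs in proportion to each vdev's free space."""
    total_free = sum(free_space_tb.values())
    return {
        vdev: total_to_write_tb * free / total_free
        for vdev, free in free_space_tb.items()
    }

# mirror-0 nearly full (1 TB free), mirror-1 freshly added (8 TB free):
load = distribute_writes({"mirror-0": 1.0, "mirror-1": 8.0}, 4.5)
print(load)  # the vast majority of the new data lands on mirror-1
```

With mirror-0 at 1 TB free and mirror-1 at 8 TB free, 8/9 of new writes go to mirror-1, so reads of that data come from one vdev instead of being striped.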

B) No.

Setting up a whole new pool and replicating data to it will give better performance than upgrading the old pool as you go. But you can also upgrade as you go and then run the rebalancing script.
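The core idea behind such a rebalancing script is simply to rewrite every file in place: the rewritten copy is allocated under the pool's current free-space distribution, spreading its blocks across both mirrors. A minimal sketch of that idea (the actual community script is a bash script with many more safety checks; the function name here is hypothetical):

```python
import os
import shutil

def rebalance_file(path):
    """Rewrite a file so ZFS reallocates its blocks across all vdevs.

    Copy the file to a temporary name, remove the original, then rename
    the copy back. The copy is written fresh, so its blocks follow the
    pool's current free-space distribution across the vdevs.
    """
    tmp = path + ".rebalance.tmp"
    shutil.copy2(path, tmp)   # new blocks are allocated for the copy
    os.remove(path)           # free the old, unbalanced blocks
    os.rename(tmp, path)      # restore the original name
```

Note this naive version breaks hard links, loses ZFS-specific attributes, and momentarily doubles each file's space usage, which is why the real script is preferable to rolling your own.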


Ah, I was not aware of this bash script, thank you for bringing it up:

It makes no difference whether it’s run on CORE or SCALE as long as I have bash, correct?

Now, that’s the answer I was looking for, very clear.

In essence, I can replace a mirror today with the new drives, but I would need to run the script to get the same performance I would get from a striped mirror pool set up from scratch.

Thanks, etorix!

The script is not even specifically designed for TrueNAS, so CORE or SCALE should not matter.