Correct way to increase the capacity of an existing pool

I have a pool that was originally set up as a mirror of two 2TB HDDs. One of the disks failed and I replaced it with an 8TB disk. The way I replaced it was:

  1. Marked the bad disk as offline.
  2. Physically attached the new 8TB disk.
  3. Assigned the 8TB disk to the existing pool.

The pool is still 2TB in capacity after resilvering.

Now I just bought another 8TB disk hoping to replace the remaining 2TB disk, with the end result of an 8TB pool utilizing all the disk space on the 2 mirrored drives.

The questions I have:

  1. There is a “Replace” button in the UI. Should I use that instead of the “detach” etc. steps?
  2. There is also an “Expand” button next to my existing pool. Does it play any role here, e.g. should I first “replace” the 2TB disk with the 8TB one and then click “Expand”, or does TrueNAS SCALE do the expanding automatically?

Based on other people’s earlier comments, I think TrueNAS should automatically increase the capacity of the pool once I replace the 2TB disk with the 8TB one, but I am unsure what the “Expand” button does.

It does it automatically.

The “Expand” button is sort of a relic of the past, before the default “autoexpand” feature was set on pools.
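From the CLI, that maps onto two ZFS commands; a minimal sketch, assuming a pool named `tank` and placeholder device names:

```shell
# Hedged sketch -- "tank" and the device names are placeholders.
# With autoexpand on (the default on current TrueNAS pools), ZFS grows
# the pool on its own once every disk in a vdev is large enough:
zpool set autoexpand=on tank

# The "Expand" button is roughly the manual equivalent: online each
# member disk with -e so ZFS claims any newly available space:
zpool online -e tank ada0
zpool online -e tank ada1
```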


Don’t use “detach” for this.[1] Either “offline” → “replace” (if you need to reboot and physically swap drives), or skip straight to “replace” if your case has enough free slots to house all the drives at once.


  1. “Detach” will downgrade the mirror vdev to a single disk stripe. ↩︎
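For reference, the recommended “offline” → “replace” flow looks roughly like this from the shell (a sketch only; the pool name `tank` and device names are placeholders):

```shell
# Hedged sketch of offline -> replace; substitute your pool/devices.
zpool offline tank ada1        # take the failing disk out of service
# ...power down and physically swap in the 8TB disk if needed...
zpool replace tank ada1 ada3   # resilver onto the new disk
zpool status tank              # monitor resilver progress
```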

2 Likes

OK. I just replaced the 1.5TB disk with an 8TB one.
I first marked the 1.5TB disk as offline.
Powered off the machine and connected the cables to the new 8TB disk.
Then started TrueNAS, clicked “Replace” on the offline disk, and selected the new 8TB disk.
After the resilvering, the pool still shows 1.5TB capacity.

There does not seem to have been any automatic increase in capacity afterwards.

Did I do something wrong? Should I click the “Expand” button now to get the full 8TB capacity?



OK, I just tried “Expand” and it increased the pool capacity to 8TB.

I don’t know what I did wrong, but there was no automatic expansion of the pool capacity after I replaced the remaining smaller disk with a larger one.

That is odd.

What does this show:

zpool get autoexpand Photos

Now I’m more confused. By all means, it should have expanded the capacity automatically upon replacing the last drive.

autoexpand has been set by default since forever, but apparently iX has forgotten how to partition a disk to use all its space when doing a replacement. This bug has persisted through several versions of SCALE now, and they’ve (falsely) said it’s been fixed at least a few times. I thought it was fixed in 24.10, but perhaps not.
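If the suspicion is that the replacement disk was partitioned short, one way to check (the device name `sda` is a placeholder) is to compare the disk’s total size against the size of its ZFS data partition:

```shell
# If the ZFS data partition is far smaller than the disk itself,
# the replacement was partitioned short ("sda" is a placeholder):
lsblk -b -o NAME,SIZE,TYPE /dev/sda
# Or inspect the partition table directly:
parted /dev/sda unit GB print
```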

2 Likes

Wait. Are you saying this is a SCALE-specific bug that doesn’t happen on CORE?

Absolutely! Autoexpand-failure is an exclusive SCALE feature.

1 Like

AFAIK, no version of CORE is affected by this longstanding SCALE failure.

Hi there,

I’m new to TrueNAS (and this forum) and in the middle of replacing 2x 3 TB HDDs with 2x 8 TB HDDs on my Ugreen DXP2800. After finding this thread, I thought I might chime in.

I’ve verified that the pool is set to autoexpand:

zpool get autoexpand ugreen-data
NAME         PROPERTY    VALUE   SOURCE
ugreen-data  autoexpand  on      local

I’ll let you know how it goes - that is, if you’re interested in the results :slight_smile:

1 Like

You’re welcome. But, as stated, the expected result depends on the TrueNAS version you’re running.

1 Like

It’s TrueNAS Core - 25.10.1 - Goldeye

Well, it didn’t work automatically:

zpool list ugreen-data
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ugreen-data  2.72T   848G  1.89T        -         -     0%    30%  1.00x    ONLINE  /mnt

but manually expanding the pool worked like a charm:

zpool list ugreen-data
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ugreen-data  7.27T   848G  6.44T        -         -     0%    11%  1.00x    ONLINE  /mnt

TrueNAS Core 25.10.1 - Goldeye apparently still has the autoexpand failure.

…because it is NOT Core :face_with_tongue:

Is it really that hard to fix? Why has it hung around for so long? Or is this a product of ZFS not being a first-class citizen on Linux?

I was well aware that this kind of issue doesn’t exist on CORE, whereas it was supposed to exist on SCALE.
So my TrueNAS device lived up to its expectations :laughing:
But as I said, I’m new to TrueNAS and this forum and this was the most I was able to contribute - a test regarding autoexpand :innocent:

It’s deliberate:

3 Likes

I.e. “it is a feature, not a bug.” (Some here may beg to differ.)