Should a pool auto expand where it can?

Hi all,

I’m on TrueNAS SCALE 24.10.

I originally had 8x 6TB in RAIDZ2, but I have been replacing each of those with 16TB ones. The last one is currently resilvering.

This is the first time I have expanded my pool, so I'm not sure what to expect. Perhaps naively, I have been assuming that ZFS will see there is more space to use and just auto-expand the pool. Is this the case, or is there anything else I need to do once the resilver is finished?
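For reference, here's a quick way to check what ZFS will do once the resilver finishes. This is a sketch assuming the pool is named `tank` — substitute your own pool name:

```shell
# Check whether ZFS is set to grow the pool automatically.
zpool get autoexpand tank   # "on" means vdevs grow when larger disks appear

# Compare total raw capacity: 8x6T is ~48T raw, 8x16T is ~128T raw,
# so the SIZE column should roughly jump accordingly if expansion worked.
zpool list tank
```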

Thanks,
D


That’s how it should work, but this has been bugged in SCALE for a long time. If the pool doesn’t expand, see these instructions to fix it:

…and file yet another bug in hopes iX will finally fix this.

Edit: file the bug and attach the debug first.
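For the archives, the manual fix usually boils down to growing the ZFS partition and then telling ZFS to use the new space. A sketch, assuming a pool named `tank`, a member disk `/dev/sdX`, and the ZFS data partition at partition number 1 — verify your own disk names and partition layout before running anything:

```shell
# 1. Grow the ZFS data partition to fill the whole disk.
parted /dev/sdX resizepart 1 100%

# 2. Ask ZFS to expand onto the newly available space on that member.
zpool online -e tank /dev/sdX1

# Repeat for each member disk, then confirm the new capacity.
zpool list tank
```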


Supposed to work in 24.10, we’ll see!

The pool auto-expanded once the resilver was finished!

Thanks,
D


According to the response on my bug report, TrueNAS does not automatically expand pools, which would make the documentation wrong…

https://ixsystems.atlassian.net/browse/NAS-131904

The cute thing is that when I reported this and quoted the documentation, I was told to read the docs.

Heh.

Maybe it works in 24.10… in which case the bug response is wrong :wink:

Now, which is it?

It has worked since forever with FreeNAS and with CORE. But with SCALE, it seems iX forgot how to partition a disk or something. I know this issue was present in 23.10 and 24.04, and I think it was present in Bluefin as well, despite a number of bug reports in each case saying it would be fixed in the next release. I hope they truly have it fixed now.


Exactly

Unfortunately, I couldn’t wait until EE was released to expand the pool… and it requires a proper set of disks on actual hardware to verify.

Ergo, it might be fixed in 24.10. I don’t know. I hope so.

BUT, as I said, in my bug report where I quoted the docs, I was told it doesn’t happen automatically and to read the docs… and the docs say it does.

:man_shrugging:

I really can’t believe it would be a deliberate decision to change that in SCALE, but then I’ve never understood the purpose of the “Expand” button in the GUI either. But based on what OP’s said, sounds like it is working in EE.

I think I may have found the smoking gun…

https://ixsystems.atlassian.net/browse/NAS-132160

Turns out (at least in Dragonfish 24.04.2) how the drive is partitioned depends on how it was introduced into the pool.

So, if you have a mirror of two 4T disks and you “replace” one with an 8T disk, the 8T disk gets a single 4T partition.

But if you attached the disk instead (by “extending” the mirror), then it’s partitioned into a single 8T partition.

And if you instead attach the disk as a spare… and then that spare is used to auto-replace the 4T disk… then you end up with an 8T replacement, rather than a 4T replacement.

And for ZFS auto-expand to work, the disk needs to be partitioned to its full size, not the size of the disk it originally replaced.

I have not verified what happens in 24.10.0.
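Before upgrading, one way to check which of these cases a given disk landed in — the disk name `/dev/sdX` and pool name `tank` are placeholders:

```shell
# Compare the disk's total size with its ZFS partition's size.
# A ~4T partition on an ~8T disk means the replace-time bug bit this disk.
lsblk -b -o NAME,SIZE,TYPE /dev/sdX

# The EXPANDSZ column shows capacity ZFS can see on a member but has
# not claimed yet; nonzero here means the partition is full-size and
# the pool simply hasn't expanded onto it.
zpool list -v tank
```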

We have reason to believe your issue has been fixed in the latest stable version available. Please upgrade and report back if the problem persists

That was quick :slight_smile:

Will be a while before I can test it on this hardware, but it should be fairly simple to confirm what’s changed.