I have a small NAS for document sharing running Dragonfish-24.04.2.2 on a Supermicro X9 with an Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz and 16 GB RAM.
I initially built a 3x4TB RAIDZ1. Today I offlined and replaced the drives one by one, but the available space is still unchanged.
SCALE kept the original partition sizes, matching the old 4TB disks. After my rsync finishes, I'll run gparted to grow the partitions and see whether I shoot myself in the foot or fix the problem. Anyway, iX should look into this, as I never had any issues doing it with Core.
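A quick way to confirm where the space went is to compare what ZFS has claimed against what the kernel sees on disk. This is a sketch; the pool name tank and /dev/sda are placeholders, and the commands are echoed here rather than executed, so drop the echo to run them:

```shell
#!/bin/sh
# Placeholder pool name; substitute your own.
POOL=tank
# EXPANDSZ shows capacity ZFS can see but has not yet expanded into:
echo "zpool list -o name,size,expandsz,autoexpand $POOL"
# Partition size vs. whole-disk size; if the zfs partition is still
# 4TB on an 8TB disk, the replacement kept the old partition table:
echo "lsblk -b -o NAME,SIZE,TYPE /dev/sda"
```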
Yes, they should. Multiple bugs have been filed about this, and they've said it's fixed. I don't know why they can't get this right all of a sudden. Please file another.
Once you have, the command to fix the partition sizing would be, IIRC, parted /dev/sdX resizepart 1 100%.
Select the disk by running parted /dev/sdX, where X is your disk, i.e.:
parted /dev/sda
You should see:
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
List the partitions:
(parted) print
The output in my case:
Disk /dev/sda: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      2097kB  4001GB  4001GB  zfs
My drive is an 8TB, so it's only using half of it.
List the free space:
(parted) print free
Model: ATA WDC WD80EFPX-68C (scsi)
Disk /dev/sda: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
        17.4kB  2097kB  2080kB  Free Space
 1      2097kB  4001GB  4001GB  zfs
        4001GB  8002GB  4001GB  Free Space
In this case it shows ~4TB of free space after the ZFS partition.
To expand it:
(parted) resizepart 1 8002GB
Note that parted produces no output for this command, so I confirmed with:
(parted) print free
Model: ATA WDC WD80EFPX-68C (scsi)
Disk /dev/sda: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
        17.4kB  2097kB  2080kB  Free Space
 1      2097kB  8002GB  8002GB  zfs
Repeat for all impacted drives.
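Once every member's partition has been grown, ZFS still has to claim the new space. A hedged sketch of the final step: the pool name tank and the sdX1 member names are assumptions (SCALE pools often reference members by partuuid, so take the real names from zpool status), and the commands are echoed rather than executed so nothing runs by accident:

```shell
#!/bin/sh
# Placeholder pool name; take real member names from `zpool status`.
POOL=tank
# With autoexpand on, the pool grows into resized members automatically:
echo "zpool set autoexpand=on $POOL"
for part in sda1 sdb1 sdc1; do
    # `online -e` expands a member into its grown partition:
    echo "zpool online -e $POOL $part"
done
```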
Disclaimers:
(1) Back up your data first
(2) Use at your own risk