I have a system that has been gradually upgraded to Electric Eel. A few days ago, a drive in my 6-wide RAIDZ1 array started failing, and I replaced it with a spare. During the resilver, I noticed that the new drive doesn’t have the 2 GiB swap partition the others have. Digging further, I found that the creation and management of swap space was removed in 24.10-BETA.1. The old drives, of course, still have the default partition layout.
I’m now left with an asymmetrical array:
Old drives:

Device      Start          End      Sectors  Size Type
/dev/sda1    2048      4196352      4194305    2G Linux swap
/dev/sda2 4198400  35156654080  35152455681 16.4T Solaris /usr & Apple ZFS

New drive:

Device    Start          End      Sectors  Size Type
/dev/sdc1  2048  35156654079  35156652032 16.4T Solaris /usr & Apple ZFS
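Before changing anything, here is a hedged way to confirm which partitions actually back the pool (device names are the examples above; the pool name tank is an assumption):

lsblk -o NAME,SIZE,PARTTYPENAME /dev/sda /dev/sdc   # partition layout and type per disk
zpool status -v tank                                # which partitions each vdev uses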
I can’t find a way to fix this from the web UI, and I’m aware that partitions/filesystems should only be touched with a plan. As I see it, there are two options: rebuild the new drive with a swap partition, or rebuild the five old drives without one. I prefer the second option, since Electric Eel doesn’t appear to use the swap partitions (judging by the output of free) and I don’t really need them anyway (2 GiB per drive against 192 GiB of RAM). The array only ended up like this because I didn’t notice the default setting (probably the same reason the devs removed swap management outright).
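For reference, the check was along these lines (a minimal sketch; output will obviously differ per system):

free -h         # the Swap: row reports a 0 total
swapon --show   # prints nothing when no swap device is active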
What I’m thinking is to tell ZFS to forcibly discard and resilver the old drives one at a time. I hope there’s a clean way to do this; otherwise, I’d wipe each drive’s partition table and force a resilver onto a fresh full-disk partition, roughly as sketched below.
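For concreteness, here is the rough per-drive procedure I have in mind. It is purely a sketch: the pool name tank is an assumption, /dev/sda is just the example disk from above, and each wipe leaves the RAIDZ1 degraded until its resilver completes, so I would only ever do one drive at a time.

# Sketch only: pool name "tank" and /dev/sda are assumptions.
zpool offline tank sda2              # take the old data partition offline
sgdisk --zap-all /dev/sda            # destroy the swap + data partition table
sgdisk -n 1:0:0 -t 1:BF01 /dev/sda   # one full-disk Solaris /usr & Apple ZFS partition
partprobe /dev/sda                   # make the kernel re-read the partition table
zpool replace tank sda2 /dev/sda1    # resilver onto the new full-disk partition
zpool status tank                    # wait for the resilver before starting the next drive

# If "sda2" no longer resolves after the wipe, the old vdev can be referenced
# by the GUID shown in zpool status instead.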
Any advice is appreciated!