Convert raidz1 to raidz2?

Hey Everyone,

Just joined the community and had a question on my setup.

Community Edition version: 25.04.1

At present I have a pool configured as RAIDZ1 with 4 x 8TB drives. I've just acquired 4 x 10TB drives, and I was curious: is it at all possible to modify the existing pool to RAIDZ2 and connect the new 4 x 10TB drives to the same pool? (I understand some capacity would go to waste, as the 10TB drives wouldn't be able to use their full capacity alongside the 8TB drives; I was thinking of slowly upgrading the 8TB drives to 10TB.)

What would be the best way to achieve this result?

Back up the data, destroy the pool, and recreate the pool as RAIDZ2. Unfortunately there is no direct way to convert between RAIDZ levels.


Maybe you can evaluate this other way: a pool of two 4-wide RAIDZ2 vdevs. It has pros and cons:

  • Create a new RAIDZ2 pool with the new disks
  • Replicate the data into it
  • Stop all services
  • Export the old pool without deleting its configuration
  • Rename the new pool to the old pool's name
  • Add a new vdev with the old 4 disks

If I'm not wrong, that's about 36TB of space vs. about 40TB for an 8-wide RAIDZ2.

I think you are. The 8-way RAIDZ2 would have capacity roughly equal to six 8 TB disks, or 48 TB, not 40.
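The arithmetic behind that correction, in nominal TB and ignoring ZFS overhead (in a mixed 8TB/10TB vdev, every member counts as the smallest size, 8TB):

```shell
# 8-wide raidz2 built from 4x8TB + 4x10TB drives: each member counts as 8TB,
# two disks' worth goes to parity, six disks' worth remains for data
disks=8; parity=2; per_disk_tb=8
echo "usable: $(( (disks - parity) * per_disk_tb )) TB"   # prints "usable: 48 TB"
```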


That makes it all less attractive… thanks for correcting me!

Thank you for this reply, I really appreciate it. Couple follow up questions.

Create a new RAIDZ2 pool: I like this idea. After I create the new RAIDZ2 pool with the new disks, would I first create matching datasets on the new pool before copying the data? I'm also assuming rsync would be the ideal way to replicate the data, or would I use a replication task?

Stop all services: I'm currently running Portainer through Apps, and all of my Docker stacks and containers are managed through Portainer. Is it just a matter of stopping all stacks/containers and then stopping Portainer through the Apps interface? I'm also assuming I should stop all SMB shares.

Export old pool: this part seems self-explanatory.

Rename the new pool to the previous pool's name: how is this achieved? I was looking through the interface and it didn't stand out as an option.

Add new vdev with the old 4 disks: also seems easy enough.

I will be perfectly happy with the "six 8TB disks" of capacity (48TB) in exchange for the additional redundancy of tolerating 2 drive failures.

The best way, IMHO, to transfer data in this case is replication. There's no need to recreate anything; it's all managed by the task if you enable recursive snapshots and check "full filesystem replication". Everything will be copied to the new pool: snapshots, permissions, properties, etc. rsync will not preserve those.
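At the command line, that replication boils down to something like the following sketch (the pool names tank and tank2 are placeholders, not your actual pool names):

```shell
# recursive snapshot of the whole old pool, then a full replication stream
zfs snapshot -r tank@migrate
# -R sends child datasets, their snapshots, and their properties;
# -F allows the target to be rolled back so it can receive the stream
zfs send -R tank@migrate | zfs recv -F tank2
```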

Yes: stop all stacks, stop Portainer, then snapshot and replicate. This will prevent inconsistencies in the apps' data.

Yep, nothing fancy. Just don't delete the configuration, or you will lose all your SMB shares, cron jobs, etc.

AFAIK this is not possible via the GUI, but it's not difficult from the command line.
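Assuming hypothetical pool names (old pool tank, new pool tank2), the rename is an export/import pair at the shell; this is a sketch, and the old pool should already have been exported (keeping its configuration) before you do it:

```shell
# rename by re-importing under a new name (pool names are placeholders)
zpool export tank2        # the new pool must be exported before renaming
zpool import tank2 tank   # import it again under the old pool's name
```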

I'm sorry, but as @dan pointed out, my earlier storage calculation was wrong.
You have to weigh these things carefully with this approach, because while it's true that

  • the process of transferring the data is easier (especially if your backup doesn't talk ZFS)
  • you don't need to destroy the data on the old pool until you want to create the new vdev (so you have plenty of time to verify everything after the replication)

in the end you will lose a lot of space: 36TB vs. 48TB (not 40TB).

@oxyde I thought that @dan corrected your calculations to say that in the end the total pool size would be 48TB and not 36TB, as there would be 6 drives' worth of capacity at 8TB each (6 x 8 = 48), with the other two 8TB drives' worth going to parity; 8 drives total.

The 48TB is the space if you use all 8 of your disks in a single RAIDZ2 vdev, not 40TB as I previously miscalculated. My apologies for that.

Ohhh I see what you’re saying and why total capacity would be reduced massively in this setup.

I would have two vdevs, each configured as a 4-drive RAIDZ2. This would mean that close to 4 drives' worth of storage is used up for parity.
Total usable storage: 27.94TB

Would it maybe make the most sense, then, to change my strategy for maximum performance and resilience?

Stop all services, remove the SMB shares, and create a new pool with the 4 new drives: a mirrored pool, 2 vdevs to start. Replicate the data from the current array to the new mirrored pool, then add the old drives to the new pool. From there I just need to point my stacks/containers at storage locations on the new pool and recreate my SMB shares.
Total usable storage: 28.82TB
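A sketch of that layout at the shell, with placeholder device names (in practice TrueNAS builds this through the GUI):

```shell
# new pool: two mirror vdevs from the four 10TB drives
zpool create tank2 mirror da0 da1 mirror da2 da3
# after replicating and verifying, add the four 8TB drives as two more mirrors
zpool add tank2 mirror da4 da5 mirror da6 da7
```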

Using a 2-wide mirror layout vs. 4-wide RAIDZ2 doesn't change much about space; either way you're at roughly 50% usable space. But to me it seems a less resilient solution (you can afford to lose 1 disk per vdev instead of 2). At the same time there are other pros: it's easier to add vdevs in the future, scrubs/resilvers take less time, and you get more IOPS.
By the way, with this approach the pool rename is still needed so you don't lose your current configuration.
IMHO this approach would have been more viable if you had bought fewer but larger disks, say 2, and planned to add another 2 in the near future.
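The roughly-50% point can be checked quickly (nominal TB, ignoring ZFS overhead; both layouts use the same eight drives):

```shell
raw=$(( 4 * 10 + 4 * 8 ))                 # 72 TB raw across all 8 drives
raidz2=$(( (4 - 2) * 10 + (4 - 2) * 8 ))  # two 4-wide raidz2 vdevs -> 36 TB
mirror=$(( 2 * 10 + 2 * 8 ))              # four 2-wide mirrors -> 36 TB
echo "raidz2: $raidz2 / $raw TB, mirrors: $mirror / $raw TB"
```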

Also worth considering: how old are your current 8TB disks?

Yeah, most of my disks are about 3 years old, but all are still under warranty. The warranty on 3 of the 8TB drives expires in 3 months; 1 of the 8TB drives is brand new. The four 10TB drives are under warranty until Nov 2026.

I did end up deciding to go back to 4 wide raidz2 as the fear of losing two drives in a single vdev and losing the whole pool freaked me out. In the middle of running the replication task as we speak.

Really appreciate all your help my dude
