RAIDZ2 Expansion - How to Rewrite Data for Full Capacity Recovery?

I’m running TrueNAS SCALE and successfully expanded my 5-drive RAIDZ2 pool to 6 drives using the expansion feature. The expansion completed without issues, but it displayed the following warning message:

The expanded vdev uses the pre-expanded parity ratio, which reduces the total vdev capacity. To reset the vdev parity ratio and fully use the new capacity, manually rewrite all data in the vdev. This process takes time and is irreversible.

My Questions:

  1. What’s the recommended method to perform this data rewrite in TrueNAS SCALE?
  2. Is there a built-in command or GUI option I’m missing, or do I need to use command-line ZFS tools?
  3. How much additional capacity should I expect to gain from the rewrite process?
  4. Are there any risks or precautions I should be aware of before starting the rewrite?
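On question 3, a rough back-of-the-envelope sketch of what a rewrite should reclaim (all numbers hypothetical; this ignores allocation padding, compression, and metadata overhead):

```shell
# Rough estimate of raw space reclaimed by rewriting old data.
# Hypothetical numbers; ignores padding, compression, metadata.
DATA_TB=30            # logical data written before the expansion
OLD=5; NEW=6          # RAIDZ2 vdev expanded from 5 to 6 drives
PARITY=2

# Raw space consumed = logical size * drives / data drives per stripe
RAW_BEFORE=$(( DATA_TB * OLD / (OLD - PARITY) ))   # old 3/5 data ratio
RAW_AFTER=$((  DATA_TB * NEW / (NEW - PARITY) ))   # new 4/6 data ratio
echo "freed by rewrite: $(( RAW_BEFORE - RAW_AFTER )) TB raw"
```

So in this hypothetical case, 30 TB of pre-expansion data occupies 50 TB raw at the old ratio but only 45 TB raw after a rewrite, freeing about 5 TB.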

One more question: is there anything that reminds you about the parity ratio? After expanding, I figured there would be some sort of UI notice saying "parity ratio is not optimal" or something of that nature, in case I forget.

Thanks!

How full was your pool? Do you have a backup of all the data to restore from? You can try searching the forums for the RAID-Zx-expansion tag or the rebalancing-script discussions.
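For context, those rebalancing scripts generally boil down to a copy-and-replace loop like the sketch below. This is a simplified illustration, not a script to run as-is: the real ones also check for hardlinks, preserve extended attributes, and skip files held in snapshots.

```shell
# Minimal sketch of the copy-based rebalancing idea: copying a file
# writes fresh blocks at the pool's current parity ratio, then the
# copy replaces the original. Real scripts do much more (hardlink
# checks, attribute preservation, skipping snapshotted files).
rebalance_file() {
    f="$1"
    cp -p -- "$f" "$f.rebalance.tmp" &&   # new copy = newly allocated blocks
    mv -- "$f.rebalance.tmp" "$f"         # swap it into place
}

# Demo on an ordinary temp file so the mechanics are visible:
tmp=$(mktemp)
echo "hello" > "$tmp"
rebalance_file "$tmp"
cat "$tmp"        # contents unchanged: hello
rm -f "$tmp"
```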

This may help, but I'm not sure if it's already in Goldeye:


The command is there in Goldeye Beta 1. Funnily enough, the help info is formatted as a BSD man page.

ZFS-REWRITE(8)                                         BSD System Manager's Manual                                        ZFS-REWRITE(8)

NAME
     zfs-rewrite — rewrite specified files without modification

SYNOPSIS
     zfs rewrite [-rvx] [-l length] [-o offset] file|directory…

DESCRIPTION
     Rewrite blocks of specified file as is without modification at a new location and possibly with new properties, such as checksum,
     compression, dedup, copies, etc, as if they were atomically read and written back.

     -l length
         Rewrite at most this number of bytes.

     -o offset
         Start at this offset in bytes.

     -r  Recurse into directories.

     -v  Print names of all successfully rewritten files.

     -x  Don't cross file system mount points when recursing.

NOTES
     Rewrite of cloned blocks and blocks that are part of any snapshots, same as some property changes may increase pool space usage.
     Holes that were never written or were previously zero-compressed are not rewritten and will remain holes even if compression is
     disabled.

     Rewritten blocks will be seen as modified in next snapshot and as such included into the incremental zfs send stream.

     If a -l or -o value request a rewrite to regions past the end of the file, then those regions are silently ignored, and no error is
     reported.

SEE ALSO
     zfsprops(7)

OpenZFS                                                        May 6, 2025                                                       OpenZFS
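A hedged example of how you might run it from a TrueNAS shell, based on the synopsis above. The `/mnt/tank` mountpoint is a placeholder (adjust to your own pool/dataset paths), and mind the NOTES section: snapshotted and cloned blocks can increase space usage when rewritten.

```shell
# Hypothetical invocation; /mnt/tank is a placeholder mountpoint.
# -r recurses into directories, -v prints each rewritten file.
DATASET=/mnt/tank

# Guarded so pasting this on a non-ZFS machine doesn't error out.
if command -v zfs >/dev/null 2>&1; then
    zfs rewrite -r -v "$DATASET" || echo "rewrite failed; check the path"
else
    echo "zfs not found; run this on the NAS itself"
fi
```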

It’s just a VM with some virtual hard drives that I am testing in Proxmox to see if TrueNAS is right for me.

My NAS has been running Unraid for many years, and I have been thinking about switching to TrueNAS to take advantage of ZFS features. Unraid did add ZFS support, but I've reached the point where I dislike that Unraid makes you run as the root user all of the time, that Docker Compose isn't supported, and various other design choices.

The issue is this: I have my important data backed up 3-2-1, but I also have ~80 TB of stuff that isn't important enough to back up, yet I don't want to spend months re-obtaining it all. So I was hoping to create the ZFS array with most of the disks, copy the data off a couple of the remaining drives, and then add those drives to the array without losing their data. I know this isn't ideal, but I can't afford a backup at the moment.

So that is an official command that accomplishes the same thing as the scripts people have been passing around for rebalancing. Am I understanding that correctly?

After reading most of this post:

I think that expansion is just not fully fleshed out yet. It seems to have a good way to go before it should be relied on. I don't want my file sizes and storage usage to be reported incorrectly if I decide to resize my array.

`zfs rewrite` will rebalance the existing data to the new parity-to-data ratio, recovering some space for you, but it won't resolve the issues with capacity reporting after a pool is expanded. AFAIK there's no way to fix that for now.
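If you want to watch what the pool reports before and after a rewrite, comparing the pool-level and filesystem-level views makes the discrepancy visible. The pool name `tank` is a placeholder:

```shell
# Placeholder pool name; substitute your own. Guarded so this can be
# pasted on a non-ZFS machine without erroring out.
POOL=tank

if command -v zpool >/dev/null 2>&1; then
    zpool list -o name,size,alloc,free "$POOL" || echo "pool $POOL not found"
    zfs list -o name,used,avail "$POOL" || echo "dataset $POOL not found"
else
    echo "zpool/zfs not found; run this on the NAS"
fi
```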


Thanks. That’s what I picked up from that. I think I am just going to plan for it in a way that won’t require expansion.