Help/Peer Review ZFS Pool Migration/expansion project

Hey folks, this is a long one, so thanks in advance!!! This is the first time I’ve ever done an expansion like this, so I’ve been talking with some friends and throwing stuff at multiple AIs to get a guide for my planned process. I believe I’m at the final version and would love a review from anyone who can take the time to point out anything I missed or any sneaky errors that could cause me problems.

TrueNAS 4→8 Drive RAIDZ2 Migration Guide (v3.3)

Goal: Expand from 4-drive RAIDZ2 to 8-drive RAIDZ2

Method: ZFS → ZFS send/receive

Data Integrity: Bit-perfect, resumable, metadata-safe

Risk Level: Very Low (operator error is the only real risk)

:warning: THE GOLDEN RULES (READ ONCE)

NO WEB SHELL — use SSH only. Browser tabs kill transfers.

USE TMUX for everything long-running.

NEVER DELETE THE SOURCE until verification passes.

NO EXTRA SCRUBS ON BACKUP — they add time, not certainty.

USB DRIVE MUST STAY AWAKE — heartbeat required.

PHASE 0 — Pre-Flight & SSH Setup

Connect via SSH

ssh TrueNas@192.168.1.xxx

Start tmux

tmux

Detach: Ctrl+B, then D

Reattach: tmux attach

Confirm All Drives Are Seen

lsblk

SMART Check New Drives (Recommended)

sudo smartctl -t long /dev/sdX

Wait for completion before migrating.
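To decide when it's safe to proceed, check the self-test log with `sudo smartctl -a /dev/sdX`. A minimal sketch of the check, using a made-up log line in place of real smartctl output (the exact format varies by drive and firmware):

```shell
# Hypothetical self-test log line from `sudo smartctl -a /dev/sdX`
# after the long test finishes (format varies by drive/firmware):
line='# 1  Extended offline    Completed without error       00%      1234         -'

case "$line" in
  *"Completed without error"*) smart_verdict="OK to migrate" ;;
  *"in progress"*)             smart_verdict="still running - keep waiting" ;;
  *)                           smart_verdict="check smartctl output manually" ;;
esac
echo "$smart_verdict"
```

Any drive whose log shows something other than "Completed without error" should be returned or retested before it goes into the new pool.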

PHASE 1 — Documentation & Service Shutdown

Snapshot Current Config (GUI)

Screenshot SMB/NFS shares

Screenshot Apps configuration

Capture Pool Metadata

sudo mkdir -p /root/migration_docs

sudo zfs get -r all tank > /root/migration_docs/tank_properties.txt

sudo zfs list -r -o name,mountpoint,quota tank > /root/migration_docs/tank_structure.txt
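It may also be worth saving the pool topology and snapshot list alongside the property dumps. A sketch (on the real NAS, write into /root/migration_docs instead of a temp dir; the mktemp and fallback branch are only there so the sketch runs anywhere):

```shell
# Sketch: capture extra pool state next to the property dumps.
# On the real NAS use /root/migration_docs; mktemp keeps this runnable anywhere.
docdir=$(mktemp -d)

if command -v zpool >/dev/null 2>&1; then
  zpool status -v tank > "$docdir/tank_pool_status.txt" 2>&1
  zfs list -r -t snapshot tank > "$docdir/tank_snapshots.txt" 2>&1
else
  # Placeholder so the sketch runs on machines without ZFS installed.
  echo "zpool unavailable - run on the NAS" > "$docdir/tank_pool_status.txt"
fi
[ -s "$docdir/tank_pool_status.txt" ] && echo "pool state captured"
```

Having the old vdev layout and snapshot names on file makes it much easier to sanity-check the restored pool later.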

Stop All Writes

Stop all Apps

Disable SMB / NFS

Verify no open files:

sudo smbstatus

PHASE 2 — ZFS Backup to USB (Authoritative Copy)

Identify USB Disk by ID

ls -l /dev/disk/by-id/ | grep usb

Create Backup Pool

sudo zpool create -o ashift=12 -m none backup_pool /dev/disk/by-id/usb-YOUR_ID

(ashift=12 keeps the backup pool 4K-aligned, same as the main pool, rather than trusting what the USB bridge reports.)

Snapshot & Protect

sudo zfs snapshot -r tank@migration_backup

sudo zfs hold -r keep tank@migration_backup

:rocket: Start the Transfer (INSIDE TMUX)

sudo zfs send -R -L -v tank@migration_backup | sudo zfs receive -s -F backup_pool/tank

:battery: USB HEARTBEAT (BEST OPTION)

Why this one:

Zero filesystem writes

Keeps USB link active

No metadata churn

Safe for long transfers

Open a Second tmux Pane

Ctrl+B, then "

Run:

zpool iostat -v backup_pool 60

This polls the pool every 60 seconds, which keeps the WD Elements drive from spinning down.

:white_check_mark: This is the best option.

:cross_mark: No touch, no cron, no filesystem spam.

Detach tmux and walk away.

:sos_button: IF THE BACKUP INTERRUPTS

Check for Resume Token

zfs get -H -o value receive_resume_token backup_pool/tank

If token exists:

sudo zfs send -t TOKEN | sudo zfs receive -s backup_pool/tank

(No -F needed when resuming from a token; the receive picks up the partial state.)

If no token:

sudo zfs destroy -r backup_pool/tank

Restart Phase 2.
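The token check above boils down to one decision. A sketch with a stubbed value in place of the real `zfs get` output:

```shell
# Stub standing in for the real check:
#   token=$(zfs get -H -o value receive_resume_token backup_pool/tank)
# zfs prints "-" when no resume token exists.
token="-"

if [ -n "$token" ] && [ "$token" != "-" ]; then
  resume_msg="token found: zfs send -t <token> | zfs receive -s backup_pool/tank"
else
  resume_msg="no token: destroy backup_pool/tank and restart Phase 2"
fi
echo "$resume_msg"
```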

PHASE 3 — Verification (THIS IS YOUR CONFIDENCE)

Dataset Count Match

zfs list -r -o name tank | wc -l

zfs list -r -o name backup_pool/tank | wc -l

Must match exactly.
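Counts alone can match while names differ, so a stricter check is to compare the actual dataset names with the backup_pool/ prefix stripped. A sketch, with made-up listings standing in for real `zfs list -r -H -o name` output:

```shell
# Made-up listings standing in for real output of:
#   zfs list -r -H -o name tank
#   zfs list -r -H -o name backup_pool/tank
src="tank
tank/media
tank/media/movies"
dst="backup_pool/tank
backup_pool/tank/media
backup_pool/tank/media/movies"

# Strip the backup_pool/ prefix so the names line up, then compare.
a=$(printf '%s\n' "$src" | sort)
b=$(printf '%s\n' "$dst" | sed 's|^backup_pool/||' | sort)
if [ "$a" = "$b" ]; then name_check="dataset names match"; else name_check="MISMATCH - investigate"; fi
echo "$name_check"
```

On the NAS, capture the two real listings into `src` and `dst` and the same comparison applies.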

Bit-Perfect Check (THE IMPORTANT ONE)

Note: zfs diff can only compare snapshots of the same filesystem, so it can't verify across two pools. Compare snapshot GUIDs instead; zfs receive preserves them, so a matching GUID means the received snapshot is exactly the one you sent:

zfs get -H -o value guid tank@migration_backup

zfs get -H -o value guid backup_pool/tank@migration_backup

:white_check_mark: GUIDs match = perfect copy

:cross_mark: Mismatch = stop and investigate
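For a recursive cross-check, the per-snapshot GUIDs can be compared too, since zfs receive preserves them. A sketch with made-up values standing in for real `zfs get -r -t snapshot -H -o name,value guid` output:

```shell
# Made-up name/GUID pairs standing in for:
#   zfs get -r -t snapshot -H -o name,value guid tank
#   zfs get -r -t snapshot -H -o name,value guid backup_pool/tank
src="tank@migration_backup 1111
tank/media@migration_backup 2222"
dst="backup_pool/tank@migration_backup 1111
backup_pool/tank/media@migration_backup 2222"

# Compare only the GUID column; names differ by the backup_pool/ prefix.
a=$(printf '%s\n' "$src" | awk '{print $2}')
b=$(printf '%s\n' "$dst" | awk '{print $2}')
if [ "$a" = "$b" ]; then guid_check="GUIDs match"; else guid_check="GUID mismatch - stop"; fi
echo "$guid_check"
```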

Mount & Spot-Check

sudo zfs set mountpoint=/mnt/backup backup_pool/tank

ls -la /mnt/backup/media/movies

sudo zfs set mountpoint=none backup_pool/tank

PHASE 4 — Destroy & Rebuild Pool

Destroy Old Pool (GUI)

Storage → tank → Export/Disconnect

:white_check_mark: Check Destroy data

Remove Stale Mount

sudo rmdir /mnt/tank

Create New Pool (GUI)

Name: tank

Layout: RAIDZ2 (8 drives)

Sector Size: 4K (ashift=12)

Verify Alignment

sudo zdb -C tank | grep ashift

Must be ashift=12
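The zdb output can be checked mechanically. A sketch using a made-up line in place of real `sudo zdb -C tank | grep ashift` output:

```shell
# Hypothetical line from `sudo zdb -C tank | grep ashift`
# (value made up to show the happy path):
line="                ashift: 12"

ashift=${line##*: }
if [ "$ashift" = "12" ]; then
  ashift_check="ashift OK (4K sectors)"
else
  ashift_check="wrong ashift - recreate the pool BEFORE restoring"
fi
echo "$ashift_check"
```

Catching a wrong ashift here costs minutes; catching it after the restore costs another full copy.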

PHASE 5 — Restore to New Pool

Start tmux

tmux

Restore Command

sudo zfs send -R -L -v backup_pool/tank@migration_backup | sudo zfs receive -s -F tank

Why no -d: receiving into the pool root maps the stream's top-level dataset (backup_pool/tank) onto tank itself. With -d, receive discards only the backup_pool element and grafts the rest onto the target, which produces exactly the tank/tank/… nesting you want to avoid. (-s makes this leg resumable too, like the backup.)

:sos_button: IF RESTORE FAILS MID-WAY

sudo zfs destroy -r tank@%recv

Then re-run the restore command.

PHASE 6 — Final Validation

Check Dataset Layout

zfs list -r tank

Ensure:

:cross_mark: No tank/tank

:white_check_mark: Correct mountpoints

Scrub the New Pool (ONLY ONE SCRUB)

sudo zpool scrub tank

This validates all 8 drives post-restore.
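Once the scrub finishes, the `zpool status tank` error summary is the go/no-go signal. A sketch using a made-up summary line:

```shell
# Hypothetical error summary line from `zpool status tank` after the
# scrub completes:
errline="errors: No known data errors"

case "$errline" in
  *"No known data errors"*) scrub_check="scrub clean - safe to move on" ;;
  *)                        scrub_check="scrub found problems - keep the USB backup and investigate" ;;
esac
echo "$scrub_check"
```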

Re-Enable Services

Turn SMB/NFS back on

Start Apps

Confirm permissions (they should be identical)

PHASE 7 — Cleanup (DO NOT RUSH)

Wait 7 Days

Keep the USB backup untouched.

Then:

sudo zfs release -r keep tank@migration_backup

(Holds don't travel in the send stream, so if this reports the tag doesn't exist on the restored pool, that's expected; carry on.)

sudo zfs destroy -r tank@migration_backup

sudo zpool export backup_pool

Unplug USB.

You didn’t mention hardware, only that it’s RAID-Z2 and USB. A detailed description of your current system and how you plan to expand would help. How are the drives attached? Are you using an HBA, and is it up to date and flashed in ‘IT mode’?

AVOID USB


Sorry! Some more info on the hardware:

MSI Pro B760M-A DDR4 board. Currently 4× 12TB HDDs plugged directly into the motherboard SATA ports.

Newly installed LSI 9300-8i HBA in IT mode, flashed to firmware 16.00.12.00, plugged into my motherboard’s PCIe x16 slot. Only using one forward breakout cable on the HBA for 4 additional SATA ports.

Adding 4 new 12TB HDDs to those new SATA ports to fill out my Silverstone CS382 case (8 bays total).

So the current 4-drive pool needs to be briefly backed up externally and then destroyed, so those 4 drives can become part of the 8-drive pool.

The external drive I have available to export to is a WD Elements 18TB drive with a USB 3.0 connection. It’s not ideal, but that is my constraint right now, so the plan was to export to the external drive via USB and then restore to the new pool.

A single drive is okay. Do you only have a single drive to do the temp backup to, or is this data kept somewhere else? A bit risky if that 18TB will be your only copy for a bit.

Did you look into the RAID-Zx expansion feature and decide that there were too many downsides to expanding your current 4-wide RAID-Z2 by adding additional disks to that VDEV? Space reporting would be off in the GUI; the command line would be more accurate. The initial data would be stored as if it were on a 4-wide RAID-Z2, but all additional data would be written as if on an 8-wide RAID-Z2 once you complete adding drives to the VDEV.
Either choice can be the correct one. It just depends on your preferences and what you want to end up with. You can search that tag to read about the space reporting, etc.


Yeah, I want to avoid the expansion bug, so I decided against that feature. 99% of this data is static - movies and TV and music that doesn’t get rewritten - so expansion just wouldn’t be ideal, and I didn’t want to mess with rebalancing scripts.

And unfortunately this is the only drive I have for the backup, and I don’t have the data anywhere else. So yeah, it’s risky… hence my nervousness and wanting to really think through the process to minimize risk as much as I’m able, given my hardware constraints.

I know I need to develop a true backup solution eventually - it’s just not there right now. I’m running out of storage space, so I used the budget I had for expansion rather than backup. The backup will have to wait for now. I know redundancy isn’t backup… but hoping RAID-Z2 is a little bit safer than nothing! Don’t even get me started on my piss-poor decision to have my apps data on a single NVMe SSD with no redundancy OR backup. That’s just plain me being stupid and rolling those dice…

I’ll wait for others to comment on your procedure. I think it might be easier and better to just use a ZFS replication job in the GUI from your current data pool to the USB-attached, single-drive backup pool. If you don’t use the GUI for almost everything, it can cause problems in TrueNAS.

This is a screenshot from a TrueNAS VM, but you have to pretend Daisy is your main pool and Masie is the pool created for backup on your USB drive. This is what I figured you would be doing instead of the command line. You would reverse the replication job to put the data back once you recreate your data pool.

Wait for more experienced users to post.


I am eager to see if others have thoughts - but thank you for all your time and responses! I think the GUI replication would work… though I’d possibly have less control over the send and receive process, and CLI might still be better? I’m unsure and will have to look into it more.