SOLVED: Step-by-step process to migrate or upgrade a pool, e.g., 2 disks --> 4 disks

I just moved from my main pool having 2 disks to 4 disks.

I documented every single step and verified it works flawlessly by doing it a second time. This could save you hours if you are running the latest release, because it covers all the “gotchas” caused by bugs in the latest version of SCALE and the workarounds for them.

Be sure to follow all the steps, especially moving the system dataset before you export the main pool!

  1. Add the NVMe card to the slot near the 4 Ethernet ports and the Chelsio 10G Ethernet card to the Supermicro slot opposite the HBA. Put a 1G Ethernet cable into the IPMI port. Route the 10G SFP+ DAC cable to the UDM Pro 10G port. Note: the Supermicro documentation is terrible. To remove the riser, lift the two black tabs in the back and then pull straight up on the riser; there are NO screws you need to remove. So now 10G is the only way in/out of my system. Note: unluckily for me, changing Ethernet cards like this can trigger the Apps not Starting bug later, but I found a fix for that.

  2. Configure the NVMe disk in TrueNAS as a new pool named SSD. I used this to back up my existing data (which is modest at this point).

  3. Insert the two additional disks into the Supermicro, but don’t create a pool yet. Make sure they are recognized; mine were, flawlessly. To pre-test the newly added disks, see: Hard Drive Burn-in Testing | TrueNAS Community. Just make sure you are NOT specifying the old disks… WARNING: the system assigns different drive letters on EVERY boot, so identify disks by serial (see the first CLI sketch after the list).

  4. Save the TrueNAS config (just to be safe); always good to checkpoint things. (12:22pm) System > General > Manage Configuration > Download File.

  5. This is a good time to:

  6. Stop the SMB service.

  7. Stop all the apps.

  8. Unset the app pool on the Apps page.

  9. Those steps will make it easier when you detach the pool later (no errors and no retries). The Apps page will say “No Applications Installed” after you complete them.

  10. In Data Protection, create a periodic snapshot task that snapshots Main recursively. Recursive is CRITICAL. You can set the frequency to once a week, for example. You need this for the next step. I used the custom setting for timing so that the first snapshot would run 5 minutes after I created the task; this makes the next step easier since I’ll have an up-to-date snapshot. You must check “Allow taking empty snapshots.” The reason: the full filesystem replication checks that every dataset has a snapshot. If you recently snapshotted a dataset that hasn’t changed and empty snapshots aren’t allowed, no new snapshot gets created for it, the sanity check fails, and you won’t be able to replicate the filesystem. (If you want to double-check snapshots from the shell, see the snapshot sketch after the list.)

  11. In Data Protection > Replication Tasks, use the ADVANCED REPLICATION button to create a new task that copies everything in the main pool to the SSD pool. Transport is LOCAL. “Full Filesystem Replication” is checked. Source and destination are the pool names: Main and SSD. Read Only Policy is set to IGNORE; you do NOT want the destination forced to read-only. Replication schedule: leave at “Run Automatically.” For the snapshot task, pick the periodic snapshot task you just configured. Set snapshot retention to “Same as Source.” Leave everything else at the default. “Run Automatically” starts the replication job right after the snapshot task above finishes, so less work for you. (The zfs send/receive sketch after the list shows roughly what this does under the hood.)

  12. During the full filesystem replication, the SSD pool gets unmounted, so if you SSH into the system you will find that /mnt/SSD is gone. Likewise, the Datasets page will give you an error about not being able to load quotas from SSD if you refresh it. The pool is only unavailable while the full filesystem replication is running.

  13. If you click on the Running button, you can see the progress. You can also go to the Datasets page, expand the SSD pool, and see that it is populating and that the sizes match the originals. You can also click the Jobs icon in the upper right corner of the screen (left of the bell); that screen updates every 30 seconds to show the total amount transferred. (Or watch throughput from the shell; see the zpool iostat sketch after the list.)

  14. Note: if you’ve already made a backup once, this process will be very fast since it will only replicate the new data.

  15. In the Datasets tab, check the used sizes as a sanity check (or compare them from the shell with the one-liner after the list). They may not be identical if one of your pools uses dRAID; dRAID sizes will be larger than other RAID types because file sizes get rounded up more. There can also be differences if you have existing snapshots.

  16. Migrate the system dataset from the main pool to the SSD pool (so I still have a system dataset when I disconnect the pool) using System > Advanced > Storage > System Dataset Pool, then select the SSD pool and save. You need to do this because you’re about to clobber the main pool the system dataset was living on.

  17. Use the GUI (Storage > Export/Disconnect) to disconnect the MAIN pool (do not disconnect the SSD pool). If you get a warning like “This pool contains the system dataset that stores critical data like debugging core files, encryption keys for pools…” then you picked the wrong pool. You can choose to destroy the data (see the next step), but do NOT delete the config information; the config info is associated with the pool name, not the pool itself. You will get a warning listing all the services that will be “disrupted.” That’s to be expected. Be sure you don’t have any SSH users currently sitting in the pool or the disconnect will fail.

  18. Of the three checkboxes, you only need to check “Confirm Export/Disconnect.” There is no need to destroy the data (but you can do it now if you want). You NEVER want to delete saved configurations from TrueNAS, because otherwise, when you bring main back in as main, all the config tied to main (like replication jobs and SMB shares) will be gone from the system config. So check either the first and third boxes, or just the third box. If you choose to destroy the data on the pool now (which is perfectly fine), be sure to type the pool name below the checkboxes or the Export/Disconnect button will not be enabled.

  19. The export is pretty quick.

  20. If you followed the process (stopping apps, disconnecting SSH users), you won’t get any errors about filesystems being busy. (If you do, the busy-check sketch after the list shows how to find what’s holding the pool open.)

  21. It worked! “Successfully exported/disconnected main.”

  22. Storage now shows I have 4 unassigned disks!!! Perfecto!

  23. Now click that “Add to pool” button!

  24. Use the GUI: hit the “Add to Pool” button on the unassigned disks, then pick “New Pool” to create a brand new main pool with all four disks, using the same name as the original main pool (including correct case). Enter the old name in the “Name” field, and in “Select the disks you want to use,” choose the four disks.

  25. Under Data, I opted for RAIDZ1. So Data > Layout: RAIDZ1. Width is 4, which means all 4 disks go into the vdev (RAIDZ1 spends one disk’s worth of capacity on parity, distributed across the vdev). Number of VDEVs: 1 (you have no choice). Then click “Save and Go to Review.” (You can confirm the layout afterwards with the zpool status check after the list.)

  26. If you didn’t destroy the data on the old pool earlier, you are going to get a warning: “Some of the selected disks have exported pools on them. Using those disks will make existing pools on them unable to be imported. You will lose any and all data in selected disks.” That’s because you’re deleting the data at this point. You could have done it earlier; same net effect. Deleting the data now just minimizes your time without your main pool data. The pool creation is very fast. You will have a brand new pool: **Usable Capacity:** 31.58 TiB.

  27. At this point, look at your SMB shares. They are preserved (even though there is no data behind them yet)!!! So are the snapshot tasks, replication tasks, etc. on the Data Protection page. Looking great!!! It’s all downhill from here! We just have to put the data back. Note that the SMB shares were all preserved right after the disconnect; there was nothing magical gained when you added the pool back.

  28. Just for safety, let’s export the config at this point (at 1:39pm on 4/26). I always check the box to include the secrets.

  29. Now we do what we just did, in reverse. This is the time to create a periodic snapshot task on the SSD pool; essentially the same snapshot process as above, just pointed at SSD. This sets you up to transfer the data that was originally on main back to the new main pool. So: recursive, custom start time (weekly, starting in 15 minutes) to give us time to create the Replication Task that will start automatically after the SSD full recursive snapshot.

  30. Use the GUI again to copy the data from the SSD pool back to the new main pool: same steps as before, but with source and destination reversed and using the SSD snapshots. Due to a bug, you need to create this job from scratch; DO NOT try to load the existing job, or you will get an error like this: middlewared.service_exception.ValidationErrors: [EINVAL] replication_create.source_datasets: Item#0 is not valid per list types: [dataset] Empty value not allowed. I reported this bug.

  31. Hit reload to see your SSD-to-main Replication Task. Its status will be PENDING. It will run automatically right after the SSD recursive snapshot is done. Hit reload again and the periodic snapshot task will show FINISHED and the “SSD to main” replication task will be RUNNING.

  32. You can monitor progress as before by clicking the Jobs icon in the upper right or the Running button, and by looking at the Datasets page.

  33. As I mentioned before, with a full filesystem replication you will not be able to touch the destination from the CLI until the copy finishes. So don’t try to log in via SSH: your home directory won’t be there since your pool isn’t mounted yet.

  34. After the replication finishes, it’s time to put things back in order.

  35. Move the system dataset to the new main pool in the GUI, using the same process as above.

  36. Shares > restart the SMB service using the three-dot menu, then enable all the shares.

  37. Apps > change the app pool back to main (Settings > Choose Pool).

  38. Start all the apps you had running before.

  39. If your apps won’t start, I covered the fix for the Apps not Starting bug here.

  40. Re-enable the snapshot task on main. Disable the SSD → main Replication Task, and possibly the main → SSD one if you want to use the SSD for something else. Otherwise, as long as your SSD is larger than the data on your new pool, it makes a nice ongoing backup. I migrated my RAID config when I only had 1 TB of storage in use, so I could get away with backing up to / restoring from the NVMe drive inside the system.

  41. Save system config

  42. Reboot just to make sure there are no issues. There shouldn’t be, but I’d rather find out now than at a less convenient time.

  43. Take a manual full system recursive snapshot of your shiny new pool now that everything works.

  44. Make sure you configure periodic SMART tests for your new array. The GUI seems to be broken for enabling SMART on the disks: go to the CLI and run smartctl -s on /dev/sda (repeat for each disk), then go back to the GUI to start the tests. You can close the window after you start a test. (A small loop that enables SMART on every disk is sketched after the list.)

  45. Have a beer. You did it!
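
A few optional CLI sketches referenced in the steps above. Everything they show can also be done or seen in the GUI; the pool and snapshot names (Main, SSD, main, @pre-migration) are just the ones used in this writeup, so adjust for your setup.

For step 3: since the /dev/sdX letters can change on every boot, identify the new disks by serial or by-id before burn-in testing so you don’t aim a destructive test at one of the old disks.

```shell
# List whole disks with model and serial so you can tell old from new.
lsblk -d -o NAME,MODEL,SERIAL,SIZE

# The /dev/disk/by-id names are stable across reboots; use these (or the
# serials) when pointing burn-in tools or smartctl at a specific drive.
ls -l /dev/disk/by-id/ | grep -v part
```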
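
For step 10: the periodic snapshot task is the supported way to do this, but if you want to confirm from the shell that every dataset under Main got a recursive snapshot (or take one by hand), something like the following works. The snapshot name is just an example.

```shell
# Take a recursive snapshot of the whole Main pool by hand (optional --
# the periodic snapshot task normally does this for you).
zfs snapshot -r Main@pre-migration

# Confirm that every dataset under Main has snapshots.
zfs list -t snapshot -r Main -o name,used,creation
```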
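
For step 11: under the hood, a full filesystem replication is roughly a recursive zfs send piped into a zfs receive. The GUI task is the right way to do it on TrueNAS (it handles scheduling, progress, and retention), so treat this only as a sketch of the mechanism and do not run it alongside the GUI task. Step 30 is the same mechanism with source and destination swapped.

```shell
# Rough shape of "Full Filesystem Replication" from Main to SSD.
# -R sends the whole dataset tree with its snapshots and properties;
# -F on the receive side lets the destination be rolled back/overwritten.
zfs send -R Main@pre-migration | zfs recv -F SSD
```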
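
For steps 13 and 32: besides the Jobs screen, you can watch the copy’s throughput from the shell while it runs.

```shell
# Per-pool and per-vdev read/write bandwidth, refreshed every 5 seconds
# (Ctrl-C to stop).
zpool iostat -v 5
```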
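
For step 15: a quick way to compare used space dataset-by-dataset on the source and the copy. Snapshots and dRAID padding can legitimately make the numbers differ a little.

```shell
# Space usage per dataset on the original pool and on the backup pool.
zfs list -r -o name,used,referenced Main
zfs list -r -o name,used,referenced SSD
```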
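
For steps 17–20: if the export/disconnect complains that the pool is busy, something (an SSH session sitting in /mnt/main, a share, an app) still has files open on it. This assumes the pool is mounted at /mnt/main.

```shell
# Show which processes are holding files open under the pool's mountpoint.
fuser -vm /mnt/main

# Alternative view with lsof, if it is installed.
lsof +D /mnt/main 2>/dev/null | head
```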
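
For steps 25–26: after the wizard finishes, you can confirm the RAIDZ1 layout and the usable space from the shell (use whatever case you gave the pool name).

```shell
# One raidz1 vdev containing all four disks should show up here.
zpool status main

# zpool list shows raw pool size; zfs list shows usable space after
# parity (roughly three of the four disks' capacity for RAIDZ1).
zpool list main
zfs list -o name,used,avail main
```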
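
For step 44: instead of typing smartctl once per disk, a small loop can enable SMART on every SATA/SAS disk at once. This only flips the SMART-enabled flag on the drives; you still schedule the actual tests from the GUI (or with smartctl -t). Adjust the glob if your device names differ (e.g., NVMe).

```shell
# Enable SMART on every /dev/sdX whole-disk device.
for disk in /dev/sd[a-z]; do
    echo "Enabling SMART on $disk"
    smartctl -s on "$disk"
done

# Optionally kick off a long self-test on one disk to confirm it works:
# smartctl -t long /dev/sda
```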

So when you say upgrade a pool, your example is basically going from an existing 2-drive pool to adding 2 more drives into that pool, yeah?

If so, then to add more context to that situation:

In case others read this now or later, the ZFS RAID-Zx vDev expansion feature is not yet available in TrueNAS (SCALE or Core), and will likely not be officially available in the GUI until 2025 at the earliest.

“Expand Pool” is something else related to swapping disks in a vDev with larger disks.

Unfortunately, ZFS was not designed for expanding a RAID-Zx vDev by one or more disks. The origins of ZFS were in the Enterprise Data Center, where Solaris SysAdmins would add another similarly sized RAID-Zx vDev, or take an outage and reconfigure.

ZFS (and by extension TrueNAS, Core or SCALE) is not the be-all and end-all of open source NAS software. ZFS does have some limitations that can bite people. When a new user proposes a TrueNAS configuration, we here in the forums try to get them educated in both best practices and some of the limitations. But nothing is perfect.

For myself, though inconvenient, this sort of limitation wasn’t an issue preventing me from moving from a QNAP box still running QTS on ext4 to TrueNAS SCALE running on ZFS.

I don’t often reconfigure my pools. I set them up once, then leave them as-is for a very long time. If I need to add new drives or replace them, I don’t mind redoing the pool (backing up first, of course) and then restoring the data from the backup.

My friend was on the fence about making the switch to TrueNAS, so he was asking me about it and I relayed this to him. Thought I’d mention it here as well since I just checked the status of this as of 2024 :thinking:

Yes, I read that before I did the process, but it didn’t apply to me since I had a 2-disk mirror. The process I described is basically the correct path for reconfiguring an existing pool to add more drives.

Why did you not add another mirror vdev to the existing pool? That’s perfectly supported and done in a few clicks.

Because RAIDZ1 across all four disks gives me 30 TB available instead of the 20 TB I’d get from two 2-disk mirror vdevs (three disks’ worth of usable capacity instead of two).

RAID-Z pool expansion by drive is slated for inclusion in Electric Eel in October

I’ll tell my buddy. He is still stuck on QNAP. He said he needed this before he could make the switch like I did :smiling_face_with_three_hearts: