Migrating to new HDDs with no spare SATA slot, sanity checks

Hi folks,

Would like to do some sanity checks on my plan here. I have TrueNAS SCALE running on an HP EliteDesk 800 G3 SFF. The machine has 1 M.2 slot and 3 SATA slots.

  • Bootpool is using 1 SATA slot (2.5 inch SSD)
  • All Apps + System Dataset are installed on the M.2 NVMe SSD
  • Data pool is a striped 2x4TB taking up 2 SATA slots (3.5 inch HDDs).

I have just got 2x16TB HDDs that I want to migrate the Data pool to, mirrored instead of striped. I just need a way to migrate over, because I don’t have any spare SATA slot.

Tools I have in hand

  • one SATA to USB 3.0 adapter, but no DC power supply, so I cannot plug a new 16TB HDD straight into the machine
  • Also have another SFF running Windows 11 with only 1 SATA slot available. The OS is on the NVMe slot.

Option 1:

  • Take bootpool out, plug it into the SATA to USB adapter and run bootpool off USB until the cloning finishes
  • Put one of the 16TB drives in the SATA slot occupied by bootpool
  • Clone the Data pool over to a new pool, say NewData (see the sketch after this list)
  • Export the Data pool, remove the 2x4TB disks, freeing up 2 SATA slots
  • Rename NewData to Data
  • Install the other 16TB drive and the bootpool drive back into the 2 SATA slots
  • Mirror Data onto the blank 16TB with the Extend function in the GUI
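
For the clone and rename steps, this is roughly what a GUI replication would boil down to on the CLI. A minimal sketch, assuming a recursive snapshot named @migrate (the snapshot name is mine, not from the GUI):

    zfs snapshot -r Data@migrate
    zfs send -R Data@migrate | zfs recv -F NewData
    zpool export Data
    zpool export NewData
    zpool import NewData Data    # a pool is renamed by re-importing it under the new name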

Option 2:

  • Install Truenas on VirtualBox on the spare SFF
  • Install one 16TB drive in the available SATA slot
  • Clone the Data pool from the main Truenas to the VM Truenas (sketch below)
  • Export the Data pool from the main Truenas, export the Data pool from the VM Truenas
  • Swap the 2x4TB disks out, put the 2x16TB in the main Truenas
  • Mirror Data onto the blank 16TB

Transfer speed is limited to Gigabit Ethernet.
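
Here the clone step would be a replication over the network, something like the following sketch; vm-truenas is a hypothetical hostname for the VM:

    zfs snapshot -r Data@migrate
    zfs send -R Data@migrate | ssh root@vm-truenas zfs recv -F NewData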

Am I missing anything? For Option 1, do I just shut down, take bootpool out and move it to the USB adapter, and TrueNAS will recognise bootpool and boot from USB? I’m not sure if I can pass hard disks and the network through VirtualBox in Option 2. I ran both scenarios by AI, and ChatGPT/DeepSeek both prefer Option 1 due to the faster transfer speed; I just need to make sure USB is not flaky. Would like to hear human input.

That ☝️. Do not virtualise TrueNAS in a type 2 hypervisor for production. There are no issues booting from a USB-adapted SSD; this is commonly done on HP MicroServers and other small devices that are short on SATA ports.

As an alternative to replicating to a new pool, you may replace one drive with a 16 TB, remove the replaced drive, extend the 16 TB into a mirror, and then remove[1] the remaining 4 TB single-drive vdev from the GUI. This keeps the pool, but the data is not safe until the procedure is over.


  1. There is, however, a pending issue with vdev removal and block cloning, so you may want to check whether there are already cloned blocks in the pool. If there are none, vdev removal should be safe; see the check below. ↩︎
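
A quick way to check, assuming the pool is named Data and the system is on OpenZFS 2.2 or later (the version that introduced block cloning):

    zpool get bcloneused Data    # 0 means no blocks have been cloned, so vdev removal should be safe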

1 Like

Probably. How full is your pool currently?

Regardless, don’t “clone” anything. If your current pool is less than half full, then (a CLI sketch of these steps follows the list):

  • Remove one disk from the pool through the GUI.
  • Once that operation finishes, power down, remove that disk from the system, replace it with one of the 16 TB disks.
  • In the GUI, replace the 4 TB disk with the 16 TB disk.
  • Once that operation finishes, power down, remove the remaining 4 TB disk from the system, replace it with the other 16 TB disk.
  • In the GUI, attach the second 16 TB disk as a mirror of the first.
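
In zpool terms the steps above amount to roughly the following; sdb and sdc stand for the 4 TB disks and sdd and sde for the 16 TB ones (hypothetical device names, and the GUI issues the equivalent operations for you):

    zpool remove Data sdc        # evacuate one 4 TB vdev onto the other
    zpool replace Data sdb sdd   # resilver the remaining 4 TB onto the first 16 TB
    zpool attach Data sdd sde    # attach the second 16 TB to form a mirror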

Everything’s in the same pool; no other migration, pool renaming, etc. necessary. If your pool is currently more than 50% full, you won’t be able to remove the one disk at this point. So instead (again, sketched below):

  • Put the boot device on the USB adapter as you’d suggested
  • Plug one of the 16 TB drives into the now-free SATA port
  • In the GUI, replace one of the 4 TB drives with the 16 TB drive.
  • When that’s complete, remove the other 4 TB drive from the pool.
  • When that’s complete, power down, remove both 4 TB drives, install both 16 TB drives into the system, reconnect the boot device to the third SATA port
  • Add the second 16 TB disk as a mirror of the first, through the GUI as before.
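
With the same hypothetical device names, this order of operations is roughly:

    zpool replace Data sdb sdd   # resilver one 4 TB onto the first 16 TB
    zpool remove Data sdc        # then evacuate the remaining 4 TB vdev
    zpool attach Data sdd sde    # finally attach the second 16 TB as a mirror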

As before, the pool stays the same, nothing to rename or migrate…

4 Likes

Why does this work? You are teaching me something new. I would have thought that your first step, removing one drive from the stripe, would bring the pool down.

Since some versions back, ZFS lets you remove vdevs from striped/mirrored pools. When you do so, it copies the data over to the remaining vdev(s) and then removes the vdev in question, kind of like resilvering, but not exactly. But of course there has to be enough free space on the remaining vdev(s) for that data…
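
For instance, to confirm the remaining vdev has room before starting, and to watch the evacuation run (pool name Data assumed):

    zpool list -v Data    # per-vdev ALLOC/FREE; the surviving vdev must be able to hold all the data
    zpool status Data     # while the removal runs, progress appears under a “remove:” line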

Thanks guys. This is very neat; looks like I will be able to do the whole thing via the GUI.

I didn’t know you could remove vdevs. I did some reading about zpool remove, and the main constraint is not being able to remove raidz vdevs. I will keep this in mind when expanding in the future.

Wow, I did not know that, but it is very good to know. I will have to try it out, as it would be a good method to suggest to people who created a stripe and want to migrate to something else. Thanks for the information!

I would be really cautious with USB HDDs under ZFS.
They are not the most reliable solution.

Agreed, which is why I didn’t recommend one. But putting the boot device on USB isn’t terrible: better to avoid it if you can, but not usually a problem, particularly if it’s only for a limited amount of time.

Please remember that the original vdev is a 2x4TB striped array.
So there is no redundancy here.
And it is potentially up to 8TB of unprotected data that has to be transferred through USB.

No, no unprotected data is being transferred through USB with either of the courses of action I’ve suggested. Stop spreading FUD.

1 Like

“Unprotected” means that no parity is present.
That is the case if you use a striped volume or a single drive.

Yes, of course. The FUD you were spreading was that any data, protected or otherwise, “has to be transferred through USB.” For the third time, neither of the courses of action I recommended, nor anything anyone else in this thread has discussed, involves transferring data to or from a USB-connected hard drive.

Just quick report back. All done, nice and easy.

  • Truenas picked up bootpool from USB. I didn’t even need to change the boot order, though that may vary between systems. This was necessary because the pool was more than half full.
  • Replaced the 1st 4TB with a 16TB; the GUI displayed the progress of the pool replace, so that’s handy. When removing the vdev (the 2nd 4TB), the GUI mostly just showed ‘Waiting for removal of ZFS device to complete’ and stayed at 40% the whole time. Actual progress can be checked by going into the CLI and using ‘zpool status’. An update so that the GUI can show the progress of vdev removal would be nice.
  • And of course none of the Data pool drives was connected via USB. Just so we have no confusion here.
3 Likes