I have a pool (“pool_a”) consisting of 2 x 2 TB SSDs in a mirrored arrangement in a PC with only two SATA ports available for storage disks. One of the SSDs in the mirror failed. I was hoping to repair the mirror by replacing the failed SSD with a new 2 TB one. To my disappointment, I found that the new SSD I bought is slightly smaller than the failed one (1.82 TiB vs 1.86 TiB). This appears to make it impossible to repair the original mirror (which seems to require replacement disks of the same or larger size).
Fortunately, I bought two of the new SSDs (same make, model and size). I was hoping to build a redundant system again by:
1. Creating a new pool (“pool_b”) with one of the new 1.82 TiB SSDs.
2. Replicating pool_a to pool_b.
3. Physically replacing the single 1.86 TiB SSD of pool_a with the second 1.82 TiB SSD.
4. Adding the second 1.82 TiB SSD to pool_b to create a mirror again and letting it resilver from the first 1.82 TiB SSD.
I’m currently stuck at step 2: attempting to replicate via a GUI replication task, I’m getting only the top-level dataset (“dataset_1”) despite checking the “Recursive” option on the task (the replication task finishes almost immediately, resulting in an empty /mnt/pool_b/dataset_1 folder when I’d expect it to contain folders for numerous child datasets and subfolders).
What might I be doing wrong? Is there an alternative way to re-create a mirrored setup? Searching the web, I find several examples of people doing what I’m aiming for via “zfs send -R pool_A | zfs receive -F pool_B”. However, “zfs” does not appear to be an executable program on my TrueNAS CE system (/bin doesn’t contain “zfs”). Do I need to explicitly create snapshots for the replication to work? My (child) datasets are ACLed restrictively; is it possible that the replication fails because of that? (I’m logged on as truenas_admin when triggering the replication.)
I’m a complete TrueNAS/Linux newb and probably missing something super-obvious. I’d appreciate it if this thread could focus on TrueNAS/ZFS configuration rather than my thin/outdated HW setup.
I don’t know the GUI way, but I do use a command-line method to clone the OS pool on my Linux computers (all of which use ZFS) to alternate media for recovery.
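The script itself isn’t reproduced here, but the general shape of it is roughly the following (a sketch only; the pool names are the ones from this thread, and the exact send/receive flags are assumptions rather than my exact script):

# take a recursive snapshot of the source pool
zfs snapshot -r pool_a@${MY_SNAP}
# send the whole pool recursively and receive it into the new pool
zfs send -R pool_a@${MY_SNAP} | zfs receive -F pool_b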
Replace ${MY_SNAP} with a unique snapshot name. Then replace the pool names as well.
When done, you can export the original pool and rename the new pool to be the same as the old pool. That is covered in a Resource forum post if I remember correctly.
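From the command line, that rename looks roughly like this (a sketch; on TrueNAS the export and import are better done through the GUI so the middleware stays aware of the change):

zpool export pool_a
zpool export pool_b
# importing under a new name renames the pool
zpool import pool_b pool_a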
Now, on the subject of new pool creation: there is some wisdom in creating the new pool with reduced features. For example, creating the new pool using an older version of TrueNAS would allow that pool to still be used with the older version.
However, if you create the new pool with the latest version of TrueNAS, any feature that is active and not available on an earlier version of TrueNAS means you can’t ever use that earlier version with the pool. Such a pool is forward compatible, not backwards compatible.
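If you do create a pool from the command line and want to pin it to an older feature set, OpenZFS has a compatibility property for exactly this purpose (a sketch; the feature-set name and device are examples, and on TrueNAS the GUI normally creates pools for you):

# create a pool limited to the features of a given OpenZFS release
zpool create -o compatibility=openzfs-2.1-linux pool_b /dev/sdb
# see which features are already active on an existing pool
zpool get all pool_b | grep feature@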
Note: I have been told that one or more of the Send or Receive parameters are redundant. Since it works anyway, I have not bothered to update my script that does the work.
Thanks Arwen! Any thoughts on why ‘zfs’ does not appear to be an executable program on my TrueNAS CE system? Is it contained in a folder other than /bin?
For older versions of TrueNAS CE, yes, the ZFS binaries were in /sbin. However, the commands I listed must be run as user root, which should already have the sbin paths included in its PATH.
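A quick way to confirm where the binary lives and that it runs with elevated privileges (assuming your admin account has sudo rights; these commands only read information):

ls -l /sbin/zfs /usr/sbin/zfs
sudo zfs version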
Thanks Arwen! I’d also discovered the zfs tool in /sbin, and that it requires running as root (vs. the truenas_admin account, which seems to be the only one I can use to log into the TrueNAS CE web GUI). I also learned how to use ssh from my Windows PC, which makes it a lot more convenient to use the CLI tools.
I’ve been able to replicate my pool via ‘zfs send … | zfs receive‘, verified the correctness of the replicated pool and replaced the disk of the old pool with the disk I want to use to turn the new pool into a mirror.
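For reference, the kind of check I used to convince myself the copy was complete was roughly this (a sketch, not necessarily what I ran verbatim; the columns are just examples):

# compare the dataset trees and space usage of the two pools
zfs list -r -o name,used,refer pool_a
zfs list -r -o name,used,refer pool_b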
Unfortunately, it seems that that step is not supported by the GUI (the only way to extend the pool seems to be by “concatenating” the new disk to the existing one, resulting in a 4 TB non-mirrored pool instead of a 2 x 2 TB mirror pool like I want).
Searching the web, it seems folks managed to turn a single-drive pool into a mirror pool with a non-trivial sequence of ‘gpart’ and ‘zpool‘ commands. My TrueNAS version does not seem to have a ‘gpart’ tool. Any thoughts on what might work as a replacement on current TrueNAS CE distros?
Yes, taking a single-disk vDev and adding a mirror to it is (or should be) possible in the GUI. I don’t have the details memorized, but I think it is not in the pool part of the GUI, but under the vDev part of the GUI. The vDev part of the GUI would show the single disk, which should have a place where you can “extend” it.
Anyway, perhaps someone else can walk you through that part. Or the manual might help.
Thanks Arwen! I searched through the GUI again, but still couldn’t find a way to turn a single-drive vdev into a mirror via the GUI. However, I found a way to do this via ‘zpool attach’ (and w/o the gnarly-looking preparatory ’gpart’ commands I found on the web). The key phrase from ‘zpool help attach’ that encouraged me to explore this path was
If device is not currently part of a mirrored configuration,
device automatically transforms into a two-way mirror of device and new_device.
I first attempted zpool attach WhirlPool sdc sdb (with sdc being the disk already in the pool and sdb being the disk I wanted to add to form a mirror). This resulted in
cannot attach sdb to sdc: no such device in pool.
Searching the web for this error showed people using GUIDs they obtained via zdb -l disk (in my case, zdb -l sdc). Using that GUID via zpool attach WhirlPool 1424...5571 sdb seems to have accomplished what I was attempting.
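In case it helps someone searching later, the full sequence was roughly the following (the GUID is abbreviated here, and I’ve written the device path out in full; the value comes from the zdb label dump):

# read the ZFS label on the existing pool member to find its vdev GUID
zdb -l /dev/sdc
# attach the new disk to that GUID, turning the single disk into a two-way mirror
zpool attach WhirlPool 1424...5571 sdb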
While this may seem to work well, there could be one minor problem. It is hard to explain, but the size of the new disk could be larger than the existing disk. This is NOT a problem for redundancy. But during replacement of the old disk, it may bite you, because you added the new disk in a way that may have used very slightly more space than the existing disk.
You can check with:
lsblk -b /dev/sdc
lsblk -b /dev/sdb
Next, while using the sdb name instead of the GUID for the attach command worked, it likely caused the pool to always use the sdb name. (Unless you manually fix it…) However, having the names mis-matched like this might not cause any problems.
This can be checked with:
zpool status WhirlPool
Both of these are fixable, though you may have to detach the sdb mirror member and re-attach it with either the GUI or a partition that is sized correctly.
A reboot may solve the second. Or a pool export and import from the GUI may do it. But then again, it may persist unless you manually fix it.
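If you do decide to redo the attach, the rough shape would be something like this (a sketch only; the placeholders in angle brackets are yours to fill in, and the GUI “Extend” route avoids the manual partitioning entirely):

# remove the new disk from the mirror; the pool keeps running on the remaining disk
zpool detach WhirlPool sdb
# then re-attach either through the GUI (Extend on the vDev), or with something like:
# zpool attach WhirlPool <guid-of-remaining-disk> <correctly-sized-partition>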
Thanks Arwen, those are some amazing insights. Indeed, the lsblk output is different:
> lsblk -b /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdc 8:32 0 2000398934016 0 disk
> lsblk -b /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb 8:16 0 2000398934016 0 disk
└─sdb1 8:17 0 1998251360256 0 part
… with sdc being the initial disk, and sdb the second I attached to form a mirror.
You were also correct that the mirror was originally referring to the second disk as sdb:
> zpool status WhirlPool
pool: WhirlPool
state: ONLINE
scan: resilvered 396G in 01:35:23 with 0 errors on Sun Nov 2 14:33:42 2025
config:
NAME STATE READ WRITE CKSUM
WhirlPool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c02e3262-574b-44f9-80f0-763bc076a62a ONLINE 0 0 0
sdb ONLINE 0 0 0
errors: No known data errors
… and that after a reboot that would change:
> zpool status WhirlPool
pool: WhirlPool
state: ONLINE
scan: resilvered 396G in 01:35:23 with 0 errors on Sun Nov 2 14:33:42 2025
config:
NAME STATE READ WRITE CKSUM
WhirlPool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c02e3262-574b-44f9-80f0-763bc076a62a ONLINE 0 0 0
ata-CT2000BX500SSD1_2530E9C75A81 ONLINE 0 0 0
errors: No known data errors
FWIW, the post-reboot ID (“ata-CT…A81”) incorporates the model and serial number of the disk.
I assume the lsblk difference is what you think may interfere with an exchange of the initial disk in the mirror (sdc)? If so, how would I go about correcting this?
The confusion comes because normally TrueNAS partitions the disks, as shown for sdb. But if sdc was the original and you attached sdb (and NOT sdb1), then there is no size difference. Just a useless, and possibly confusing, partition 1. You could delete it, but many partitioning tools won’t let you do so while the disk is in use.
As for why the pool status now shows a different labeling for sdb, not the GUID, that would take a bit too much time for me to figure out. As long as it works, you are good to go. Just keep that in the back of your mind.
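If you want to keep an eye on it, you can always see exactly which device paths the pool members resolve to (read-only checks, nothing changes):

# show vdev members with their full device paths
zpool status -P WhirlPool
# list the stable by-id names Linux assigns to each disk
ls -l /dev/disk/by-id/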