Hi everyone,
I’m new here and also new to TrueNAS/Linux, so go easy on me. I’m a quick learner and fairly computer savvy — but I’ll admit, I’m also lazy. I used AI to help me write full shell commands to get things done (yeah, I know, big no-no — it parrots old cached data, yadda yadda).
So, my dilemma…
I was running a Pi server before, so I know a little about CLI/shell commands. Recently, I built a PC for my personal server and installed TrueNAS Scale along with Emby, the arr apps, torrenting tools, etc. Everything was going great until I ran into a problem — I wanted to add drives to my existing VDEV.
I asked AI about it, and it kept telling me, “Not possible, vdevs are immutable.” Then it said, “VDEV expansion was added but it’s a one-time thing.”
After doing my own research, I came across a post here that said it’s doable and not a one-time thing. I even linked those articles back to AI, and it pivoted, saying I was referencing old data using old TrueNAS versions. Well, no shit, Sherlock.
I asked AI to help me add a drive to my VDEV. I tried using the Web UI first, but it kept silently failing. AI then told me that I could wipe the partition data of the drive I was trying to add because of “ghost ZFS files/partitions that are stubborn to remove.”
I ran lsblk to find the unused drive:
root@truenas[~]# lsblk -f
NAME     FSTYPE      FSVER  LABEL        UUID                 FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1   zfs_member  5000   MainStorage  5306980540843714031
sdb
sdc      zfs_member  5000   MainStorage  5306980540843714031
sdd
└─sdd1   zfs_member  5000   MainStorage  5306980540843714031
sde
└─sde1   zfs_member  5000   MainStorage  5306980540843714031
I ran the wipe commands — all went through except one, which was “silently failing”:
root@truenas[~]# sudo wipefs -a /dev/sdb
root@truenas[~]# sudo parted /dev/sdb mklabel gpt
Information: You may need to update /etc/fstab.
root@truenas[~]#
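(Side note: from reading around afterwards, it sounds like leftover ZFS labels are usually cleared with zpool labelclear rather than just wipefs. Something like the below is what I'd try next time; I haven't actually run it yet, so treat it as a sketch rather than gospel:)

# check whether a stale ZFS label is still on the disk (read-only, just prints labels)
sudo zdb -l /dev/sdb
# clear the stale label, then wipe other signatures and write a fresh GPT
sudo zpool labelclear -f /dev/sdb
sudo wipefs -a /dev/sdb
sudo parted /dev/sdb mklabel gpt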
I tried the Web UI again to add it. It popped up with the confirmation box and flashed “Expansion completed,” but it was still silently failing (as I later discovered).
root@truenas[~]# zpool status MainStorage
pool: MainStorage
state: ONLINE
scan: scrub repaired 0B in 06:46:06 with 0 errors on Sat Oct 11 12:10:00 2025
config:

	NAME                                      STATE     READ WRITE CKSUM
	MainStorage                               ONLINE       0     0     0
	  raidz2-0                                ONLINE       0     0     0
	    e9be84ad-ba28-4dfd-aad1-0728d742df4a  ONLINE       0     0     0
	    86dac440-ebeb-4f7e-b6b7-58135f612e71  ONLINE       0     0     0
	    d77a8882-ab53-4f3d-b569-10003249f2d1  ONLINE       0     0     0
	    ata-ST4000VN006-3CW104_WW66XHF5       ONLINE       0     0     0

errors: No known data errors
I ran zpool status MainStorage to check if the new drive was added — it wasn’t.
So I asked AI again, and it gave me commands to wipe the partition table of sdb, the drive I was trying to add. Turns out sdb wasn't the problem.
When I tailed the error logs, I saw this message:
[2025/10/11 05:41:22] (DEBUG) middlewared.plugins.pool_.expand.expand():57 - Not expanding vdev('10015679014073816679'): 'Unable to find partition data for sdc'
Looking back at the lsblk output, I think that error makes sense: sdc shows a zfs_member label on the whole disk, with no sdc1 partition for the middleware to find. Anyway, AI told me to replace sdc with sdb, but it didn't tell me that once I did this, I couldn't add sdc back to the pool/vdev. My plan was to replace sdc with sdb, then wipe sdc (clear the stale label and create a new partition table), and re-add it to the pool as the extra drive.
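For context, the replace itself was something along these lines (I'm paraphrasing; the device arguments below are placeholders, not what was literally typed):

# pool name, old device as shown in zpool status, new device
sudo zpool replace MainStorage <old-sdc-identifier-from-zpool-status> /dev/sdb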
AI then told me it was impossible to re-add a drive that had been removed via the replace command, because TrueNAS/ZFS stores the GUIDs of drives that were in the pool to prevent accidentally re-adding them — as a sort of fail-safe.
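Before I touch sdc any further, I figured I should at least look at what's still on it. From what I've read, these should be read-only checks that just print what's there (happy to be corrected if that's wrong):

# print any ZFS label(s) still sitting on the whole disk
sudo zdb -l /dev/sdc
# see what blkid thinks is on it
sudo blkid /dev/sdc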
My Questions
- Is that true? I'm reluctant to believe AI at this point, considering all the other false or misleading responses I got.
- Can I re-add the drive somehow?
- Is it even possible to fix this situation?
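Happy to post more output if it helps. I can run something like this and paste the results (just my own guess at what's useful, let me know if there's a better set of commands):

sudo zpool status -v MainStorage
sudo zpool history MainStorage | tail -n 30
lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,PARTUUID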
PSA: Don’t use AI — it’s useless and turning our brains to mush.
Lesson learned, I guess.
Is anyone here smarter than me? Probably — I mean, using AI already puts me at a disadvantage.
TL;DR:
Accidentally replaced the wrong disk (sdc) in my TrueNAS pool while trying to add another (sdb). Now I can’t seem to re-add it. Looking for guidance on how to recover or reintroduce that drive to the pool.


