New to TrueNAS/Linux – Need Help with Adding a Drive to My VDEV

Hi everyone,

I’m new here and also new to TrueNAS/Linux, so go easy on me. I’m a quick learner and fairly computer savvy — but I’ll admit, I’m also lazy. I used AI to help me write full shell commands to get things done (yeah, I know, big no-no — it parrots old cached data, yadda yadda).

So, my dilemma…

I was running a Pi server before, so I know a little about CLI/shell commands. Recently, I built a PC for my personal server and installed TrueNAS Scale along with Emby, the arr apps, torrenting tools, etc. Everything was going great until I ran into a problem — I wanted to add drives to my existing VDEV.

I asked AI about it, and it kept telling me, “Not possible, vdevs are immutable.” Then it said, “VDEV expansion was added but it’s a one-time thing.”

After doing my own research, I came across a post here that said it’s doable and not a one-time thing. I even linked those articles back to AI, and it pivoted, saying I was referencing old data using old TrueNAS versions. Well, no shit, Sherlock.
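For anyone else landing here: what I eventually pieced together is that this is RAIDZ expansion, which arrived with OpenZFS 2.3 and has been in TrueNAS SCALE since 24.10 "Electric Eel", and it can be repeated one disk at a time. As a rough sketch of what the Web UI is doing under the hood (my understanding only, device names are just examples, don't copy-paste blindly):

# check the pool actually has the feature available
zpool get feature@raidz_expansion MainStorage

# attach one new disk to the existing raidz2 vdev (example device)
sudo zpool attach MainStorage raidz2-0 /dev/sdb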

I asked AI to help me add a drive to my VDEV. I tried using the Web UI first, but it kept silently failing. AI then told me that I could wipe the partition data of the drive I was trying to add because of “ghost ZFS files/partitions that are stubborn to remove.”

I ran lsblk to find the unused drive:

root@truenas[~]# lsblk -f
NAME     FSTYPE  FSVER  LABEL        UUID                 FSAVAIL  FSUSE%  MOUNTPOINTS
sda
└─sda1   zfs_me  5000   MainStorage  5306980540843714031
sdb
sdc      zfs_me  5000   MainStorage  5306980540843714031
sdd
└─sdd1   zfs_me  5000   MainStorage  5306980540843714031
sde
└─sde1   zfs_me  5000   MainStorage  5306980540843714031

I ran the wipe commands — all went through except one, which was “silently failing”:

root@truenas[~]# sudo wipefs -a /dev/sdb
root@truenas[~]# sudo parted /dev/sdb mklabel gpt

Information: You may need to update /etc/fstab.

root@truenas[~]#
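
(For context on why AI suggested this: ZFS keeps label copies at both the start and the end of a device, so a plain wipefs/parted pass can miss them. As I understand it, a more thorough wipe would look something like the lines below, but only ever against a disk you are 100% sure is not part of the pool:

sudo zpool labelclear -f /dev/sdb    # clears any leftover ZFS label
sudo sgdisk --zap-all /dev/sdb       # destroys the GPT and MBR structures
)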

I tried the Web UI again to add it. It popped up with the confirmation box and flashed “Expansion completed,” but it was still silently failing (as I later discovered).

root@truenas[~]# zpool status MainStorage

pool: MainStorage
state: ONLINE
scan: scrub repaired 0B in 06:46:06 with 0 errors on Sat Oct 11 12:10:00 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    MainStorage                               ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        e9be84ad-ba28-4dfd-aad1-0728d742df4a  ONLINE       0     0     0
        86dac440-ebeb-4f7e-b6b7-58135f612e71  ONLINE       0     0     0
        d77a8882-ab53-4f3d-b569-10003249f2d1  ONLINE       0     0     0
        ata-ST4000VN006-3CW104_WW66XHF5       ONLINE       0     0     0

errors: No known data errors

I ran zpool status MainStorage to check if the new drive was added — it wasn’t.

So I asked AI again, and it gave me commands to wipe the partition table of sdb, the drive I was trying to add. It turns out sdb wasn't the problem.

When I tailed the error logs, I saw this message:

[2025/10/11 05:41:22] (DEBUG) middlewared.plugins.pool_.expand.expand():57 - Not expanding vdev('10015679014073816679'): 'Unable to find partition data for sdc'
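
Reading that now, the error is about sdc, not sdb, so the useful check would have been whether sdc still has partition data the middleware can read. Read-only ways to look (example commands; nothing here writes to the disk):

sudo parted /dev/sdc print              # show the partition table, if any
lsblk -o NAME,FSTYPE,PARTUUID /dev/sdc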

So AI told me to replace sdc with sdb, but it didn't tell me that once I did this, I couldn't add sdc back to the pool/vdev. My plan was to replace sdc with sdb, clear and create a new partition table on sdc, and then re-add sdc to the pool.

AI then told me it was impossible to re-add a drive that had been removed via the replace command, because TrueNAS/ZFS stores the GUIDs of drives that were in the pool to prevent accidentally re-adding them — as a sort of fail-safe.
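
For what it's worth, there are read-only ways to see what ZFS actually records, rather than taking AI's word for it (my understanding only; zdb -l just dumps the on-disk labels):

zpool status -g MainStorage    # shows the member devices by GUID
sudo zdb -l /dev/sdc           # dumps any ZFS label left on the removed disk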

My Questions

Is that true? I’m reluctant to believe AI at this point, considering all the other false or misleading responses I got.

  • Can I re-add the drive somehow?
  • Is it even possible to fix this situation?

PSA: Don’t use AI — it’s useless and turning our brains to mush.

Lesson learned, I guess.

Is anyone here smarter than me? Probably — I mean, using AI already puts me at a disadvantage.

TL;DR:
Accidentally replaced the wrong disk (sdc) in my TrueNAS pool while trying to add another (sdb). Now I can’t seem to re-add it. Looking for guidance on how to recover or reintroduce that drive to the pool.

We will need to sort out your hardware, OS version, and pool layout. Drives can move around across reboots: sda can become sdb the next time. Please keep track of the individual disks by their serial numbers.
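
A quick, read-only way to map sdX names to serials is something like:

lsblk -o NAME,SIZE,MODEL,SERIAL
ls -l /dev/disk/by-id/           # stable names that include the serials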

Work through getting your trust level up so you can post images, and then we can work on everything else with images and/or the CLI, posting results back using Preformatted Text (</>) or Ctrl+E on the reply toolbar.

Browse some other threads and do the tutorial from the bot to get your forum trust level up so you can post images and links.

TrueNAS-Bot
Type this in a new reply and send to bring up the tutorial, if you haven’t done it already.

@TrueNAS-Bot start tutorial


:point_up:

Also use the GUI whenever possible, and the CLI only to check but not to make changes.
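
Safe, read-only examples you can run and paste back here:

zpool status -v MainStorage
zpool list -v
lsblk -o NAME,SIZE,SERIAL,FSTYPE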


@TrueNAS-Bot start tutorial

The PC has been running all night with no reboots, so the sdX assignments should be the same as when I posted for help.

Hardware

  • CPU: 13th Gen Intel Core i3-13100
  • Memory: 16 GB
  • Motherboard: Gigabyte B760 DS3H AX DDR4
  • HDDs: 5 × 4 TB Seagate IronWolf NAS
    • 3 drives purchased separately (show as WW)
    • 2 drives pulled from a Synology NAS (show as ZD)
  • NVMe (OS/Boot): 256 GB Crucial MP500
  • NVMe (Cache/Temp): 1 TB drive used for transcoding temp files, app configs, and metadata
  • OS Version: TrueNAS Scale 25.04 “Fangtooth”

Pool Layout:

sda        8:0    0  3.6T  0 disk - serial WW66XGVW
└─sda1     8:1    0  3.6T  0 part

sdb        8:16   0  3.6T  0 disk - serial WW66G9YK

sdc        8:32   0  3.6T  0 disk - serial WW66XHF5

sdd        8:48   0  3.6T  0 disk - serial ZDHB5MY9
└─sdd1     8:49   0  3.6T  0 part

sde        8:64   0  3.6T  0 disk - serial ZDHB5N5A
└─sde1     8:65   0  3.6T  0 part

nvme0n1    259:0  0  223.6G  0 disk - serial 171479570001233909B3
├─nvme0n1p1 259:1 0  1M      0 part
├─nvme0n1p2 259:2 0  512M    0 part
└─nvme0n1p3 259:3 0  223.1G  0 part

nvme1n1    259:4  0  931.5G  0 disk - serial 2522AG400088
└─nvme1n1p1 259:5 0  929.5G  0 part

My guess is that the WW serials are the 3 IronWolfs I purchased and the ZD serials are the two IronWolfs that were in the Synology NAS.

Setup:

First Pool (RAIDZ2): MainStorage is my main pool, consisting of 4 × 4 TB IronWolfs. It's currently resilvering from the replace command I ran last night, which I think is why one of the sdX1 partitions is missing; it's replacing sdc with sdb.

Second Pool: MediaStack-CacheDrive (AI gave me the name; I couldn't think of what to call it).
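
Since the resilver mentioned above is still running, I'm keeping an eye on it with this (read-only, it just reports progress and an estimated completion time):

zpool status -v MainStorage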

Hope this helps.

Appreciate any and all replies.

System has been online for a while now, no reboots.

Pool status after resilvering sdc with sdb: (screenshot attached)

Pool disk status: (screenshot attached)

Serials matched to sdX names: (screenshot attached)

The GUI still doesn't let me add sdc: the Expand option won't grab the HDD. I went through adding it via the vdev route instead, and that appeared to work, but I didn't commit to it because that vdev wouldn't have any redundancy.
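
From the reading I've done since, the distinction I nearly tripped over is that the "add a vdev" route creates a new single-disk stripe vdev with no redundancy, while RAIDZ expansion attaches the disk to the existing raidz2-0. Hedged CLI equivalents, just to illustrate the difference (not commands I've run):

# "vdev route": adds a separate single-disk vdev with no redundancy
# (zpool would warn about a mismatched replication level and want -f)
sudo zpool add MainStorage /dev/sdc

# RAIDZ expansion: grows the existing raidz2 vdev by one disk
sudo zpool attach MainStorage raidz2-0 /dev/sdc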

I would generally try to figure this out myself, wipe it, etc., but I think it's best I wait for someone more knowledgeable than me.

My forum trust level is still only 1; hopefully you can see these images :sob: