Replacing an apparently bad SSD

I discovered today that my SSD pool was reporting as degraded. I had a spare drive still in the package, the exact same brand and size. I found the culprit by taking every drive out one by one and putting the new drive in its place, and of course the bad drive turned out to be the last one I swapped. Go figure. (Is there any way to actually know which drive is bad? The numbers given by TrueNAS did not match anything printed on the drives.) I then tried to add the new drive to the pool so I could move forward. I went through the motions of doing so, just to get to the end of the process and hit an error saying the drive size is too small. All seven of the other SSDs are 1.86, but the new drive, for some reason, is reporting 1.82. Is there a way to overcome this problem? Thanks.

You know your hardware and setup.
You know what you did.
You know what makes you think one drive may be defective.

We don’t, and we cannot guess from your post.

Please start again with a better, more detailed description.

Here is the requested info. It was late and I was short on time. As I mentioned in the previous post, I have already removed the bad drive, and the new drive is installed. It is reporting a size of 1.82, while the others report 1.86. Hope this clears things up. I will get any further info needed when I get home from work. Thanks much.

admin@truenas[~]$ sudo zpool status -v

pool: boot-pool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using ‘zpool upgrade’. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 00:00:14 with 0 errors on Sat Apr 19 03:45:15 2025
config:

    NAME         STATE     READ WRITE CKSUM
    boot-pool    ONLINE       0     0     0
      nvme1n1p3  ONLINE       0     0     0

errors: No known data errors

pool: scale-vault-ssd
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using ‘zpool replace’.
see: Message ID: ZFS-8000-4J — OpenZFS documentation
scan: scrub repaired 0B in 02:42:48 with 0 errors on Sun Apr 20 14:03:02 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    scale-vault-ssd                           DEGRADED     0     0     0
      raidz1-0                                DEGRADED     0     0     0
        2273329e-107c-42d9-94de-9c16e3f47552  ONLINE       0     0     0
        4006181a-b8a1-493c-8e82-be56ebf3aa39  ONLINE       0     0     0
        9590694318437505551                   UNAVAIL      0     0     0  was /dev/disk/by-partuuid/407aa7f1-7abb-499e-b7f6-bb0521c45653
        d6a2733e-6bb3-4b4e-bb35-ab0a1bfa9c6b  ONLINE       0     0     0
        95ad6cf1-f7ee-4086-ac2c-2e8fd38b4ca8  ONLINE       0     0     0
        f3fc1fc6-1e70-4712-99e7-31780ddc04a6  ONLINE       0     0     0
        fdd7daea-bb7c-4876-8564-78dad62c88ed  ONLINE       0     0     0
        3225b919-7a69-4f48-a561-40d64dd66d1f  ONLINE       0     0     0
    cache
      nvme0n1p1                               ONLINE       0     0     0

errors: No known data errors

pool: strong-house
state: ONLINE
scan: scrub repaired 0B in 06:04:21 with 0 errors on Sun Apr 20 06:04:22 2025
config:

    NAME                                      STATE     READ WRITE CKSUM
    strong-house                              ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        5df6e9f9-220c-4dc5-9277-91867c2ecaaf  ONLINE       0     0     0
        d00d3c21-0fcf-48f0-b58c-0e545be9ccf4  ONLINE       0     0     0
        0916f645-4804-440c-bf19-efa1e2e5b42a  ONLINE       0     0     0
        15293ffe-2e15-45cd-84f6-3cc8fa3f7a3b  ONLINE       0     0     0
        081fbf40-127e-457f-bd94-fb00f247fc43  ONLINE       0     0     0
    cache
      nvme2n1p1                               ONLINE       0     0     0

errors: No known data errors

admin@truenas[~]$ sudo zpool list
NAME              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH    ALTROOT
boot-pool         912G  15.6G   896G        -         -    5%    1%  1.00x  ONLINE    -
scale-vault-ssd  14.9T  3.35T  11.5T        -         -    7%   22%  1.00x  DEGRADED  /mnt
strong-house     36.4T  14.4T  22.0T        -         -    4%   39%  1.00x  ONLINE    /mnt

admin@truenas[~]$ sudo zfs list
NAME USED AVAIL REFER MOUNTPOINT
boot-pool 15.6G 868G 96K none
boot-pool/.system 3.13M 868G 104K legacy
boot-pool/.system/configs-f1f5036a6e4448d09a9ddb3c45165866 96K 868G 96K legacy
boot-pool/.system/cores 96K 1024M 96K legacy
boot-pool/.system/netdata-f1f5036a6e4448d09a9ddb3c45165866 2.29M 868G 2.29M legacy
boot-pool/.system/nfs 124K 868G 124K legacy
boot-pool/.system/samba4 444K 868G 288K legacy
boot-pool/ROOT 15.6G 868G 96K none
boot-pool/ROOT/22.12.3.3 2.76G 868G 2.65G legacy
boot-pool/ROOT/22.12.4.2 2.65G 868G 2.65G legacy
boot-pool/ROOT/23.10.2 2.42G 868G 2.42G legacy
boot-pool/ROOT/24.04.2 2.40G 868G 164M legacy
boot-pool/ROOT/24.04.2.5 2.60G 868G 164M legacy
boot-pool/ROOT/24.04.2.5/audit 44.3M 868G 44.3M /audit
boot-pool/ROOT/24.04.2.5/conf 140K 868G 140K /conf
boot-pool/ROOT/24.04.2.5/data 548K 868G 296K /data
boot-pool/ROOT/24.04.2.5/etc 6.80M 868G 5.71M /etc
boot-pool/ROOT/24.04.2.5/home 244K 868G 136K /home
boot-pool/ROOT/24.04.2.5/mnt 104K 868G 104K /mnt
boot-pool/ROOT/24.04.2.5/opt 74.1M 868G 74.1M /opt
boot-pool/ROOT/24.04.2.5/root 272K 868G 160K /root
boot-pool/ROOT/24.04.2.5/usr 2.12G 868G 2.12G /usr
boot-pool/ROOT/24.04.2.5/var 194M 868G 35.1M /var
boot-pool/ROOT/24.04.2.5/var/ca-certificates 96K 868G 96K /var/local/ca-certificates
boot-pool/ROOT/24.04.2.5/var/log 158M 868G 77.0M /var/log
boot-pool/ROOT/24.04.2/audit 2M 868G 2M /audit
boot-pool/ROOT/24.04.2/conf 140K 868G 140K /conf
boot-pool/ROOT/24.04.2/data 92K 868G 292K /data
boot-pool/ROOT/24.04.2/etc 6.74M 868G 5.74M /etc
boot-pool/ROOT/24.04.2/home 0B 868G 136K /home
boot-pool/ROOT/24.04.2/mnt 104K 868G 104K /mnt
boot-pool/ROOT/24.04.2/opt 74.1M 868G 74.1M /opt
boot-pool/ROOT/24.04.2/root 8K 868G 160K /root
boot-pool/ROOT/24.04.2/usr 2.12G 868G 2.12G /usr
boot-pool/ROOT/24.04.2/var 36.4M 868G 34.4M /var
boot-pool/ROOT/24.04.2/var/ca-certificates 96K 868G 96K /var/local/ca-certificates
boot-pool/ROOT/24.04.2/var/log 1.04M 868G 84.2M /var/log
boot-pool/ROOT/25.04.0 2.74G 868G 174M legacy
boot-pool/ROOT/25.04.0/audit 1.67M 868G 1.67M /audit
boot-pool/ROOT/25.04.0/conf 6.97M 868G 6.97M /conf
boot-pool/ROOT/25.04.0/data 284K 868G 284K /data
boot-pool/ROOT/25.04.0/etc 7.02M 868G 6.11M /etc
boot-pool/ROOT/25.04.0/home 136K 868G 136K /home
boot-pool/ROOT/25.04.0/mnt 112K 868G 112K /mnt
boot-pool/ROOT/25.04.0/opt 96K 868G 96K /opt
boot-pool/ROOT/25.04.0/root 160K 868G 160K /root
boot-pool/ROOT/25.04.0/usr 2.51G 868G 2.51G /usr
boot-pool/ROOT/25.04.0/var 46.9M 868G 4.70M /var
boot-pool/ROOT/25.04.0/var/ca-certificates 96K 868G 96K /var/local/ca-certificates
boot-pool/ROOT/25.04.0/var/lib 26.9M 868G 26.6M /var/lib
boot-pool/ROOT/25.04.0/var/lib/incus 96K 868G 96K /var/lib/incus
boot-pool/ROOT/25.04.0/var/log 15.0M 868G 4.07M /var/log
boot-pool/ROOT/25.04.0/var/log/journal 11.0M 868G 11.0M /var/log/journal
boot-pool/ROOT/Initial-Install 8K 868G 2.65G /
boot-pool/grub 8.18M 868G 8.18M legacy
scale-vault-ssd 2.82T 9.58T 229K /mnt/scale-vault-ssd
scale-vault-ssd/.system 4.86G 9.58T 1.54G legacy
scale-vault-ssd/.system/configs-f1f5036a6e4448d09a9ddb3c45165866 73.0M 9.58T 71.4M legacy
scale-vault-ssd/.system/cores 350K 1024M 162K legacy
scale-vault-ssd/.system/ctdb_shared_vol 162K 9.58T 162K legacy
scale-vault-ssd/.system/glusterd 162K 9.58T 162K legacy
scale-vault-ssd/.system/netdata-f1f5036a6e4448d09a9ddb3c45165866 1.89G 9.58T 487M legacy
scale-vault-ssd/.system/nfs 216K 9.58T 216K legacy
scale-vault-ssd/.system/rrd-f1f5036a6e4448d09a9ddb3c45165866 87.6M 9.58T 87.6M legacy
scale-vault-ssd/.system/samba4 5.18M 9.58T 1.01M legacy
scale-vault-ssd/.system/services 162K 9.58T 162K legacy
scale-vault-ssd/.system/syslog-f1f5036a6e4448d09a9ddb3c45165866 30.1M 9.58T 30.1M legacy
scale-vault-ssd/.system/webui 162K 9.58T 162K legacy
scale-vault-ssd/Business 31.6G 9.58T 31.6G /mnt/scale-vault-ssd/Business
scale-vault-ssd/Computer-Docs 17.6M 9.58T 16.2M /mnt/scale-vault-ssd/Computer-Docs
scale-vault-ssd/Dino_iPhone_Photos 3.54G 9.58T 3.54G /mnt/scale-vault-ssd/Dino_iPhone_Photos
scale-vault-ssd/Family 4.72G 9.58T 4.72G /mnt/scale-vault-ssd/Family
scale-vault-ssd/Mens-Discipleship 7.45M 9.58T 7.07M /mnt/scale-vault-ssd/Mens-Discipleship
scale-vault-ssd/NRP-School 3.62M 9.58T 3.59M /mnt/scale-vault-ssd/NRP-School
scale-vault-ssd/Nextcloud_Missions 2.77G 9.58T 2.77G /mnt/scale-vault-ssd/Nextcloud_Missions
scale-vault-ssd/Quantum-Preaching-Class 229K 9.58T 229K /mnt/scale-vault-ssd/Quantum-Preaching-Class
scale-vault-ssd/iso-files 33.9G 9.58T 33.9G /mnt/scale-vault-ssd/iso-files
scale-vault-ssd/media 2.74T 9.58T 2.74T /mnt/scale-vault-ssd/media
scale-vault-ssd/vm-files 162K 9.58T 162K /mnt/scale-vault-ssd/vm-files
strong-house 8.50T 12.9T 227K /mnt/strong-house
strong-house/Acronis 4.25T 12.9T 4.25T /mnt/strong-house/Acronis
strong-house/Video 105G 12.9T 105G /mnt/strong-house/Video
strong-house/ix-applications 2.54G 12.9T 348K /mnt/strong-house/ix-applications
strong-house/ix-applications/catalogs 170K 12.9T 170K /mnt/strong-house/ix-applications/catalogs
strong-house/ix-applications/default_volumes 170K 12.9T 170K /mnt/strong-house/ix-applications/default_volumes
strong-house/ix-applications/k3s 2.54G 12.9T 2.54G /mnt/strong-house/ix-applications/k3s
strong-house/ix-applications/k3s/kubelet 774K 12.9T 774K legacy
strong-house/ix-applications/releases 1.98M 12.9T 199K /mnt/strong-house/ix-applications/releases
strong-house/ix-applications/releases/cloudflared 881K 12.9T 185K /mnt/strong-house/ix-applications/releases/cloudflared
strong-house/ix-applications/releases/cloudflared/charts 355K 12.9T 355K /mnt/strong-house/ix-applications/releases/cloudflared/charts
strong-house/ix-applications/releases/cloudflared/volumes 341K 12.9T 170K /mnt/strong-house/ix-applications/releases/cloudflared/volumes
strong-house/ix-applications/releases/cloudflared/volumes/ix_volumes 170K 12.9T 170K /mnt/strong-house/ix-applications/releases/cloudflared/volumes/ix_volumes
strong-house/ix-applications/releases/collabora 952K 12.9T 185K /mnt/strong-house/ix-applications/releases/collabora
strong-house/ix-applications/releases/collabora/charts 426K 12.9T 426K /mnt/strong-house/ix-applications/releases/collabora/charts
strong-house/ix-applications/releases/collabora/volumes 341K 12.9T 170K /mnt/strong-house/ix-applications/releases/collabora/volumes
strong-house/ix-applications/releases/collabora/volumes/ix_volumes 170K 12.9T 170K /mnt/strong-house/ix-applications/releases/collabora/volumes/ix_volumes
strong-house/ix-apps 1.19G 12.9T 213K /mnt/.ix-apps
strong-house/ix-apps/app_configs 1.75M 12.9T 1.75M /mnt/.ix-apps/app_configs
strong-house/ix-apps/app_mounts 170K 12.9T 170K /mnt/.ix-apps/app_mounts
strong-house/ix-apps/docker 1.19G 12.9T 1.19G /mnt/.ix-apps/docker
strong-house/ix-apps/truenas_catalog 170K 12.9T 170K /mnt/.ix-apps/truenas_catalog
strong-house/photos 3.24T 12.9T 3.24T /mnt/strong-house/photos
strong-house/xobackup

admin@truenas[~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1.9T 0 disk
├─sda1 8:1 0 2G 0 part
└─sda2 8:2 0 1.9T 0 part
sdb 8:16 0 1.9T 0 disk
├─sdb1 8:17 0 2G 0 part
└─sdb2 8:18 0 1.9T 0 part
sdc 8:32 0 1.9T 0 disk
├─sdc1 8:33 0 2G 0 part
└─sdc2 8:34 0 1.9T 0 part
sdd 8:48 0 1.9T 0 disk
├─sdd1 8:49 0 2G 0 part
└─sdd2 8:50 0 1.9T 0 part
sde 8:64 0 1.8T 0 disk
sdf 8:80 0 1.9T 0 disk
├─sdf1 8:81 0 2G 0 part
└─sdf2 8:82 0 1.9T 0 part
sdg 8:96 0 1.9T 0 disk
├─sdg1 8:97 0 2G 0 part
└─sdg2 8:98 0 1.9T 0 part
sdh 8:112 0 7.3T 0 disk
├─sdh1 8:113 0 2G 0 part
└─sdh2 8:114 0 7.3T 0 part
sdi 8:128 0 7.3T 0 disk
├─sdi1 8:129 0 2G 0 part
└─sdi2 8:130 0 7.3T 0 part
sdj 8:144 0 1.9T 0 disk
├─sdj1 8:145 0 2G 0 part
└─sdj2 8:146 0 1.9T 0 part
sdk 8:160 0 7.3T 0 disk
├─sdk1 8:161 0 2G 0 part
└─sdk2 8:162 0 7.3T 0 part
sdl 8:176 0 7.3T 0 disk
├─sdl1 8:177 0 2G 0 part
└─sdl2 8:178 0 7.3T 0 part
sdm 8:192 0 7.3T 0 disk
├─sdm1 8:193 0 2G 0 part
└─sdm2 8:194 0 7.3T 0 part
nvme0n1 259:0 0 238.5G 0 disk
└─nvme0n1p1 259:1 0 238.5G 0 part
nvme1n1 259:2 0 931.5G 0 disk
├─nvme1n1p1 259:3 0 1M 0 part
├─nvme1n1p2 259:4 0 512M 0 part
├─nvme1n1p3 259:5 0 915G 0 part
└─nvme1n1p4 259:6 0 16G 0 part
nvme2n1 259:7 0 476.9G 0 disk
└─nvme2n1p1 259:8 0 476.9G 0 part
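(Regarding the earlier question about knowing which drive is bad: the partuuid that `zpool status` prints for the UNAVAIL member can be traced back to a physical drive while the disk is still detected, and the serial number it reports matches the sticker on the drive. A sketch, assuming GNU coreutils and util-linux are available; the partuuid below is taken from the zpool output above:)

```shell
# The UNAVAIL line in `zpool status` shows the old partuuid of the missing
# member. While a disk is still attached, that partuuid resolves to a device
# node, and the drive's serial number matches the label on the drive itself.
PARTUUID="407aa7f1-7abb-499e-b7f6-bb0521c45653"   # from the zpool output above

DEV=$(readlink -f "/dev/disk/by-partuuid/$PARTUUID" 2>/dev/null)
if [ -n "$DEV" ] && [ -e "$DEV" ]; then
    DISK=$(lsblk -no PKNAME "$DEV")          # parent disk, e.g. sde
    lsblk -dno NAME,SERIAL,MODEL "/dev/$DISK"
else
    echo "partuuid not found: the disk is dead or already pulled"
fi
```

`sudo smartctl -i /dev/sdX` (smartmontools) also prints the serial number of each disk, which avoids the pull-drives-one-by-one dance as long as the failing disk still enumerates.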

If you're replacing the boot SSD, then simply save the config file, do a clean install, and import the config.

Hello, and thanks for your response. However, it is not the boot drive; I am able to boot into the system just fine. I have eight SSDs and one of them failed. After figuring out which SSD was bad, I replaced it with the exact same brand and size of SSD. I powered the system back on and tried to add the new SSD to the pool. I went through the entire process, and at the end I was told the new SSD could not be added to the pool because, at 1.82, its size is too small. As mentioned above, all of the other SSDs register as 1.86, so it will not allow me to add the one showing 1.82. Is there a way to get around this?

Unfortunately that does happen, and you are not the only one to experience it. One reason for a swap partition is to absorb exactly this kind of fluctuation in drive capacity. SCALE does not use swap, but we have made an argument to either include swap or to tell the system to use XX MB less, so that this problem would no longer occur.

Your only option may be to install a larger drive; as long as it is larger, it works.

Wish it were better news.
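The rounded figures (1.86 vs. 1.82) hide the real shortfall; comparing exact byte counts shows it immediately. A minimal sketch using util-linux tools (`sde` is an example taken from the lsblk output earlier in the thread):

```shell
# Exact whole-disk capacities in bytes (-b = bytes, -d = disks only, no
# partitions). ZFS requires a replacement at least as large as the member
# it replaces, so compare the new disk's byte count against the others.
lsblk -b -d -o NAME,SIZE,MODEL
# For a single disk (sde is the new, smaller SSD in the output above):
#   sudo blockdev --getsize64 /dev/sde
```

If the new drive's byte count is even one byte short of the smallest existing member, the replacement will be refused.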

Hello, and thanks for your response. I was afraid of that. I have already ordered the replacement; it was double the price, since I had to go to a 4 TB drive.

Mmmh, isn’t sdj1 a swap partition? If this is an older pool, it might still have them?

But I'm not sure how you would actually go about preparing a slightly smaller disk so that this works.

Also, it may be worth checking whether the SSD manufacturer has a software tool that allows changing the overprovisioning, thereby freeing up the little extra space needed?

The swap/buffer of 2 GB can only accommodate minor variations in drive size.
1.86 vs. 1.82 TiB is about 40 GB.
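To put rough numbers on that: the byte counts below are assumptions (typical for a 2,048 GB-class vs. a 2,000 GB-class "2 TB" SSD, not read from the actual drives), but the order of magnitude is the point:

```shell
# Assumed raw capacities: a 2,048 GB-class SSD (~1.86 TiB) vs. a
# 2,000 GB-class SSD (~1.82 TiB). Real drives will differ slightly.
OLD=2048000000000
NEW=2000000000000
GAP=$(( OLD - NEW ))
echo "shortfall: $(( GAP / 1024 / 1024 / 1024 )) GiB"   # prints: shortfall: 44 GiB
```

A gap of that size is far beyond anything a 2 GB swap-sized buffer could absorb, which is why only a genuinely larger drive works here.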

Somehow I completely overlooked that… :joy: