P.S. Not an urgent point, but using a SanDisk USB flash drive as a boot device is not a great idea:
TrueNAS SCALE boot drives get frequent writes and a flash drive will not last long (but USB SSDs are fine in this regard).
USB boot drives are not officially supported by iX/TrueNAS because they are less reliable, and some ports suffer USB disconnects, at which point your NAS will hang. I know this because I use a USB boot drive myself and was seeing this happen regularly - fortunately I was able to switch to a different USB port where it doesn’t happen. But when you have limited SATA ports, sometimes this is a choice that you have to make.
Several manufacturers sell drives that look just like USB flash drives but are actually USB SSDs. I am not sure whether yours is one of them.
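If you want to check, the command line can usually tell: a genuine SSD bridge normally reports proper SMART data and TRIM support, while a plain flash drive usually does not. A quick sketch, assuming the stick shows up as sde (as in the lsblk output in this thread):
sudo smartctl -i /dev/sde   # a real SSD usually reports a proper model/firmware and SMART capability (add -d sat for some USB bridges)
lsblk -D /dev/sde           # non-zero DISC-GRAN/DISC-MAX values indicate TRIM support, typical of SSDs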
lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL ROTA PTTYPE TYPE START SIZE PARTTYPENAME PARTUUID
sda WDC WD20EZAZ-00GGJB0 1 gpt disk 2000398934016
├─sda1 1 gpt part 128 2147483648 FreeBSD swap 38821853-ae6b-11e9-9bf2-9418823802ac
└─sda2 1 gpt part 4194432 1998251364352 FreeBSD ZFS 388d8965-ae6b-11e9-9bf2-9418823802ac
sdb WDC WD40EFAX-68JH4N1 1 gpt disk 4000787030016
├─sdb1 1 gpt part 128 2147483648 FreeBSD swap bb51a74b-396b-11ee-bf0b-9418823802ac
└─sdb2 1 gpt part 4194432 3998639460352 FreeBSD ZFS bb6c9936-396b-11ee-bf0b-9418823802ac
sdc WDC WD40EFPX-68C6CN0 1 gpt disk 4000787030016
├─sdc1 1 gpt part 128 2147483648 FreeBSD swap 558fa116-154c-11ee-94a9-9418823802ac
└─sdc2 1 gpt part 4194432 3998639460352 FreeBSD ZFS 55a4f47b-154c-11ee-94a9-9418823802ac
sdd WDC WD40EZAZ-00SF3B0 1 gpt disk 4000787030016
├─sdd1 1 gpt part 128 2147483648 FreeBSD swap e3d84d05-1294-11ee-a2c5-9418823802ac
└─sdd2 1 gpt part 4194432 3998639460352 FreeBSD ZFS e3f3f4ec-1294-11ee-a2c5-9418823802ac
sde USB DISK 3.0 1 gpt disk 31029460992
├─sde1 1 gpt part 4096 1048576 BIOS boot 4f886359-e194-4d63-9e19-2e884fcb0f83
├─sde2 1 gpt part 6144 536870912 EFI System c9ce6147-dbcc-427d-84f2-5825e256e9a4
└─sde3 1 gpt part 1054720 30489427456 Solaris /usr & Apple ZFS 6c05e08f-2e39-43e7-835e-eaf0e47602d7
sudo zpool status -v
pool: MyNAS
state: ONLINE
status: One or more devices are configured to use a non-native block size.
Expect reduced performance.
action: Replace affected devices with devices that support the
configured block size, or migrate data to a properly configured
pool.
scan: scrub repaired 0B in 18:23:19 with 0 errors on Sun Feb 16 18:23:30 2025
config:
I did try using a Transcend 256GB USB SSD but TrueNAS SCALE would not boot. It only booted when I reverted to my original 32GB USB flash drive.
But now I’ll give it another try.
And yes, I do not have a spare SATA port. I did succeed in booting from a 256GB NVMe SSD, but that was a waste of a precious SATA port. Thanks
A very unsafe stripe pool (lose any drive, lose ALL).
One SMR drive (EFAX).
And I don’t even know how this could have happened. ashift=9 ???
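If you want to confirm, the actual per-vdev ashift can be read from the pool configuration with zdb, something like this (on SCALE you may need to point it at the cache file, usually /data/zfs/zpool.cache):
sudo zdb -C MyNAS | grep ashift                            # 9 = 512-byte sectors, 12 = 4K sectors
sudo zdb -U /data/zfs/zpool.cache -C MyNAS | grep ashift   # fallback if zdb cannot find the pool on its own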
To replace a drive you absolutely need an extra SATA port.
What you can do is remove a drive (vdev) from the GUI, which will copy its data to the other drives (the SMR drive will make that a pain…); when it’s done, you can physically remove the drive and add a new drive to the pool. The result will be an unbalanced pool, with three older drives that are quite full and one new, empty drive. (You could just as well make the new drive a separate single-drive pool, which would at least contain the damage in case of a failure.)
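(For reference, what the GUI does there is a top-level vdev removal; the CLI equivalent would be roughly the following, with a placeholder device name, and the evacuation progress shows up in zpool status:)
sudo zpool remove MyNAS <device-to-remove>   # placeholder; prefer doing this from the GUI
sudo zpool status MyNAS                      # reports the removal/evacuation progress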
Really, the best way would be to rebuild your storage from scratch.
See - having hard data helps a LOT in understanding what is going on.
@etorix is absolutely right about the non-redundant striped pool: not only is it “lose one disk, lose ALL”, but you also don’t get any error recovery if you ever hit checksum errors.
However, all is not lost. With a striped pool like this you can remove a vdev if there is sufficient space to move its data to the other drives - and you have almost 5.5TB free and only need to remove a 2TB drive. So you should be able to remove the 2TB drive from the UI and wait for the data to be moved. This should also fix the block size issue. You will be left with a 3x 4TB striped pool with c. 3.5TB of free space - still completely non-redundant and risky, but with one SATA port free that you can use to reconfigure the pool.
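Before starting, it is worth double-checking that the data really will fit on the remaining drives; a quick way to see per-vdev allocation (the free-space figures above are my estimates, not taken from your output):
sudo zpool list -v MyNAS   # per-vdev SIZE/ALLOC/FREE; the 2TB drive's ALLOC must fit in the other drives' FREE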
If you are prepared to live with the (really pretty significant) risk of losing all your data when any one of the drives fails, you could replace the 2TB drive with a 4TB drive and use the UI to add it as a new vdev; you would then have c. 7.5TB of free space on a non-redundant pool.
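(For reference, the UI's add-vdev step amounts to a plain zpool add of the new disk as another single-disk, non-redundant vdev; with a placeholder disk id it would look like:)
sudo zpool add MyNAS /dev/disk/by-id/ata-NEW-4TB-DISK   # placeholder id; prefer the UI so the middleware stays in sync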
Alternatively, a more expensive UI-achievable solution is to move to a 2-vdev mirrored pool using 2 of the existing 4TB CMR drives as one mirror vdev and 2x new 8TB/12TB drives as a second mirror vdev. You would retain the 2TB drive and the 4TB EFAX drive for non-NAS usage. You could potentially return the new 4TB drive you already have for a refund. But you would need to buy 2 new NAS-spec drives of at least 8TB each.
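If it helps to visualise the target, that layout is the equivalent of a pool built like this (sketch only, with placeholder pool and disk names; on TrueNAS you would build it through the UI wizard rather than at the CLI):
sudo zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-4TB-CMR-1 /dev/disk/by-id/ata-4TB-CMR-2 \
  mirror /dev/disk/by-id/ata-12TB-NEW-1 /dev/disk/by-id/ata-12TB-NEW-2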
If this is affordable, then sort out your hardware and we can help you achieve it.
All other solutions will also require more disk purchases, plus either moving the data elsewhere over the network and then back again (which assumes 13TB of spare space on other systems), or considerable technical CLI expertise, or borrowing a motherboard with more SATA ports from someone.
For example, if you replace the EFAX drive with another new 4TB CMR drive, there is a pretty lengthy and very technical command-line approach to migrating to RAIDZ1, which would leave you with a 4x 4TB RAIDZ1 pool with c. 12TB of usable space and c. 3.5TB free. But I am very unclear whether @madin3 has the technical skills to carry this off, and you would have less free space than at present. If you want a RAIDZ solution with more spare space, you would need to buy all new drives: at least 4x 6TB for RAIDZ1, and significantly bigger drives to move to RAIDZ2.
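To give a flavour of what that involves (heavily simplified, with placeholder device names, and not the complete sequence): the core trick is to build a deliberately degraded 4-wide RAIDZ1 using a sparse file standing in for the disk you do not yet have free, copy the data across, and only then hand the real disk over.
sudo truncate -s 4T /root/fake-member.img                   # sparse placeholder roughly the size of the real disks
sudo zpool create -o ashift=12 newpool raidz1 \
  /dev/disk/by-id/ata-4TB-A /dev/disk/by-id/ata-4TB-B \
  /dev/disk/by-id/ata-4TB-C /root/fake-member.img
sudo zpool offline newpool /root/fake-member.img            # run degraded so the file is never actually written
sudo zfs snapshot -r MyNAS@migrate
sudo zfs send -R MyNAS@migrate | sudo zfs recv -F newpool   # copy all datasets and snapshots
# verify the copy, destroy the old pool, then give its freed 4TB disk to the new pool:
sudo zpool replace newpool /root/fake-member.img /dev/disk/by-id/ata-4TB-D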
If you want more SATA ports for more internal drives, then you will need a new MB as well as a new case - in other words an entirely new server.
So to some extent, this is probably a good point at which to choose between a small hardware upgrade within the existing box, or building and migrating to an entirely new server which will meet your needs for e.g. the next decade.
Looks that way. Will shop around and when I do have my new gear will reach out for your excellent help. Any suggestions on hardware would be most welcome. Fingers crossed until then. Thanks a bunch all.
That depends on your use case and requirements…
Just serving files?
How much space/how many drives do you want?
Your current server is not bad, as long as 4 drives is enough.
The issue is that the drives are generally small by today’s standards, probably old, and one of them is SMR. You need a set of (much) bigger drives (say, 4x 12 TB, giving 24 TB raw / ca. 20 TB usable as raidz2 or a stripe of mirrors) and then a strategy to migrate.
Exactly. I agree with this and pretty much everything else @etorix said, except that with 24TB of storage you will get c. 20TiB usable, which is not that much less.
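For anyone following the arithmetic (rough numbers): with 4x 12TB in RAIDZ2 or two mirror pairs, (4 - 2 for redundancy) x 12 TB = 24 TB = 24 x 10^12 bytes ≈ 21.8 TiB, and after ZFS metadata and slop space you end up with roughly 20 TiB of space you can actually use.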
4x 12TB as mirror pairs is an easy migration.
Or, just as easily, you could start with 2x new 12TB drives and reuse the 2x 4TB CMR drives, giving 16TB of usable space, and then buy 2 more 12TB drives to replace the 4TB drives later when you need more space.
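The later upgrade is also straightforward: with autoexpand on, you replace each 4TB drive in its mirror with a 12TB one, wait for the resilver, and the vdev grows once both have been swapped. A rough sketch with placeholder pool/disk names (on TrueNAS you would do the replacement from the UI):
sudo zpool set autoexpand=on tank
sudo zpool replace tank /dev/disk/by-id/ata-4TB-CMR-1 /dev/disk/by-id/ata-12TB-NEW-3   # resilvers; watch zpool status
sudo zpool replace tank /dev/disk/by-id/ata-4TB-CMR-2 /dev/disk/by-id/ata-12TB-NEW-4   # after the first finishes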
It will only get more difficult to migrate if you want to move to RAIDZ.