I tried to run them, but they return nothing: the HBA 9400 does not use the sas2flash or sas3flash tools; instead it uses storcli.
Here is the output:
sudo sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved
No LSI SAS adapters found! Limited Command Set Available!
ERROR: Command Not allowed without an adapter!
ERROR: Couldn't Create Command -list
Exiting Program.
sudo sas3flash -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02)
Copyright 2008-2017 Avago Technologies. All rights reserved.
No Avago SAS adapters found! Limited Command Set Available!
ERROR: Command Not allowed without an adapter!
ERROR: Couldn't Create Command -list
Exiting Program.
sudo storcli /c0 show personality
CLI Version = 007.2807.0000.0000 Dec 22, 2023
Operating system = Linux 6.6.44-production+truenas
Controller = 0
Status = Failure
Description = Un-supported Command
So it looks like the personality command requires MR (MegaRAID?) firmware to work. One way to interpret that command failing would be that it's not running MR firmware…
I think we just need to focus on the drive sizing issue. One drive reports slightly smaller than the others, even though two of the drives report the same model above in post #15: 931.512 GB vs 931.511 GB. I don't know whether taking the drive out, wiping it clean, and putting it back into the pool is the correct way to fix the issue.
Taking bets now: it's a rogue NTFS-related partition that wasn't properly wiped by TrueNAS when it created the pool. Because we seem to be drowning in those right now.
Well, I have done that. I wiped it completely by filling every sector with random data, and it still reports a different size. I am running a test right now to see if there are any bad sectors; I will update you with the result.
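As an aside, a wipe rewrites the drive's contents but cannot change the capacity the firmware reports, which is why the size mismatch survives a random-fill. That can be demonstrated on a scratch file (the path and sizes below are illustrative, not from this thread):

```shell
# Demonstration on a throwaway file, NOT a real disk (path is hypothetical).
# Overwriting every byte with random data leaves the reported size unchanged,
# which is why wiping a drive cannot fix a capacity mismatch.
truncate -s 10M /tmp/scratch.img
dd if=/dev/urandom of=/tmp/scratch.img bs=1M count=10 conv=notrunc status=none
stat -c %s /tmp/scratch.img
# prints 10485760
```

The same logic applies to a real /dev/sdX: a random-fill changes the contents, not the drive's advertised LBA count.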
sudo parted /dev/sde print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sde: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 1000GB 1000GB data
1000GB 1000GB 695kB Free Space
sudo parted /dev/sdf print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdf: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 1000GB 1000GB zfs data
1000GB 1000GB 729kB Free Space
sudo parted /dev/sdf unit B print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdf: 1000204886016B
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17408B 1048575B 1031168B Free Space
1 1048576B 1000204140543B 1000203091968B zfs data
1000204140544B 1000204869119B 728576B Free Space
sudo parted /dev/sdd unit B print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdd: 1000203804160B
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17408B 1048575B 1031168B Free Space
1 1048576B 1000203091967B 1000202043392B data
1000203091968B 1000203787263B 695296B Free Space
I noticed that after the reboot the drive name changed from sde to sdd. Is that normal behavior?
For now I created a pool with the three good drives. I think I will buy a new one to replace this drive and add it to my vdev later, but I am really curious why this happens and why one hard drive shows less space than the others.
Yes, that's normal. Device names are not necessarily static across reboots.
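If you want names that do survive reboots, udev maintains persistent symlinks under /dev/disk/by-id on a typical Linux setup (the glob pattern below is an assumption; the exact prefixes depend on your controller and drives):

```shell
# Sketch: list persistent by-id links and the kernel device each points at.
# Unlike /dev/sdX, these links are derived from model/serial and stay stable.
for link in /dev/disk/by-id/*; do
  [ -e "$link" ] || continue       # skip if the directory is empty or absent
  printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```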
Other than that I don't know what to tell you; one of your drives appears to be 1081856 bytes (or roughly 1 megabyte) smaller than the rest. TrueNAS should be able to handle a difference like that with some strategic padding, should it not?
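The 1081856-byte figure falls straight out of the whole-disk byte counts in the parted output above (sdf vs sdd):

```shell
# Whole-disk sizes in bytes, copied from the parted output above.
size_sdf=1000204886016
size_sdd=1000203804160
echo "disk difference: $(( size_sdf - size_sdd )) bytes"
# prints: disk difference: 1081856 bytes
```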
If the pool was created using the TrueNAS UI, I'd bug report it.
On second look, the second (smaller) drive's main partition's file system is not identified as "zfs" by parted, for some reason. I don't know if that's relevant.
This is just a guess, but I think what is happening is this:
The partitions themselves are different sizes. In normal circumstances TrueNAS will create identically sized partitions when it creates a pool, so it is just flagging this as an anomaly.
ZFS actually uses the size of the smallest partition (or, if whole disks or files are used, the smallest disk or file) for the purposes of the space it actually uses.
So this message is not a significant error, just something unusual enough to be worth flagging in case it happened as the result of some unplanned, uncommanded change.
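That "smallest member wins" rule can be checked against the partition byte counts from the parted output earlier in the thread; the sdf and sdd data partitions differ by exactly 1 MiB, and ZFS would size each member by the smaller of the two:

```shell
# Data-partition sizes in bytes, from the parted output earlier in the thread.
part_sdf=1000203091968
part_sdd=1000202043392
min=$part_sdf
[ "$part_sdd" -lt "$min" ] && min=$part_sdd
echo "per-disk usable size: $min bytes"
echo "partition difference: $(( part_sdf - part_sdd )) bytes"
# prints: per-disk usable size: 1000202043392 bytes
# prints: partition difference: 1048576 bytes
```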