Pool shows Mixed Capacity

I tried to run them, but they return nothing, since the HBA 9400 does not use the sas2flash or sas3flash tools; instead it uses storcli.

Here is the output:

sudo sas2flash -list

LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18) 
Copyright (c) 2008-2014 LSI Corporation. All rights reserved 

        No LSI SAS adapters found! Limited Command Set Available!
        ERROR: Command Not allowed without an adapter!
        ERROR: Couldn't Create Command -list
        Exiting Program.

sudo sas3flash -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        No Avago SAS adapters found! Limited Command Set Available!
        ERROR: Command Not allowed without an adapter!
        ERROR: Couldn't Create Command -list
        Exiting Program.

sudo storcli /c0 show personality
CLI Version = 007.2807.0000.0000 Dec 22, 2023
Operating system = Linux 6.6.44-production+truenas
Controller = 0
Status = Failure
Description = Un-supported Command

If anyone needs a primer on the StorCLI commands:

StorCLI for Broadcom / LSI cards
https://forums.servethehome.com/index.php?resources/broadcom-lsi-avago-storcli-reference-user-guide.42/
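
For reference, the basic inventory commands look something like this (assuming the card shows up as controller 0):

sudo storcli show          # list every controller the tool can see
sudo storcli /c0 show      # summary for controller 0
sudo storcli /c0 show all  # full details, including firmware versions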

Thanks.

So it looks like the personality command requires MR (MegaRAID?) firmware to work. One way to interpret that command failing would be that it's not running MR firmware…

Maybe storcli /c0 show all would tell more, including the kind of firmware?
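
Something along these lines would cut the (very long) output down to the relevant bits; the grep pattern is just a guess at the field names:

sudo storcli /c0 show all | grep -i -E 'firmware|personality|product'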

I think we just need to focus on the drive sizing issue. One drive reports slightly smaller than the others, even though two of the drives report the same model above in post #15: 931.512 GB vs 931.511 GB. I don't know whether taking the drive out, wiping it clean, and putting it back into the pool is the correct way to fix the issue or not.
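
For what it's worth, smartctl can show the raw capacity each drive advertises, which would tell us whether the difference is in the hardware or the partitioning (X is a placeholder for each drive letter):

sudo smartctl -i /dev/sdX   # check the "User Capacity" line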

Thoughts?

That's fair.

Taking bets now, it's a rogue NTFS-related partition that wasn't properly wiped by TrueNAS when it created the pool. Because we seem to be drowning in those right now.

Well, I have done that: I wiped it completely by filling every sector with random data, and the same thing happens; it still reports a different size. I am running a test right now to see if there are any bad sectors, and I will update you with the results.
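
Roughly like this, with sdX standing in for the actual device:

# fill the whole drive with random data (destroys everything on it)
sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress
# read-only scan for unreadable sectors
sudo badblocks -sv /dev/sdX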

Hmm, so you did.
Does parted show anything interesting if you compare one of the 931.512 drives with the 931.511?

sudo parted /dev/sdX print free
Run it twice, replacing X with the appropriate letter.

sudo parted /dev/sde print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sde: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  1000GB  1000GB               data
        1000GB  1000GB  695kB   Free Space

sudo parted /dev/sdf print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdf: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  1000GB  1000GB  zfs          data
        1000GB  1000GB  729kB   Free Space

It kind of looks like it's complaining about a 34 kB difference, but that might be obscured by the rounding of the units.

Can you please rerun the commands adding "unit B" like so, so we get the result in bytes instead of GB:
sudo parted /dev/sdf unit B print free

If the zfs rows are identically sized, I would report it as a bug in the trigger for the "Mixed Capacity" warning message.
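
If it helps, parted also has a machine-readable mode that prints colon-separated fields, which makes the two drives easy to diff:

sudo parted -m /dev/sdX unit B print free   # run once per drive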

sudo parted /dev/sdf unit B print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdf: 1000204886016B
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start           End             Size            File system  Name  Flags
        17408B          1048575B        1031168B        Free Space
 1      1048576B        1000204140543B  1000203091968B  zfs          data
        1000204140544B  1000204869119B  728576B         Free Space

sudo parted /dev/sdd unit B print free
Model: ATA ST1000DM003-1SB1 (scsi)
Disk /dev/sdd: 1000203804160B
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start           End             Size            File system  Name  Flags
        17408B          1048575B        1031168B        Free Space
 1      1048576B        1000203091967B  1000202043392B               data
        1000203091968B  1000203787263B  695296B         Free Space

I noticed that after the reboot the drive name changed from sde to sdd. Is that normal behavior?

For now I have created a pool with the three good drives. I think I will buy a new one to replace this drive and add it to my vdev later, but I am really curious why this happened and why one hard drive shows less space than the others.

Yes, that's normal. Device names are not necessarily static across reboots.
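
If you want identifiers that do survive reboots, the /dev/disk/by-id symlinks are stable; each one points at whatever sdX name the drive currently has:

ls -l /dev/disk/by-id/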

Other than that I don't know what to tell you: one of your drives appears to be 1081856 bytes (or roughly 1 megabyte) smaller than the rest. TrueNAS should be able to handle a difference like that with some strategic padding, should it not?
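
(That figure is just the difference between the two disk totals parted printed:)

echo $((1000204886016 - 1000203804160))   # prints 1081856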

If the pool was created using the TrueNAS UI, I'd report it as a bug.

On second look, the second (smaller) drive's main partition's file system is not identified as "zfs" by parted, for some reason. I don't know if that's relevant.

This is just a guess, but I think what is happening is this:

  1. The partitions themselves are different sizes. In normal circumstances TrueNAS will create identically sized partitions when creating or extending a pool, so it is just flagging this as an anomaly.
  2. ZFS actually uses the size of the smallest partition (or, if given whole disks or files, the smallest disk or file) for the space it actually uses; the sketch below is one way to check this.

So this message is not a significant error, just something unusual enough that it is worth flagging in case it happens as the result of some unplanned, uncommanded change.
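
A quick way to check both points, assuming the pool is named tank (substitute your own pool name): zpool list -v shows the usable size ZFS settled on for the vdev, and lsblk prints the raw partition sizes in bytes:

sudo zpool list -v tank
lsblk -b -o NAME,SIZE,TYPE /dev/sdd /dev/sdf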
