I tried to find some mention of this issue before posting, but I haven't been able to find anything; I may not be searching for the right terminology.
My issue is that several of my zpools are not showing up or accessible after upgrading to ElectricEel-24.10.0.2. The zpools are back once I revert to my previous boot environment.
I was able to upgrade my second server without any issue. Granted, the ZFS topology on that second server is different.
Here is my zpool status output for Dragonfish-24.04.2.5, prior to upgrading:
# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 03:58:36 with 0 errors on Sun Nov 3 02:58:42 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        data                                      ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            db871200-84a8-44de-bd88-1569a611649c  ONLINE       0     0     0
            5dfbc200-046e-44f2-92eb-53304e9a569f  ONLINE       0     0     0
            83d1582e-8dc2-432b-8f9c-6f517a204b8a  ONLINE       0     0     0
            6e9300cf-d457-45b1-a316-ec4acb795ad6  ONLINE       0     0     0

errors: No known data errors

  pool: docker_data
 state: ONLINE
  scan: scrub repaired 0B in 00:04:33 with 0 errors on Sun Dec 1 00:04:35 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        docker_data                               ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            e3a27bda-a9c4-4e1e-9b09-4539b400c24e  ONLINE       0     0     0
            a910de4a-6436-4ff8-9e8d-09cddfcc938d  ONLINE       0     0     0
            01aa132f-1cee-48b7-bf35-c165fc6812b2  ONLINE       0     0     0

errors: No known data errors

  pool: download
 state: ONLINE
  scan: scrub repaired 0B in 00:00:33 with 0 errors on Sun Dec 1 00:00:36 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        download                                  ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            c7fdda47-2258-4011-883b-d605b738d5d6  ONLINE       0     0     0
            ffdccea3-7a8a-4673-807c-12819d9f0fc4  ONLINE       0     0     0
            1816451d-b2a4-49c9-8be1-723744010e8c  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:53 with 0 errors on Sun Dec 1 03:45:55 2024
config:

        NAME                                         STATE     READ WRITE CKSUM
        freenas-boot                                 ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            ata-SATA_SSD_67F40763162400186094-part2  ONLINE       0     0     0
            ata-SATA_SSD_AF34075A182400165065-part2  ONLINE       0     0     0

errors: No known data errors

  pool: media-01
 state: ONLINE
  scan: scrub repaired 0B in 13:47:59 with 0 errors on Sun Nov 3 12:48:03 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        media-01                                  ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            b2003187-41b2-4350-b3a2-baac7c09da67  ONLINE       0     0     0
            77093694-7838-4de6-942c-7b6619413440  ONLINE       0     0     0
            7ce9120d-a90b-4cf9-9981-67d899da5ea2  ONLINE       0     0     0
          raidz1-1                                ONLINE       0     0     0
            0507ee9f-2b57-4429-9f74-7cd98ac3d070  ONLINE       0     0     0
            04574f3f-8c2a-4eea-a098-b16bf7fec0e5  ONLINE       0     0     0
            0e2c81f0-9af2-42ff-ab72-a124d06e39f5  ONLINE       0     0     0

errors: No known data errors

  pool: sec_vids
 state: ONLINE
  scan: scrub repaired 0B in 00:19:10 with 0 errors on Sun Dec 1 00:19:13 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        sec_vids                                  ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            65b22c9b-7552-47a8-b29c-c28942a472f0  ONLINE       0     0     0
            85d9ea89-680d-4629-9da7-dd4cdeab824c  ONLINE       0     0     0

errors: No known data errors
Here is my zpool status output for ElectricEel-24.10.0.2, after upgrading:
# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 03:58:36 with 0 errors on Sun Nov 3 02:58:42 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        data                                      ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            db871200-84a8-44de-bd88-1569a611649c  ONLINE       0     0     0
            5dfbc200-046e-44f2-92eb-53304e9a569f  ONLINE       0     0     0
            83d1582e-8dc2-432b-8f9c-6f517a204b8a  ONLINE       0     0     0
            6e9300cf-d457-45b1-a316-ec4acb795ad6  ONLINE       0     0     0

errors: No known data errors

  pool: docker_data
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:04:33 with 0 errors on Sun Dec 1 00:04:35 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        docker_data                               DEGRADED     0     0     0
          raidz1-0                                DEGRADED     0     0     0
            e3a27bda-a9c4-4e1e-9b09-4539b400c24e  ONLINE       0     0     0
            a910de4a-6436-4ff8-9e8d-09cddfcc938d  ONLINE       0     0     0
            14829080919323958346                  UNAVAIL      0     0     0  was /dev/disk/by-partuuid/01aa132f-1cee-48b7-bf35-c165fc6812b2

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:53 with 0 errors on Sun Dec 1 03:45:55 2024
config:

        NAME                                         STATE     READ WRITE CKSUM
        freenas-boot                                 ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            ata-SATA_SSD_67F40763162400186094-part2  ONLINE       0     0     0
            ata-SATA_SSD_AF34075A182400165065-part2  ONLINE       0     0     0

errors: No known data errors
You probably want to start by checking whether the disks actually show up.
Full hardware details, please.
Output from lsblk on both Dragonfish and ElectricEel.
How are the missing pools' disks wired to the server?
How are the working pools' disks wired to the server?
Certain connection methods, like USB or Thunderbolt, are not recommended. Some people have no trouble whatsoever, while others start off fine with USB-attached data pool disks and only run into trouble later. Perhaps like this.
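To gather the disk and controller details above, something like this from a shell on each boot environment should do it (a sketch; adjust the lsblk columns as needed):

lsblk -o NAME,SIZE,MODEL,SERIAL,PARTUUID   # disks the kernel sees, with the partuuids zpool status references
lspci -nnk | grep -iA3 lsi                 # HBAs present and which kernel driver claimed them

Comparing the Dragonfish and ElectricEel output side by side should show whether entire controllers, or just individual disks, are going missing.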
Thank you for reaching out. It does look like my drives are not coming online for some reason. I am using a 24-bay Supermicro chassis with a SAS backplane. I have two LSI SAS controllers covering 16 bays and the motherboard's SATA ports for the remaining drives. The missing drives are connected to both the LSI controllers and the motherboard SATA ports. I have moved drives around with no change in outcome. I do not have any USB-connected drives. The OS is loaded on two SATA DOMs connected directly to the motherboard.
I also have four 4TB NVMe SSDs installed on two Sabrent PCIe adapters. However, those drives are still present on ElectricEel.
The LSI controllers are visible via lspci on both Dragonfish and ElectricEel.
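One way to confirm that the driver has actually bound to the cards, rather than the cards merely enumerating on the bus, is something like this (1000 is the LSI/Broadcom PCI vendor ID):

lspci -nnk -d 1000:

Each HBA should show a "Kernel driver in use: mpt3sas" line underneath it; if that line shows up on Dragonfish but not on ElectricEel, the problem is on the driver side rather than the hardware side.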
I am not an expert on LSI SAS controllers, but I would check the firmware requirements for both Dragonfish and ElectricEel. Who knows, maybe you need to perform a firmware update.
Also (again, I am no expert), some people have said there are LSI SAS card knockoffs: cards built around barely functional LSI SAS controller chips that failed the manufacturer's testing, where someone got hold of the chips, made PCIe cards with them, and sold them as if they were fully functional. There are hints about them here in the forums (both LSI SAS cards and Intel NIC cards).
I finally had some time to play with this. Long story short, SAS9211 support seems to have been dropped from the mpt3sas module packaged with ElectricEel 24.10, which uses the Linux 6.6.44 kernel; Dragonfish used the Linux 6.6.32 kernel. I found others having issues with these 9211s in various forums. Some have been able to get them to work by recompiling the older version of the module, but that is not something I want to do. Unless the TrueNAS team decides to ship the older module or modify the new one, I would be redoing that manually every time a new update is released. I am not about that life…!
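For anyone who wants to compare for themselves, modinfo can query a specific kernel's module tree, so both versions can be checked without rebooting, assuming both module trees are present under /lib/modules (the version strings below are examples; use whatever ls /lib/modules shows):

modinfo -k 6.6.32-production+truenas mpt3sas | grep -E '^(filename|version)'
modinfo -k 6.6.44-production+truenas mpt3sas | grep -E '^(filename|version)'

If only one module tree is installed, running modinfo mpt3sas from each boot environment in turn gives the same information.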
I hate to unmark a “solved” thread, but this is incorrect; there's no problem with the SAS2008 cards under Linux. @jrgx19, you're experiencing a problem with your motherboard/BIOS PCI memory reservation, as shown by the
can't reserve [mem 0xADDRESSRANGE 64bit]
errors in your dmesg.
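A quick way to check for these on an affected system (a sketch):

dmesg | grep -i "can't reserve"

If the flagged regions belong to the HBA's PCI address, the card never got its memory BAR mapped, and the driver cannot attach no matter which version it is.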
Results under 24.10.2.2
root@ts430[~]# uname -a
Linux ts430 6.6.44-production+truenas #1 SMP PREEMPT_DYNAMIC Mon May 5 14:30:30 UTC 2025 x86_64 GNU/Linux
root@ts430[~]# sas2flash -listall
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved
Adapter Selected is a LSI SAS: SAS2008(B2)
Num   Ctlr          FW Ver        NVDATA        x86-BIOS       PCI Addr
----------------------------------------------------------------------------
0     SAS2008(B2)   20.00.07.00   14.01.00.08   No Image       00:01:00:00
Finished Processing Commands Successfully.
Exiting SAS2Flash.
root@ts430[~]# modinfo mpt3sas
filename:       /lib/modules/6.6.44-production+truenas/kernel/drivers/scsi/mpt3sas/mpt3sas.ko
alias:          mpt2sas
version:        43.100.00.00
license:        GPL
description:    LSI MPT Fusion SAS 3.0 Device Driver
author:         Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>
srcversion:     E46E6B71C385F485C977A1B
And just for completeness, 25.04.1
root@ts430[~]# uname -a
Linux ts430 6.12.15-production+truenas #1 SMP PREEMPT_DYNAMIC Mon May 26 13:44:31 UTC 2025 x86_64 GNU/Linux
root@ts430[~]# sas2flash -listall
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved
Adapter Selected is a LSI SAS: SAS2008(B2)
Num   Ctlr          FW Ver        NVDATA        x86-BIOS       PCI Addr
----------------------------------------------------------------------------
0     SAS2008(B2)   20.00.07.00   14.01.00.08   No Image       00:01:00:00
root@ts430[~]# modinfo mpt3sas
filename:       /lib/modules/6.12.15-production+truenas/kernel/drivers/scsi/mpt3sas/mpt3sas.ko
alias:          mpt2sas
version:        48.100.00.00
license:        GPL
description:    LSI MPT Fusion SAS 3.0 Device Driver
author:         Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>
srcversion:     24AE11537BBB3E3EDDCB16D
PCI mapping issues can be worked around by adding an optional GRUB command-line parameter that disables dynamic PCI reallocation, but this is not a substitute for using hardware that does not have the problem in the first place.
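For reference, a sketch of that workaround on a Debian-style GRUB setup (pci=realloc=off is the documented kernel option for turning reallocation off; hand-edits to /etc/default/grub on TrueNAS may not survive an upgrade, so treat this as a diagnostic measure rather than a fix):

# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc=off"

# then regenerate the config and reboot:
update-grub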