My hardware
OS: TrueNAS
Chassis: NOX Coolbay VX
Motherboard: Supermicro X11SRM-VF
CPU: Intel Xeon W-2135 6C 12T @ 3.70 GHz 140W
CPU Cooler: Noctua NH-C14S (Noctua 140mm NF-A14 PWM fan)
PSU: XFX TS 750W
RAM: 256GB Samsung (4 x 64GB ECC LRDIMM DDR4-2666)
Boot pool: 2 x 64GB Innodisk 3ME4 SATA SSD (mirror)
Pool 1: 5 x 4TB WD RED WD40EFRX HDD (storage pool - RAIDZ2) Use: Samba sharing
Pool 2: 4 x 1TB Use: iSCSI block sharing (mirror)
SLOG: 2 x Intel Optane 900p 280GB (mirrored - for pool 2)
Pool 3: 2 x WD Black 750 NVMe SSD 512GB (mirror) Use: TrueNAS Jails/VMs
HBA: LSI 9300-16i with IT-mode firmware (up to 16 SATA drives)
Network: Fujitsu D2755-A11 2x 10GbE SFP+ (Intel 82599E chipset) (Use: Samba and TrueNAS jails/VMs access)
Network: Mellanox MCX4121A-ACAT CX4121A Dual-Port ConnectX-4 Lx 25GbE (Use: iSCSI block sharing against VMware ESXi)
Case fan: Coolermaster Silencio FP 120 PWM
Exhaust Fan: 120mm Nox cooler
Fan Control: Hybrid CPU & HD Fan Zone Controller Script
UPS-backed
TrueNAS CORE 13.0-U6.1, freshly installed.
Situation: This is a new build, replacing my old TrueNAS box. I reused most of the hardware but upgraded a good part of it, so the storage pool and the VMware pool already existed before. I successfully imported both the storage pool and the block-sharing pool.
The VMware pool is a mirror of da1 and da2, and its status is healthy. Now I want to add da0 and da3 to the mirror, but:
- I’m not sure if this is possible.
- Even if it is possible, I can’t see the drives da0-da4 listed under Storage → Disks, so I’m unable to select them.
The disks show up with the command camcontrol devlist.
The disks also show up with the command geom disk list.
I did
gpart destroy -F /dev/da0
gpart destroy -F /dev/da3
service middlewared restart
The idea was to see if they would show up. Nothing.
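For reference, the middleware’s own disk table (which should be what the GUI is built from) can be dumped like this; I’m assuming the disk.query method is available through midclt, as it normally is on CORE:
# dump the middleware disk table
midclt call disk.query
# same data, pretty-printed
midclt call disk.query | python3 -m json.tool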
Then I did
gpart create -s gpt /dev/da0
gpart create -s gpt /dev/da3
service middlewared restart
Still nothing. I restarted the entire system, and still nothing.
If I export/import the VMware pool, the pool status lists da1 and da2, but nothing about da0 and da3.
If I export the pool, I still can’t see da0-da4 under Disks. But when I go to import the pool, it shows up normally.
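For what it’s worth, the vdev membership shown by the GUI can be cross-checked from the shell while the pool is imported (a read-only check, nothing is changed):
zpool status <pool_name>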
Thanks for reading! bye
OK then, the disks still do not show up under Storage → Disks, but I managed to improve the pool. I decided to expand the 2-way mirror to a 3-way mirror.
What I did:
- Delete the partition table
gpart destroy -F /dev/da0
gpart destroy -F /dev/da3
- Create the partition table
gpart create -s gpt /dev/da0
gpart create -s gpt /dev/da3
- Add two partitions (the same way TrueNAS would do it automatically)
gpart add -a 4k -i 1 -s 2g -t freebsd-swap /dev/da0
gpart add -a 4k -i 2 -t freebsd-zfs /dev/da0
gpart add -a 4k -i 1 -s 2g -t freebsd-swap /dev/da3
gpart add -a 4k -i 2 -t freebsd-zfs /dev/da3
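To double-check the layout and find the gptid labels that the attach needs, something like this should work (the gptid backing the freebsd-zfs partition, da0p2 / da3p2, is the one to use):
# verify the new partition layout
gpart show da0
gpart show da3
# list the gptid labels and their backing partitions
glabel status | grep -E 'da0p2|da3p2'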
And then
zpool attach <pool_name> /dev/gptid/<existing_member> /dev/gptid/<new_partition>
Once the last command finished, the GUI almost instantly started resilvering the new disk into the pool. Once the resilver finished, I did
service middlewared restart
And now I see da0, da1 and da2 as members under the pool status, but I still don’t see them (da0-da4) under Storage → Disks.
Is there a way to “force” how the GUI recognizes the disks?
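Maybe something along these lines? I haven’t tried it yet, and I’m only assuming disk.sync_all is the middleware method that resyncs the disk table the GUI reads from:
# ask the middleware to resync its disk database with what the kernel reports
midclt call disk.sync_all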
Thanks for reading
I figured you had done more; your posting was quite good at providing a lot of information.
I’m stumped. The only things I could suggest are to upgrade to TrueNAS 13.3, although I doubt that would have any impact, or to upgrade to SCALE to see if that has any effect. If SCALE fails to see them as well, there must be some hardware incompatibility.
With all that said, DO NOT UPGRADE YOUR ZFS FEATURE FLAGS, as it will prevent you from restoring to the original configuration should you choose to.
You could submit a bug report, however I’m not sure if iX will put in the effort to fix it, should a fix exist. Nothing against them; SCALE is the path they are taking, and they have good reasons for that. Someday I will switch to SCALE completely as well.
If you find a solution, please post it. This is an odd problem and while I doubt many people will have it, someone might.
Yeah, I’m not very comfortable switching over to GNU/Linux. In fact, 5 years ago I migrated all my “infrastructure” to FreeBSD on purpose. I use pfSense as my router, TrueNAS as my NAS and VMware ESXi as my VM provider.
I’m very happy with CORE as it is. I don’t need SCALE features at all, but well… it looks like the trend is unavoidable.
I can live with this bug, or whatever it is, for now. I will look at it again after a few updates before making any big moves.
And no, I didn’t update the flags, but for a reason: I don’t fully understand what those flags are, or why I would need to update them. So first I will read about that and then make a move.
Thanks for answering! Have a nice weekend.
They are indications of which features the current ZFS version supports. Take a look at the link provided below.
The issue with accepting the new feature flags is that you will no longer be able to use an earlier version of ZFS (an earlier TrueNAS in this case), so you are stuck on the current or later versions. So I tell everyone not to upgrade those feature flags unless they have a specific need to, and even then to wait a month or longer to make sure there is no reason to roll back to the previous version.
https://openzfs.github.io/openzfs-docs/Basic%20Concepts/Feature%20Flags.html
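If you just want to see where a pool stands without changing anything, these read-only checks are safe (neither command enables a flag when run like this):
# list pools that do not yet have all supported features enabled
zpool upgrade
# show the state of every feature flag on a specific pool
zpool get all <pool_name> | grep feature@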
I’m with you 100%. I am very comfortable with CORE, however I have a second test system that has SCALE on it and I’m learning.