We own a TrueNAS Core server (13.3) with 3 drive shelves (3x LSI PCIe HBAs, 12x 3.5" SATA HDDs each), totalling 36 drives. When the server was installed a few years ago, the first 12 drives were added and recognised as "normal" devices - no multipathing was applied to them. We update TrueNAS Core from time to time, and when space began to run out, we added another 12 drives. Those were (no matter what we tried) always recognised and configured as multipath devices. The same happened another year later when we added the last 12 drives - also multipath devices.
Multipathing is nothing we want, nothing we understand and nothing that makes it easier to work with TrueNAS Core - so we want it gone. Now that SCALE will be the "successor" of CORE, we want to migrate ASAP. In the old TrueNAS forum there are plenty of topics/postings about problems with existing multipath configs being migrated to SCALE - and they say that multipathing has been deprecated/removed since 13.0/13.3…
Could someone please tell us / help us understand how we can get rid of this annoying "feature" and remove/disable it completely on our running 13.3 server - without losing data - to ease and secure the migration to SCALE?
Note - we are not Unix/Linux professionals, so please go easy on us.
Thanks in advance - Yours GIGGLYBYTE
So it sounds like you have 3x LSI HBAs in your server head, each connecting to a 12-disk JBOD. The first 12 disks you added were SATA, which doesn't support multipathing, but presumably the additional drives you added were SAS, which does - hence why you have the current situation.
Could you provide system specs and also the output of zpool status? Could you also explain exactly how the HBAs are cabled to the JBODs, i.e. two cables per HBA to each JBOD, etc.?
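For reference, something like the following run in a shell on the CORE box would capture the relevant state in one go. It's all read-only (nothing touches the pools), and each command is guarded with command -v so the sketch doesn't fail hard if a tool is missing - just standard FreeBSD diagnostics, nothing TrueNAS-specific:

```shell
# Read-only diagnostics on TrueNAS CORE (FreeBSD 13.x).
# Each command is guarded so tools that aren't present are skipped.
command -v zpool      >/dev/null && zpool status        # pool layout and the device nodes each vdev uses
command -v gmultipath >/dev/null && gmultipath status   # GEOM multipath devices and their member paths
command -v camcontrol >/dev/null && camcontrol devlist  # every disk the kernel sees, one line per path
true  # keep the overall exit status 0 even when a tool is missing
```

Paste the output of all three here - gmultipath status in particular will show exactly which disks FreeBSD thinks have two paths.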
Hey there - thanks for answering… We have 3x LSI HBAs, each connected to 12x 3.5" SATA drives. There are and were NO SAS drives present in our system. All drive bays are connected to one HBA with one cable only. zpool status will be posted tomorrow, when I'm back at work.
Hi there, this is one of our multipath problems… those disks went unavailable the second we upgraded from 13.0-U6.7 to 13.3-U1.1. That is our 2nd problem - we want to reintegrate/fix those disks, because they are not dead at all. We don't know why they became unavailable.
I'm not understanding the multipath issue you mention, as all your drives are SATA and not capable of multipath. Also, you say the JBODs are only cabled to the HBAs via a single cable, making multipath impossible. A screenshot of the UI as mentioned above may help shed some more light.
Upgrading from 13.0 to 13.3 was only really designed for users of FreeBSD jails; everyone else was probably better off staying where they were on 13.0 or migrating to SCALE. Did you need the jail functionality?
Can you share the specs of your server, including the HBA model and the JBODs? Also, can you confirm, or even share a screenshot of, exactly how the SAS cables are physically connected between the HBAs and JBODs?
You are allowed to make fun of us now - there are 2 cables from each of the 3 HBAs to each of the 3 JBODs - sorry, we were 100% certain that there was only one - our bad…
Are you sure you don’t accidentally have one of the cables from HBA2 going into JBOD3 (as well as JBOD2) and one of the cables from HBA3 going into JBOD2 (as well as JBOD3)?
I'd be tempted to power down and remove one cable from each HBA. Power up and let's see how things look.
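Once it's recabled with a single path, one way to double-check that no disk is still visible twice is to compare serial numbers across device nodes - the same serial behind two daN devices means two paths to one physical disk. Here's a minimal sketch of the idea, using hypothetical device names and serials in place of real output (on the live system you'd collect the device/serial pairs from e.g. smartctl per device):

```shell
# Hypothetical "device -> serial" listing, standing in for output you'd
# gather from smartctl on each daN device. Two lines sharing a serial
# mean the same physical disk is reachable via two paths.
cat <<'EOF' | awk '{print $2}' | sort | uniq -d
da12 WD-AAAA1111
da13 WD-BBBB2222
da24 WD-AAAA1111
EOF
# prints WD-AAAA1111 : that disk is visible on two paths (da12 and da24)
```

After pulling the second cables, that pipeline run against the real listing should print nothing at all.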