Hi, Thanks for the suggestion(s).
I tried running `sudo dd if=/dev/zero of=/dev/sdX bs=1M count=10 status=progress` and it appeared to run okay. Afterwards, however, the symptoms were exactly the same as before.
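One thing I've since read (and I'm not certain it applies here) is that ZFS keeps backup copies of its labels at the end of the disk as well as the start, so zeroing only the first 10 MiB may leave them intact. A fuller version of the same command would zero both ends of the drive; /dev/sdX is of course a placeholder for the real device:

```shell
# /dev/sdX is a placeholder - substitute the real device from Storage > Disks
DISK=/dev/sdX

# Disk size in 1 MiB blocks
BLOCKS=$(( $(sudo blockdev --getsize64 "$DISK") / 1048576 ))

# Zero the first 10 MiB (partition table and front ZFS labels)
sudo dd if=/dev/zero of="$DISK" bs=1M count=10 status=progress

# Zero the last 10 MiB (ZFS keeps two backup labels at the end of the disk)
sudo dd if=/dev/zero of="$DISK" bs=1M count=10 seek=$(( BLOCKS - 10 )) status=progress
```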
Specifically, with any of the ‘problem child’ disks as the only HDD connected to the TrueNAS, whether via the HBA card or directly to the motherboard, that HDD appeared in Storage and was shown as 1 Unallocated Disk. When I then added the 3 brand-new HDDs I got the sound of initialising drives for at least 10 minutes, before ending up with the 3 new HDDs listed as Unallocated Disks and the original having disappeared from Storage > Disks again.
I bought the 4 ‘problem child’ HDDs, new, in 2018 and ran them until around July 2025 in a Netgear ReadyNAS RN104. (They were not cheap drives. They were brand new and warrantied by Seagate etc.) As the Netgear ReadyNAS was no longer receiving service or security updates and had become an obsolete model I decided to build a TrueNAS using components from an HTPC.
Motherboard is an Asus Z97-A, CPU an Intel Core i5-4670K running at 3.4 GHz, 16GB of Corsair RAM, and a SAS HBA card compatible with the LSI 9305-16i in IT mode (with an extra HBA cooling fan); the power supply is a Corsair 750 W modular unit. TrueNAS is installed on a Samsung 500GB SSD connected directly to the motherboard, and there’s a second Samsung 500GB SSD for apps, on which only Plex is installed just now.
After building the TrueNAS, the Pool created using the 4 original HDDs worked perfectly for 5-6 months. Then I started to get error messages from the TrueNAS, by email, telling me the Pool was ‘degraded’ because one of the disks had been removed by the administrator - even though I’d done nothing of the sort. But there were sufficient disks remaining to keep the Pool working. No matter what I tried I couldn’t reverse the situation, and I noted that it was a different HDD each time that had allegedly been removed by the administrator. I tried everything I could find online to recover/restore things but nothing worked, usually ending with a message telling me there was an I/O error. Finally the data just disappeared. Fortunately I’d backed it all up onto a JBOD, so I can repopulate my Pool if I can get it working again.
Eventually I concluded that all I could do was fully format all 4 of the disks from the troublesome Pool and then try to rebuild it. But I’ve never been able to get the disks to work in the TrueNAS, even when replacing 3 of the 4 with brand new ones. (They’re in short supply and I could only get 3 at the time.) As soon as one of the originals is fitted the three new ones are taken down, then the original disappears from Storage>Disks and then the 3 new ones return as Unallocated Disks.
I’ve tried Wipe in TrueNAS (including writing zeroes, which took days), Windows DiskPart > Clean, SeaTools Erase, a full Sanitize with zeroes, etc. Absolutely nothing seems to clean any of the 4 original drives so that they can work alongside the 3 new drives. In SeaTools all 4 of the original drives pass both the short and long tests.
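For what it's worth, a few commands I've seen suggested for stubborn leftover disk metadata, which I could try from the TrueNAS shell (again with /dev/sdX standing in for whichever drive it is at the time), are:

```shell
DISK=/dev/sdX   # placeholder - substitute the real device

# Read-only: show any ZFS labels still present on the disk
sudo zdb -l "$DISK"

# Destructive: clear leftover ZFS label metadata
sudo zpool labelclear -f "$DISK"

# Destructive: wipe all known filesystem/RAID/partition-table signatures
sudo wipefs -a "$DISK"

# Destructive: zap both the GPT and MBR partition structures
sudo sgdisk --zap-all "$DISK"
```

If `zdb -l` still reports labels after all the wiping I've done, that would at least confirm where the stale metadata is hiding.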
I have no exported Pools to Import and I remember that I ‘Destroyed’ the old Pool (which was pool_1) in TrueNAS.
I’m coming to the conclusion that it may be TrueNAS itself that’s the problem. It may have logged the serial numbers of the 4 original HDDs somewhere, knowing they came from a Pool that became corrupted and failed. And it may, in some way, be preventing me from re-using any of the 4 original HDDs, rather than the HDDs themselves having some sort of irremovable problem.
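To test that theory, I suppose I could check what the system has actually recorded about these drives, roughly along these lines (these are commands I believe exist on TrueNAS SCALE, though I may be wrong about the exact output):

```shell
# List the drives the kernel currently sees, with serial numbers,
# to confirm whether the 'problem child' disk is detected at all
lsblk -o NAME,SIZE,SERIAL,MODEL

# Ask ZFS whether it still thinks there's an importable pool on any disk
sudo zpool import

# Query the TrueNAS middleware's own record of known disks
sudo midclt call disk.query
```

If the middleware's disk list still mentions the old pool, that would support the idea that TrueNAS is remembering these drives.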