Pool exported after restart, I/O error when importing

Hello community. I joined the TrueNAS community about a year ago. My server configuration is:
TrueNAS SCALE: Dragonfish-24.04.2
Processor: Xeon E5-2690 v4
Memory: 128GB ECC
Motherboard: X99-QD4
Pool 1 (mirror): 2x 1TB (KingSpec NVMe)
Pool 2 (mirror): 2x 3TB (WD Red)
Pool 3 (stripe): 2x 7TB (IronWolf), 1x 6TB (USB Seagate EDD), 1x 3TB (WD Red), 1x 1TB (Samsung), 1x 1TB (USB WD)

I am aware that a stripe is not a recommended configuration, but I don't store anything of great importance in the stripe pool, only movies and series, which I can download again. Unfortunately I added a 1TB disk to it, which I believe caused this problem: after I restarted the server, the stripe pool appears as if some of its disks had been exported. I tried some procedures from the forum, but I only receive this message:

[screenshot of the error message]

So I exported the pool permanently through the TrueNAS UI, then tried to run the same procedures, and I get the same error message.

I ran smartctl on all the disks in the pool; only the 1TB disk showed a pending sector.
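For reference, the check on each disk looked roughly like this (sdX is a placeholder for the real device name):

# print full SMART data and pull out the sector-health attributes
smartctl -a /dev/sdX | grep -E "Current_Pending_Sector|Offline_Uncorrectable"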

Can anyone tell me if I have any chance of recovery? I have no problem losing the data, I just didn’t want to download everything again.

A sample of the commands executed and their results:

zpool import

[screenshot of the output]

zpool import -f -F -m

[screenshot of the output]

zpool import -fFX

This takes a very long time to execute; I left it running for 3 days and it didn't finish.

zpool status -v

I left it running for about 6 hours and it didn't finish.

zpool import -f -F -n

It returns no output.

zpool import -f -F -R /mnt

[screenshot of the output]
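For anyone unfamiliar with these flags, here is roughly what each attempt does (per the zpool-import man page):

zpool import                # scan for and list pools available for import
zpool import -f -F -m       # -f force import, -F rewind to an earlier transaction group, -m ignore missing log devices
zpool import -fFX           # -X extreme rewind: with -F, tries much older transaction groups, which is why it can run for days
zpool import -f -F -n       # -n dry run: only reports whether the -F rewind would succeed; imports nothing
zpool import -f -F -R /mnt  # -R /mnt sets the altroot so any imported datasets mount under /mnt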

I also tried to perform the import through the TrueNAS UI, with the same result.

I won't comment on your errors other than to say: rebuild it after checking all of the drives. But please note that not only is a stripe with no parity not recommended, neither are USB drives. Many people say USB will work, right up until it doesn't, and that describes this issue. USB drives can cause all sorts of problems and seem to run fine for a while, and then they don't.

  1. The problem with USB connections is that they can often drop for no reason and ZFS then considers them offline. (My personal experience is that USB3 seems considerably less reliable than USB2, and possibly some USB drivers are less reliable than others.)

  2. The problem with stripes is that there is zero redundancy: any single drive going offline not only loses you access to the data on that drive, it also takes the whole pool offline.

So in this case you have a large 25TB pool (2x 7TB + 6TB + 3TB + 1TB + 1TB) with zero redundancy, including 2 USB drives which are quite likely to go offline at any time.

IMO calling this “not a recommended configuration” is a massive understatement.

My advice…

Ditch the existing striped pool, consider shucking the USB drives to connect them via SATA (if you have enough SATA ports), then group same- or similar-sized drives into RAIDZ1 or mirror vDevs: create a new pool for the RAIDZ1, or add the mirrors to the existing mirror pool. Either way, you end up with one of these layouts (a rough command sketch follows the options below):

  • Shucked: existing 2x 3TB mirror pool with 3TB usable, plus a new pool with a RAIDZ1 vDev of the 7TB/7TB/6TB drives (capacity limited by the smallest disk: 2x 6TB = 12TB usable) and a RAIDZ1 vDev of the 3TB/1TB/1TB drives (2x 1TB = 2TB usable) - total 17TB usable.

  • Not shucked: existing mirror pool plus extra mirror vDevs of 2x 7TB (7TB usable) and 3TB/1TB (1TB usable), giving a total of 11TB usable space.
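As a rough sketch of the shucked option (pool and device names are placeholders; on TrueNAS you would normally build this through the UI so the middleware knows about the pool):

# one pool, two RAIDZ1 vDevs; substitute your real devices, ideally /dev/disk/by-id/ paths
zpool create tank raidz1 sda sdb sdc   # 7TB/7TB/6TB drives -> ~12TB usable
zpool add tank raidz1 sdd sde sdf      # 3TB/1TB/1TB drives -> ~2TB usable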


P.S. Check whether any of these drives are SMR drives, and don't use SMR drives in any redundant configuration.

If you want to use SMR drives in a non-redundant pool, then create a separate pool for each drive in order to limit the loss if the drive dies.
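For example, something like this (pool and device names are placeholders):

# one single-disk pool per SMR drive, so a dead drive only loses its own data
zpool create media1 sdg
zpool create media2 sdh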


Thanks for the answer. After this problem I realize I could have done it that way, but when I set up the server I bought the disks little by little, so a RAIDZ1 wasn't possible at the time. If I can't solve this problem, that's what I'm going to do: a 3x 3TB RAIDZ1 and a 2x 7TB mirror, until I acquire a new 7TB disk to turn that into a RAIDZ1 as well.

Thanks for the reply. I'm aware that USB drives are not recommended, but at the moment they're what I have to make the most of my space. In the future I intend to remove them from my system, but for now I'm using them where the data is not that important to me.

I use a PCI Express to SATA card in my server to increase its capacity. Is there any problem with using this type of card with TrueNAS?

In the pool I'm having problems with, it used to happen a lot that random disks would show as degraded.

It depends on the card. If you have an HBA flashed to IT mode? It is perfect! If you have a random port multiplier, then you're very likely to eventually have a terrible experience.

Mind giving details on the card in question?

This is my PCIe card:

https://pt.aliexpress.com/item/1005003596646635.html?spm=a2g0o.order_list.order_list_main.395.21efcaa4xoyive&gatewayAdapt=glo2bra


That is a problem - you can read more here:

That link gives a thorough explanation of why you'd want to avoid something like the PCIe card you linked, and it gives examples of alternatives at nearly the same price that would work without risking your data.


Thanks, the reading was very helpful. I will replace my card as soon as possible.


Try to either grab one that is already flashed to IT mode, or remember to flash it yourself once you get a proper HBA. If you need help checking or flashing, feel free to reach out on the forums :slight_smile:
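For a typical LSI SAS2 HBA, checking the current mode looks roughly like this (assuming the sas2flash utility; SAS3 cards use sas3flash instead):

sas2flash -list    # the Firmware Product ID line ends in (IT) or (IR)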


I purchased this one; the vertical orientation is better for my server.

https://pt.aliexpress.com/item/1005005481239832.html?spm=a2g0o.order_list.order_list_main.5.1a01caa42N0X49&gatewayAdapt=glo2bra

Is data recovery possible?

I suppose anything is possible, but didn't you say "but for now I'm using them where the data is not that important to me" and "I have no problem losing the data"? If it is not that important, why are you spending so much time on this? I suspect the problem could recur until you fix the underlying cause, though.

There are data recovery services out there, but they are typically very expensive.


Yikes… 20 SATA ports behind a single PCIe lane.
