Data recovery of pool and questions about misconception TrueNAS SCALE

Greetings community,

Disclaimer: I’m new to this community, I’m new to TrueNAS, new to ZFS. Basically I’m just out of my depth (or exploring my horizons?). I tried to be as thorough as possible in providing all data I believed necessary. Please forgive my ignorance if I have missed something critical or ask stupid questions.
I’ll also add that I did try to look for answers to the below on both the old and new TrueNAS forums, Reddit and Stack Overflow - the complexity of ZFS and possibly my (poorly architected?) config may be to blame.

Background about me:
IT professional, self-taught, mostly worked within the Microsoft suite and software development for the past 10 years. Side hobby in hardware, but no experience in an enterprise setting. Built my first server in January 2024 and, after a lot of research, ended up with TrueNAS SCALE and ZFS.

Background about the issue:
Used the server Friday evening for media playback, had a momentary “outage” but it recovered in a minute or so and I didn’t get up to investigate.
A couple of hours later I went to check, and everything looked fine.

Suddenly I’m disconnected from all hosted webapps in TrueNAS; through the Proxmox console I see repeated restarts for 3-4 minutes. Once it settled down and booted, my storage pool was degraded. It said I had 2 unassigned disks. In my stupidity I tried adding these back to the pool, which I believe may have wiped them.
I followed up by trying to export the pool, which removed it from the interface, and now Import Pool says “No options”.
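For reference, this is roughly what I have been running from the shell to see what ZFS still recognizes (just a sketch, the device path is generic and not my actual setup):

  # list pools that ZFS believes are importable but not currently imported
  zpool import
  # search for pool labels using stable device names instead of /dev/sdX
  zpool import -d /dev/disk/by-id
  # show the state of any pool that is still imported
  zpool status -v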

For the next hours I scour the internet for a solution. I have no idea why this happened.

Hardware:
Mostly consumer hardware.
ASUS Z270-A motherboard, Intel i7-7700 underclocked (6 cores assigned to TrueNAS)
32GB DDR4-2400 (24GB allocated to TrueNAS)
1TB NVMe (100GB allocated to TrueNAS)
LSI 9300-16i IT mode HBA
16x 1TB ADATA TLC SATA 2.5" SSDs

About my config:
TrueNAS SCALE virtualized on Proxmox
1 Pool, 4 vdevs of 4 disks in z1
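In zpool terms the layout is equivalent to something like this (a sketch with a placeholder pool name and disk names, just to illustrate the geometry):

  # one pool made of four RAIDZ1 vdevs, four disks each, striped together
  zpool create tank \
      raidz1 disk1 disk2 disk3 disk4 \
      raidz1 disk5 disk6 disk7 disk8 \
      raidz1 disk9 disk10 disk11 disk12 \
      raidz1 disk13 disk14 disk15 disk16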

About my data:
No backups
Nothing critical, mostly replaceable media.
I can afford to lose everything but it’s just a lot of work to restore (like a lot of work)

About my assumptions:
I realize that ZFS, even in any RAIDZ stripe, is redundancy - NOT backup. However, I believed that if two disks in a Z1 vdev were lost, this would only affect that vdev, and that the pool would continue to be usable with only the data on that vdev being lost.

Current situation:

[screenshot: zpool import output]

Additional information:
The restart incident happened around the time my 2 rsync tasks run. They had been running for a few weeks without incident, but I can’t help suspecting the timing. It’s possible that, combined with a noob underclock of the CPU, this triggered the issue.
I was unsure about leaving “Scrub tasks” enabled since this is flash storage, but opted to just leave it ON as per the default.

Desired outcome:

Main Goal:

  • A final conclusion as to whether recovery of data on the remaining vdevs is realistic (and any assistance you can provide if it is).

Secondary:

  1. Any validation or correction to my assumptions (see above)
  2. Some guidance on any changes I should make, if any, to my current 16-disk pool going forward
  3. Some guidance on leveraging my 16 SATA SSDs for ZIL/SLOG/L2ARC. I’m currently building a DAS/JBOD for the server (24-bay) where I plan on buying 4x 12TB WD Red Plus drives to start out. I want to potentially keep a flash pool of 4-10TB for VMs and other projects and leverage the rest for speeding up the attached JBOD HDDs. Should I consider hot spares?
  4. To learn something about data management and restoration on Linux/ZFS

There is a limit on screenshots, so I’m adding a few more here

That’s not how ZFS works: with a few exceptions, losing a VDEV means losing the entire pool.

Data recovery on ZFS is an order of magnitude more difficult compared to other file systems and, as such, very expensive. As far as I am aware, you can only look at Klennet ZFS Recovery.

Your best bet here is to bring the failed VDEV back online, if possible: this means resurrecting one of the two drives from the dead. If you have wiped them, however, there is little we can do. Can you please tell us step by step what you did?
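To check whether the old ZFS labels still exist on those two drives, something like this from the shell should tell you (a sketch; the device path is an example, point zdb at the data partition of each drive, which you can find with lsblk):

  # list block devices and their partitions
  lsblk
  # dump any surviving ZFS labels from a drive's data partition
  zdb -l /dev/disk/by-id/ata-EXAMPLE-part2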

I suggest reading the following resources to increase your understanding of ZFS:

Now, about the specific issue… I always recommend running https://memtest.org/; a memory error resulting in a kernel panic or just a straight crash is a suspect.

Generally, we advise against running overclocks or underclocks for stability reasons, but it’s not a ZFS-specific recommendation.

About recommendations for the current pool, without knowing your use case there is not much we can say about the pool’s geometry… as far as we understand, your data loss is a result of your own actions; generally, unlike with HDDs, a 4-wide RAIDZ1 SSD VDEV is considered safe. You could post sas3flash -list to check if something is wrong with the HBA, but I think we won’t find anything there.
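For reference, that is just the following from the TrueNAS shell; the exact fields vary a bit between sas3flash versions, but it should at least report the controller, firmware and BIOS versions:

  # list the HBAs sas3flash can see, with their firmware and BIOS versions
  sas3flash -list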

If you want to use SSDs to speed up your HDD pool, you want to use a sVDEV[1], specifically a metadata VDEV: if you search either forum you will find plenty of material (alongside fusion pools) but to make a quick recap:

  • if you lose the metadata VDEV you lose the entire pool, so you want the same level of redundancy
  • it can drastically improve your HDD pool performance during traverse operations, especially useful with macOS
  • it can be used to store data whose block size is smaller than an amount you set, allowing your drives to avoid hitting the IOPS ceiling (see the command sketch after this list)
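As a rough sketch of how a special (metadata) VDEV is attached and tuned, with placeholder pool, dataset and disk names (on TrueNAS you would normally do this through the UI rather than the shell):

  # add a mirrored special (metadata) vdev to an existing pool
  zpool add tank special mirror ssd1 ssd2
  # optionally also store small blocks (here anything up to 64K) on the special vdev
  zfs set special_small_blocks=64K tank/mydataset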

Generally, you want to state your use case in order to receive better help. For VMs and virtualization in general (both TN’s own VMs and others) there are a few resources linked above that you should have read before building your current system: one in particular is critical knowledge if you virtualize TN itself, as you are doing.

SLOG is needed with sync writes if you have an HDD pool.
L2ARC is a read cache that requires TN to have at least 64GB of RAM at its disposal.
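For completeness, attaching those looks roughly like this (placeholder names again, and the UI is the normal way on TrueNAS):

  # mirrored SLOG device for sync writes
  zpool add tank log mirror ssd1 ssd2
  # L2ARC read cache; no redundancy needed since it only holds copies of data
  zpool add tank cache ssd3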

Oh, and if you aren’t already using one, consider a UPS for your system.


  1. special VDEV ↩︎


First off, thank you so much for taking the time Davvo

Data recovery on ZFS is an order of magnitude more difficult compared to other file systems and, as such, very expensive. As far as I am aware, you can only look at Klennet ZFS Recovery.

I actually found and ran this overnight. It returned 0 files for one of the two UNAVAILABLE disks. I believe this confirms that attempting to add them back to the pool wiped them.

It took a long time to run. I’ll try the second disk overnight.

can you please tell us step by step what you did?

I cannot, unfortunately. Late night, panic. I don’t recall any specifics, but I’ll try to give a short summary from memory.

  1. Pool down to 14 disks, 2 unassigned
  2. Selected add to pool
  3. Selected my pool, added the disk, pressed Create
  4. It returned some sort of error
  5. Pressed export pool
  6. Pool gone

I suggest you reading the following resources in order to increase your understanding of ZFS:

Appreciated, I’ll try and go through these

I always reccomend running https://memtest.org/; a memory error resulting in kernel panic or just a straight crash is a suspect.

I’ll run this ASAP

About reccomendations for the current pool, without knowing your use case there is nothing much we can say about the pool’s geometry

Media consumption, homelab, data storage

generally, unlike with HDDs a 4-wide RAIDZ1 SSD VDEV is considered safe

I’ll take that as a “don’t do that for HDDs”. Should I consider narrower vdevs or a higher RAIDZ level?

You could post sas3flash -list to check if something is wrong with the HBA, but I think we won’t find anything there.

[screenshot: sas3flash -list output]

This is crucial. The drives were unassigned instead of faulty/offline, correct?

More parity (at least RAIDZ2).

Correct, unassigned as per the Storage or Datasets dashboard. I can’t recall which at this time.

About the HBA, it should be flashed to 16.00.12.00 or newer. LSI 9300-xx Firmware Update | TrueNAS Community
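The update itself boils down to something like this from an EFI shell or Linux; the exact .bin and .rom file names depend on the firmware package you download, so treat these as placeholders and follow the linked guide for the precise steps:

  # confirm the current firmware version first
  sas3flash -list
  # flash the IT-mode firmware and, optionally, the boot ROM
  sas3flash -o -f SAS9300_16i_IT.bin -b mptsas3.rom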

Wouldn’t bet on my data that’s the issue though.

More parity (at least RAIDZ2).

So 4-wide Z2 vdevs for HDDs - understood. Buying and adding in groups of 4 is more economically viable for me. If it costs me capacity, then so be it.

About the HBA, it should be flashed to 16.00.12.00 or newer. LSI 9300-xx Firmware Update | TrueNAS Community

Amazing! Added to my todo. Tyvm

If you plan to have more than a single VDEV I would suggest saving and building the VDEV with at least 6-8[1] drives in RAIDZ2 (or RAIDZ3, depending on your needs). Having two or three 4-wide RAIDZ2 VDEVs is expensive and inefficient.

Take your time designing your layout, and ask if you have questions.


Does zpool import show you anything?


  1. the recommended maximum width of a single VDEV is 12 drives. ↩︎

Well…

If you’ve lost two disks from a RAIDZ1 you’ve lost your pool, and it’s time to restore from backup.

(Which I know you don’t have)

Did you try turning everything off and back on again before panicking and re-adding drives?

:disappointed_relieved:


Did you try turning everything off and back on again before panicking and re-adding drives?

Nope! Appreciate you stopping by the memorial service for my dataset though! :pray:


Generally, we advise against running overlocks or underclocks for stability reasons, but it’s not a ZFS-specific reccomendation.

Changed back to auto. RIP electrical bill lol. Anything for the data <3

I always reccomend running https://memtest.org/; a memory error resulting in kernel panic or just a straight crash is a suspect.

It passed! Phew.

Data recovery on ZFS is an order of magnitude more difficult compared to other file systems and, as such, very expensive. As far as I am aware, you can only look at Klennet ZFS Recovery.

Disk 2

About the HBA, it should be flashed to 16.00.12.00 or newer. [LSI 9300-xx Firmware Update | TrueNAS Community]

Thanks! :pray:

If you plan to have more than a single VDEV I would suggest saving and building the VDEV with at least 6-8 drives in RAIDZ2 (or RAIDZ3, depending on your needs). Having two or three 4-wide RAIDZ2 VDEVs is expensive and inefficient.

Take your time designing your layout, and ask if you have questions.

  1. the recommended maximum width of a single VDEV is 12 drives. ↩︎

I think I’m sold on 8-wide Z2; breaking at 6 on a 4x6 shelf, my OCD would demand I stack disks vertically instead of horizontally :rofl:
Also more capacity, of course.


Does zpool import show you anything?


I posted a screenshot of the zpool import output in my initial post.
Here is a new screenshot:

OOOH, a fellow OCD member. Welcome!

Sorry, I was reading that as a zpool status.

I generally use 8-wide (10-wide max and 6-wide minimum) Z2 vdevs and use however many I need for the number of drives in the system. This is for general-purpose storage and file serving. If I have 16 drives, then 2 x 8-wide Z2 vdevs. This generally works out as a good average for space, cost and reliability, and has worked well over the years.
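For 16 drives that works out to something like this (placeholder pool and disk names; the TrueNAS pool wizard builds the same layout for you):

  # two 8-wide RAIDZ2 vdevs striped into a single pool
  zpool create tank \
      raidz2 d1 d2 d3 d4 d5 d6 d7 d8 \
      raidz2 d9 d10 d11 d12 d13 d14 d15 d16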
