Complete pool failure on SCALE reboot?

New to TrueNAS; I've been using a FreeNAS 11.3 server for years…
Testing out SCALE: ElectricEel-24.10.0. It's a test server, so nothing lost other than my time. I had finished setting up the system with two pools: one using mirrored 1 TB SSDs, the other using six 10 TB spinning disks in RAIDZ3. All was working well for two days.
I set up the mirrored pool for system data, app settings, logs, etc., and the RAIDZ3 pool had 8 datasets for data storage. I use only NFS, as I'm all Linux here; no need for SMB shares.
I installed two apps, Audiobookshelf and Plex, and again all went fine (not pleased that I can't set separate IPs for the Docker apps, but they worked fine).
Lastly, I installed the Nginx proxy app, which installed and configured fine but never finished the startup/deploy stage. After fiddling with that, I shut it down and decided to see if a system reboot would help anything.
Mind you, I’ve restarted the system several times since installing it with no problems.
Now I have no pools on either disk set. Well, I have the pools, but I can't do anything with them. Both pools say "Pool contains OFFLINE Data VDEVs".
And that's pretty much where I'm at. I can't find any way to alter the 'offline' state other than deleting the pools. It's just test data, but I'd rather not lose the app configurations in the smaller mirror pool. Any thoughts?
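(For reference, the pools' state can be inspected from a shell without changing anything — these are read-only diagnostics:)

# Show the state of every imported pool and its vdevs
sudo zpool status -v

# List pools that exist on disk but are not currently imported
sudo zpool import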


Update: after a couple of hours trying to mount the ZFS pools by hand, I determined that the / directory was mounted read-only. I remounted it (working in the CLI) and was able to get both pools to mount correctly. All data seems intact. However, after a second reboot, it came back up read-only again. I can't find any reason for the bad attitude it's giving me; the best course seems to be to run through the install again and see if I can duplicate the experience…
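(A rough sketch of that sequence — the exact commands are reconstructed, not a transcript, and working on the CLI like this bypasses the TrueNAS middleware, so treat it as a test-system experiment only:)

# Check whether root is mounted read-only; look for "ro" in the options
findmnt -no OPTIONS /

# Remount root read-write (in my case this did not survive a reboot)
sudo mount -o remount,rw /

# With / writable again, mount the ZFS datasets that failed earlier
sudo zfs mount -a

# Confirm both pools and their vdevs show ONLINE
sudo zpool status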


Hi, I've got the same problem. How exactly did you fix it?

I used sudo zpool import to see if there were any pools I could import. I got no response to that. Then I tried sudo zpool import -d /dev/disk/by-id, which did show me the pools I was after, but I could not import them.
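(Spelled out with a placeholder pool name, since an actual import needs the pool name or numeric ID from the listing; in my case the import still failed until the read-only root issue below was sorted:)

# Scan the default device directory for importable pools (showed nothing here)
sudo zpool import

# Scan by stable device IDs instead; this is what listed the pools
sudo zpool import -d /dev/disk/by-id

# Then import a specific pool by the name or ID shown in that listing
sudo zpool import -d /dev/disk/by-id tank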

Check and see if your root mount is read-only: mount | grep ' / '
If so, you could try: sudo mount -o remount,rw /

PLEASE: I have a test system I'm fumbling around with here. It's not going to ruin my day if I can't bring back those pools, and I would not want to advise anyone to tool around on the CLI of a live system like this. That said, I was able to recover the pools, but every time I reboot, the system reverts to read-only on the root mount and breaks the pools again. My final solution was to reinstall and see if I can learn what I did that caused the issue.

I would add that if you have data you care about, don't overwrite it while trying to recover. ZFS will not barf on your data; you could always drop the disks into another system and recover the files!
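(If you do move the disks to another box, a cautious sketch for getting at the files without writing anything to the pool — the pool name and mount point here are placeholders:)

# Import the pool read-only under an alternate root so nothing gets written
sudo zpool import -o readonly=on -R /mnt/recovery tank

# Copy off whatever you need, then cleanly detach the pool
sudo zpool export tank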