AHCI mode looks familiar, honestly - and I guess I can wait.
I appreciate the quick response!
I’ll leave it up to you.
I have a good feeling it should work.
ZFS pool metadata shouldn’t spontaneously disappear from a drive/partition.
Would the BIOS mode affect anything else? Or could it be unrelated, since TrueNAS can read and detect the boot-pool?
I was also unable to read the pool data when I took one hard drive out and tried to read it on Ubuntu on my other computer, which I would assume is in a good SATA state.
By “unable to read” you mean that sudo zdb -l returned the same “failed to unpack label” result four times?
BIOS time might impact something as well: if ZFS thinks that the labels are all newer than they should be, it might refuse to open them, similar to how an incorrect system time will break SSL.
Did you set SATA mode to AHCI in the BIOS on the TrueNAS server itself?
@HoneyBadger would you think it’s safe to try this out if it hasn’t been already?
Correct, labels couldn’t be unpacked with the ZDB command on my other PC. I plugged the hard drive into it using a USB dock, with Ubuntu booted off a USB drive.
I have not yet
I need to clarify -
When I said “recently my CMOS battery died”
it happened a few days before this situation.
- System time was reset to some time in 2017
- NAS was still operational, except some reporting tools were not working
- I reset the system time to be correct
- NAS continued to work (at least for the little bit I used it, which was copying some files over the network to my Android tablet)
- Hopped onto my PC and was using a media encoder to process some files; then the NAS PC turned off and restarted when I started encoding
- Finally, the current situation
If there are no other suggestions - I will change the SATA mode later and see if that helps the situation.
Change SATA mode
Start up NAS
Check to see if drives are connected to the pool (roughly the commands sketched below)
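Just as a sketch of what I’m planning to run from the shell after the reboot (assuming the data pool isn’t imported, so zpool import only scans and lists without importing anything):

# Show the block devices and partitions the kernel sees
lsblk
# Show the state of any pools that are currently imported (at least the boot-pool)
zpool status
# Scan all devices for importable pools without actually importing them
zpool import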
Rebooted the NAS with the SATA mode changed to AHCI.
TrueNAS still doesn’t show the drives in the pool.
Here’s what I tested, and it’s showing similar behaviour to before. Let me know how I should test this further, or if we’ve discovered something here.
root@truenas[~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 260M 0 part
├─sda2 8:2 0 103G 0 part
└─sda3 8:3 0 16G 0 part
└─sda3 253:0 0 16G 0 crypt [SWAP]
sdb 8:16 0 10.9T 0 disk
├─sdb1 8:17 0 2G 0 part
└─sdb2 8:18 0 10.9T 0 part
sdc 8:32 0 10.9T 0 disk
├─sdc1 8:33 0 2G 0 part
└─sdc2 8:34 0 10.9T 0 part
root@truenas[~]# sudo zdb -l /dev/sdb2
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# sudo zdb -l /dev/sdc2
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# sudo blkid --probe /dev/sdb2
/dev/sdb2: PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="df6b7731-6a5d-11ef-85b2-d43d7e5548d5" PART_ENTRY_TYPE="516e7cba-6ecf-11d6-8ff8-00022d09712b" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="4194432" PART_ENTRY_SIZE="23433576280" PART_ENTRY_DISK="8:16"
root@truenas[~]# sudo blkid --probe /dev/sdc2
/dev/sdc2: PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="df5a79cd-6a5d-11ef-85b2-d43d7e5548d5" PART_ENTRY_TYPE="516e7cba-6ecf-11d6-8ff8-00022d09712b" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="4194432" PART_ENTRY_SIZE="23433576280" PART_ENTRY_DISK="8:32"
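A couple of other checks I could run if they’d be useful - I haven’t tried these yet, and I’m assuming zdb will take a whole-disk device the same way it takes a partition:

# Look for ZFS labels on the whole disks, in case the pool was built on the disks rather than the partitions
sudo zdb -l /dev/sdb
sudo zdb -l /dev/sdc
# Ask ZFS itself to scan every device node for an importable pool
sudo zpool import -d /dev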
I think I found a TrueNAS “.db” file that’s relatively recent - I just don’t remember if it was from before the crash. Any harm in using this?
edit: it’s from the day before the crash - will a config file help? Or do I need snapshots?
Config files or snapshots can’t help you. If there’s no pool to import, then datasets and snapshots are moot. (They are part of the pool.)
The config file might be able to show you what devices (partitions) the vdev in the pool was constructed with.
EDIT: It probably doesn’t. That information is kept in the pool metadata… which isn’t helpful in your case.
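If you’re curious anyway, the config export is just a SQLite database, so you can peek at it read-only (a sketch - the path is a placeholder, I’m assuming sqlite3 is available, and I don’t remember the exact table names, so you’d have to browse them):

# List the tables in the config database
sqlite3 /path/to/your-config.db ".tables"
# Then dump whichever table looks pool/storage related
sqlite3 /path/to/your-config.db "SELECT * FROM <table_name>;"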
Darn, I was really hoping that I got lucky with some sort of information that I saved from before this incident.
ZFS pools are not (ideally) tied to any server.
By all means, you should be able to take all the devices in a pool and import it into any other server with a recent enough version of OpenZFS.
That’s why this is so perplexing.
There’s no indication of your pool’s metadata on the partitions that supposedly comprise the one and only mirror vdev.
The second partition of each disk is a “freebsd_zfs” partition type, but that’s it. It’s almost like having an “ext2” partition type in your partition table, but without having any ext filesystem formatted on it.
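One way to see that distinction for yourself (a sketch - assuming sgdisk and wipefs are on the system; wipefs run with no options only reads and prints signatures, it doesn’t erase anything):

# Print the partition table, including the type GUIDs blkid reported
sgdisk -p /dev/sdb
# List any filesystem/RAID signatures actually present on the partition (read-only when run like this)
wipefs /dev/sdb2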
I continued to do some searching online - should I start looking at something called “Photorec” to recover data?
Nothing makes sense about this.
Even if your server’s ZFS version was “too old”, it would still reveal that there is an importable pool available. The zdb tool would also be able to find the label on the drive’s second partition.
If this is something to do with SATA mode set to IDE/AHCI, I don’t see why it would have no issues reading the partition table on the drive, but oddly not be able to read any deeper than that. The sectors are just 1s and 0s on the drive, even for the partition table. There’s nothing magical about them. If you can read the partition info, then technically it means you’re able to access the drive as a block device.
The same goes for SED. If your drive was “locked”, you shouldn’t even be able to read the partition info.
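If you want to convince yourself of that, you could dump the start of the partition, where ZFS label 0 would normally sit, and see whether it reads at all and whether it’s just zeroes (a read-only sketch):

# Read the first 256 KiB of the partition (the region ZFS label 0 occupies) and show it as hex;
# hexdump collapses repeated lines, so an all-zero region shows up as a single line with a "*"
dd if=/dev/sdb2 bs=256K count=1 2>/dev/null | hexdump -C | head -n 20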
I’m very confused.
PhotoRec (part of the TestDisk suite) and Klennet are last-ditch efforts to retrieve data from the drives.
I don’t think you’ll be pleased with either’s results, unless you REALLY need to scrape what you can.
You’re very likely only able to recover files that are saved in sequential unfragmented blocks. Of those, you’re very likely to lose the file’s metadata, such as filenames, folder paths, and modification times.
You might end up with a ton of files that look like this, all without folder information or modification times:
00001.JPG
00002.JPG
00003.JPG
00004.JPG
00001.MP4
00005.JPG
00002.MP4
Since you used a “mirror” vdev, Klennet might be able to recover the necessary ZFS metadata, and thus, filesystem and folder structure? I don’t know though, sorry.
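If you do end up going down the PhotoRec road, a minimal run against one of the mirror members might look something like the below - the output directory is just an example, and it has to live on a different disk than the one you’re recovering from:

# Interactive recovery from one mirror member, with logging enabled,
# writing recovered files under /mnt/recovery (must not be on the source disk)
photorec /log /d /mnt/recovery/recup_dir /dev/sdb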
I’m remembering one more thing - I think I had to edit my boot device. Would the boot device be a probable cause of this issue?
On its own? No.
Unless you mean that you tried to incorporate a partition on your boot drive to be used for a “SLOG” or “L2ARC” or “Special VDEV” for your data pool…
I didn’t do anything fancy - I do recall at some point that it was no longer booting off the SSD or something; I had to put UEFI above the SSD in the boot order.