Detected Hardware Unit Hang Crashing TrueNAS

Ever since updating to Fangtooth and setting up my Ethernet bridge, I’ve had a weird error a couple of times a week that leaves my server in an unusable state:

“eno1: Detected Hardware Unit Hang”

I’ve tried using this, but it doesn’t seem to fix the problem, only delay the time it takes for my server to crash. I configured it as an init script.
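For anyone landing here later: the workaround usually passed around for e1000e “Detected Hardware Unit Hang” messages is disabling hardware offloads with ethtool. A minimal sketch, assuming the interface name eno1 from the log line above and the commonly suggested set of offloads:

```shell
#!/bin/sh
# Hypothetical post-init script for the e1000e "Hardware Unit Hang" workaround.
# "eno1" is taken from the log message above; adjust to your interface name.
IFACE=eno1

# TCP segmentation offload (tso) is the feature most often implicated in
# these hang reports; gso/gro are commonly switched off alongside it.
ethtool -K "$IFACE" tso off gso off gro off
```

On TrueNAS SCALE this would go under System → Advanced → Init/Shutdown Scripts as a Post Init command. Note the report above: an offload workaround may only delay the crash rather than prevent it.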

This topic from the Proxmox forum is pretty much the only other location I’ve found where people are actually discussing the issue.

Has anyone else experienced this?


What hardware is this running on, and is it running bare metal or virtualised?

Bare metal, and it’s running on a GA-X99-UD4.
I didn’t have this issue before Fangtooth and I’ve been forgetting to post about it as it usually happens when I’m out of the house and have to remotely restart using my KVM.

I wasn’t able to find a specification for your mobo’s NIC. Could it be an Intel I219? You can check with lspci | grep -E -i --color 'network|ethernet'.

UPDATE: I didn’t pay enough attention – seems like you have e1000.
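For anyone else checking their own board, you can also confirm which kernel driver is actually bound to the NIC (the interface name eno1 here is an assumption taken from the error message):

```shell
# Show Ethernet devices along with the kernel driver bound to each
lspci -nnk | grep -A 3 -i ethernet

# Or query the interface directly; "driver: e1000e" would confirm
# the driver family discussed in this thread
ethtool -i eno1
```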

I also had this issue in proxmox and it was resolved by pinning to the older kernel version. I don’t know whether it’s doable in truenas.
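For reference, the Proxmox-side kernel pin mentioned above is done with proxmox-boot-tool; the version string below is a placeholder, not the specific kernel from that thread:

```shell
# List the kernels currently installed
proxmox-boot-tool kernel list

# Pin an older kernel so it is selected on every boot
# (6.5.13-6-pve is a placeholder; pick a version from the list above)
proxmox-boot-tool kernel pin 6.5.13-6-pve

# Remove the pin later to return to the newest kernel
proxmox-boot-tool kernel unpin
```

TrueNAS doesn’t expose an equivalent supported mechanism for pinning its kernel, which is presumably why it may not be doable there.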

Dang. Hopefully we get an answer soon!

00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2) I218-V (rev 05)

Does anybody else have any experience with this issue? My server has started crashing at least once a day now, and it’s very frustrating for my family.

I’m sorry to say, but if a hardware failure is causing the system crash, then your option now is to experiment with replacing parts and see what helps.

If it is your Ethernet adapter and it isn’t built in, I’d slap a different one into a PCIe slot and test. If it is built into the motherboard, I’d try disabling it in the BIOS and slapping a different card into a PCIe slot.

I’m not sure what else to recommend other than maybe rolling back to a previous version if it was stable.

Something else to keep in mind is that motherboard Ethernet on non-server boards is notorious for being improperly cooled (i.e., not at all).

Yes, even modern gigabit chipsets, with all their cost reduction, can overheat under sustained workloads. It only got worse in the 2.5GbE generation, where even moderate workloads would overheat and crash NICs.