Dragonfish 24.04.1.1 in a VM, hangs on boot, can stall on shutdown

I’ve been testing this for a while today…

I have a test TrueNAS VM hosted on TrueNAS Cobia 23.10.2.

The VM has two zvols attached as disks, each using VirtIO.

The first is the boot device (32 GB); the other is 128 GB and holds the pool (no redundancy).

The boot device has Dragonfish 24.04.0 installed, as well as Dragonfish 24.04.1.1 (applied as an online upgrade from 24.04.0).

There is a single VirtIO NIC connected to a bridge.
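For anyone who wants to try reproducing this outside the TrueNAS UI, a rough QEMU/KVM stand-in for the test VM would look something like the sketch below (hedged: the real VM is defined through the Cobia UI, and the image and bridge names here are placeholders):

```
# Rough stand-in for the test VM, assuming plain QEMU/KVM rather than the Cobia UI.
# Disk 1: 32 GB boot device (VirtIO). Disk 2: 128 GB single-disk pool (VirtIO).
# Single VirtIO NIC attached to a bridge. Use -m 8192 for the 24.04.0 case and
# -m 32768 to mimic the 32 GB case under 24.04.1.1.
qemu-system-x86_64 \
  -enable-kvm -cpu host -smp 2 -m 8192 \
  -drive file=boot-32g.qcow2,if=virtio \
  -drive file=pool-128g.qcow2,if=virtio \
  -nic bridge,br=br0,model=virtio-net-pci
```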

If I boot into 24.04.0, the VM boots fine with 8 GB of RAM assigned.

If I boot into 24.04.1.1, I need 32 GB of RAM assigned to the VM, and even then it only sometimes boots successfully.

Is anyone else seeing these results?

EDIT: I now have a reproduction video, and a bug report for this… can anyone else reproduce?

Dragonfish 24.04.1.1 in a VM, hangs on boot, can stall on shutdown - #38 by Stux


I have a Proxmox VM with Dragonfish-24.04.1, and it boots with 16 GB of RAM assigned.
The boot disk is 64 GB VirtIO.
The SATA controller and network device are passed through to the VM as PCI devices.
I have not yet upgraded to 24.04.1.1.
Best Regards,
Antonio


Define: “will not boot”.

Errors? Messages? Screenshots?
Log files (boot into .0 and check the logs written during the failed .1.1 boot).
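For instance (a rough sketch, assuming journald storage is persistent and the failed boot got far enough to write anything to disk), after booting back into .0:

```
# List the boots journald knows about, then pull logs from the previous
# (failed 24.04.1.1) boot. A hang early in the kernel boot may leave nothing.
journalctl --list-boots
journalctl -b -1 -k        # kernel messages from the previous boot
journalctl -b -1 -p err    # anything at priority "err" or worse from that boot
```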

Very few people run TrueNAS as a VM, but that should not matter:

I have several systems running 24.04.1.1 with 8, 10, or 12 GB of RAM without issues.

Of course, those systems do not run VMs themselves. They may have a small app container or jailmaker instance (very little memory used), but no problems.


I upgraded to Dragonfish 24.04.1.1 from Cobia on Proxmox with 8 GB of RAM assigned, and it boots and runs fine.

Don’t try this at home :stuck_out_tongue:

(I have a TN CORE system used as a NAS, but I feel I will have to move to SCALE sooner or later even if it stays just a NAS, so I have a TN SCALE VM for getting used to it, running on a Proxmox mini PC that hosts all my apps and services.)


My bad.

It hangs mid-boot, i.e. at a “random” point during the kernel boot messages.

I suspect the TrueNAS VM setup is somehow part of the issue, but it must be the host, right?

I have Dragonfish 24.04.1.1 on ESXi with 16 GB of RAM, no issues at all.

“Hangs mid-boot, i.e. at a ‘random’ point during the kernel boot messages” is STILL not good enough!

“My system doesn’t work. It just stops during boot! Help!”

If you don’t want to take 5 minutes to collect/write down the EXACT boot messages/errors, can you expect somebody ELSE to spend 5 minutes on your lazy, incomplete report/issue?

I can’t magic the answer for you right now.

I can take screenshots of the hang. How many would you like? But not right now.

As a test, I left the system hung last night; I plan to see if it stays that way over many hours.

It would have to be, surely.

My Dragonfish system only has 32 GB of RAM…

I’m going to try a “manual install” later.

Hope it works.


It has been hung at this point for the last 9 hours or so.

Rebooting… we’ll see where it lands this time.

EDIT: it booted, but didn’t have the pool available.

EDIT: tried again with 32 GB of RAM… this time it hung again…

Edits because posting a bunch of black screenshots is a sure way to derail this thread.

It booted… but didn’t find the data pool…

Part of what makes testing this take so long (and I’ve tried a lot of things so far) is that it takes a long time to shut down…

I managed to screen cap it this time.

Notice the “FAILED” messages occur after a 90s timeout.
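For what it’s worth, 90 seconds matches systemd’s stock DefaultTimeoutStopSec, so this looks like some unit failing to stop and running into the default stop timeout rather than a timeout TrueNAS itself configures. A quick, hedged way to check on a running system:

```
# Show the manager's default stop timeout (stock systemd ships 1min 30s).
systemctl show --property=DefaultTimeoutStopSec
# During a slow shutdown, the console "A stop job is running for ..." lines
# name the unit(s) that are still holding things up.
```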

Just tested another Dragonfish 24.04.0 VM on a Dragonfish 24.04.1.1 system.

Same timeout at shutdown.

My question was: “What’s running in /usr that’s causing the hang?”
I googled and found this:

HTH


I just tried an online update on that system, which was working fine…

now it’s hanging too.

EDIT: power off, reselect 24.04.0 and it boots right up.

Yeah, I think I can’t really fix that… and I think that’s another can of worms… and I’ve got so many open right now!

Oh sorry, I tried.

But in my defence, from what I gleaned, systemd’s journald is causing the problem on shutdown, and it looks like it can be “solved” (i.e. worked around) by editing /etc/systemd/journald.conf to change the Storage= line to Storage=volatile. But if you can’t edit that file, or I’m just way out of my league (I don’t know why you’d be locked out of editing that file), I apologize.
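In concrete terms, the workaround I gleaned would look roughly like this (a sketch only; the paths are stock systemd, and TrueNAS may well overwrite the file later):

```
# Sketch of the suggested workaround: keep journal logs in RAM only.
# Downside: logs are lost on every reboot. Run as root.
sed -i -E 's/^#?Storage=.*/Storage=volatile/' /etc/systemd/journald.conf
systemctl restart systemd-journald   # apply without a full reboot
```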

However, what I find scary is the conversation in that solution.

Seems like a common bug. Is it still shutting down successfully afterwards? –
user1 Jul 15, 2017 at 15:30

It does appear to shutdown after the error is displayed. –
user2 Jul 15, 2017 at 16:44

When I googled it I found a few Ubuntu and Kali users experiencing the same issue and many people were chiming in that they saw the same thing but it didn’t seem to affect anything. –
user1 Jul 15, 2017 at 21:59

That is to say: I found those people’s acceptance of that error scary.


TrueNAS typically blows away any custom configuration in /etc when the next update comes round, and for certain services, every time they are configured in the GUI.

I just tested the journald config, and it does persist across reboots at least… but it doesn’t resolve the issue…

And even if it did, I think the fix may be worse than the issue, as you would lose the persistence of journald logs.

Agreed: no logs would suck.
Shoot! I thought journald was the issue. …systemd is such a fickle beast! You’d think that with actual professional developers you’d have something consistent, but I’ve seen some of the exchanges between Linus and Poettering, so I guess I’m not that surprised.

Sorry, I couldn’t help. Good luck.


That’s still a good question though :wink: