realAXL
December 22, 2025, 10:58am
1
I constantly see core dump messages in the root shell. This started after I replaced one of the two disks in a mirror storage pool.
Here are the details:
OS Version: TrueNAS-SCALE-24.10.2.4
Product: N100DC-ITX
Model: Intel(R) N100
Memory: 31 GiB
My storage pool had two disks of identical size (476.94 GiB). I replaced the second disk (sdb) in the pool with a larger one (1.82 TiB):
sda / ONLINE / 476.94 GiB / No errors
sdb / ONLINE / 1.82 TiB / No errors (replaced)
No errors were reported during the resilvering of the pool.
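For reference, this is roughly how the resilver result and the pool capacity can be checked from the shell; the pool name tank is only a placeholder, and the extra space of the larger disk stays unused anyway while the other mirror member is still 476.94 GiB:
# Check resilver result and overall pool health (pool name is a placeholder):
zpool status -v tank
# The pool only grows once both mirror members are larger and autoexpand
# is enabled (or the new disk is expanded with zpool online -e):
zpool get size,autoexpand tank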
Since the reboot, I keep getting the following messages in the root shell (ssh, zsh):
2025 Dec 22 11:48:50 truenas Process 88242 (node) of user 0 dumped core.
Module /usr/local/bin/node without build-id.
Module /usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node without build-id.
Module /usr/src/app/node_modules/.pnpm/bcrypt@5.1.0/node_modules/bcrypt/lib/binding/napi-v3/bcrypt_lib.node without build-id.
Module /usr/src/app/node_modules/.pnpm/argon2@0.30.3/node_modules/argon2/lib/binding/napi-v3/argon2.node without build-id.
Stack trace of thread 64:
#0 0x0000563f67833428 n/a (/usr/local/bin/node + 0x1863428)
#1 0x0000563f6785f968 n/a (/usr/local/bin/node + 0x188f968)
#2 0x0000563f677b43cd n/a (/usr/local/bin/node + 0x17e43cd)
#3 0x0000563f677b4d53 n/a (/usr/local/bin/node + 0x17e4d53)
#4 0x0000563f677b5036 n/a (/usr/local/bin/node + 0x17e5036)
#5 0x0000563f678cc7b1 n/a (/usr/local/bin/node + 0x18fc7b1)
#6 0x0000563f6770a0e9 n/a (/usr/local/bin/node + 0x173a0e9)
#7 0x0000563f6770ac01 n/a (/usr/local/bin/node + 0x173ac01)
#8 0x0000563f6770af2b n/a (/usr/local/bin/node + 0x173af2b)
#9 0x0000563f6770a31a n/a (/usr/local/bin/node + 0x173a31a)
#10 0x0000563f6770ac01 n/a (/usr/local/bin/node + 0x173ac01)
#11 0x0000563f6770af2b n/a (/usr/local/bin/node + 0x173af2b)
#12 0x0000563f6770a31a n/a (/usr/local/bin/node + 0x173a31a)
#13 0x0000563f6770b47f n/a (/usr/local/bin/node + 0x173b47f)
#14 0x0000563f67847c21 n/a (/usr/local/bin/node + 0x1877c21)
#15 0x0000563f6789b651 n/a (/usr/local/bin/node + 0x18cb651)
#16 0x0000563f6789baec n/a (/usr/local/bin/node + 0x18cbaec)
#17 0x0000563f678ba06f n/a (/usr/local/bin/node + 0x18ea06f)
#18 0x00007f060fab83d7 n/a (/usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node + 0x4a83d7)
#19 0x00007f060fe451d3 n/a (/usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node + 0x8351d3)
#20 0x00007f060fe411cc n/a (/usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node + 0x8311cc)
ELF object binary architecture: AMD x86-64
Notes:
there is no /usr/local/bin/node file or link: ls: cannot access ‘/usr/local/bin/node’: No such file or directory
I did not see any other issues; the WebUI and SMB share access seem to work as before.
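In case it is useful for anyone debugging the same symptom: the musl-flavoured library names and the /usr/src/app paths in the trace suggest the crashing node binary lives inside a container image rather than on the host, which would explain why /usr/local/bin/node does not exist there. The stored dumps can also be inspected directly; the PID below is just the one from the message above:
# List the core dumps systemd-coredump has kept for node processes:
coredumpctl list node
# Show metadata for one dump (executable, command line and, on recent
# systemd versions, the control group, which hints at the container):
coredumpctl info 88242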
realAXL
December 22, 2025, 1:47pm
2
Note:
The process that later crashes is visible in ps -ef before the core dump:
root 50829 50806 99 14:45 ? 00:00:02 node dist/main
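To see which container a node PID like this belongs to while it is still running, one option is to read its cgroup and match the container ID against docker ps; the PID is taken from the line above, and the exact cgroup path layout can differ between Docker setups:
# For Docker containers the cgroup path usually contains the full container ID:
cat /proc/50829/cgroup
# Match that ID against the running containers:
docker ps --no-trunc --format '{{.ID}}  {{.Names}}  {{.Command}}'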
Does this hardware pass an overnight run of memtest86?
realAXL
December 23, 2025, 7:51am
4
I have never tested this, but the system ran for more than a year under load without issues, so I do not see a connection between this very specific node core dump and the hardware. There are in fact no other issues reported right now. So, is this more of a generic recommendation questioning the N100 hardware?
realAXL
December 23, 2025, 8:02am
5
Maybe this helps with the investigation: Could not parse number of program headers from core file: invalid `Elf' handle
I found this message in journalctl --since “1 min ago” just as another core dump hit. Attaching the stripped output (a live way to follow these messages is sketched after the log):
Dec 23 08:58:11 truenas systemd[1]: Started systemd-coredump@2076-724946-0.service - Process Core Dump (PID 724946/UID 0).
Dec 23 08:58:11 truenas systemd-coredump[724947]: Removed old coredump core.node.0.a4a740f1b46548e3814ba756dedc1010.719043.1766476435000000.zst.
Dec 23 08:58:12 truenas (sd-parse-elf)[724981]: Could not parse number of program headers from core file: invalid `Elf' handle
Dec 23 08:58:12 truenas systemd-coredump[724947]: [🡕] Process 724906 (node) of user 0 dumped core.
Module /usr/local/bin/node without build-id.
Module /usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node without build-id.
Module /usr/src/app/node_modules/.pnpm/bcrypt@5.1.0/node_modules/bcrypt/lib/binding/napi-v3/bcrypt_lib.node without build-id.
Module /usr/src/app/node_modules/.pnpm/argon2@0.30.3/node_modules/argon2/lib/binding/napi-v3/argon2.node without build-id.
Stack trace of thread 65:
#0 0x000055eee940a428 n/a (/usr/local/bin/node + 0x1863428)
#1 0x000055eee9436968 n/a (/usr/local/bin/node + 0x188f968)
[....]
#16 0x000055eee9472aec n/a (/usr/local/bin/node + 0x18cbaec)
#17 0x000055eee949106f n/a (/usr/local/bin/node + 0x18ea06f)
#18 0x00007f38a57293d7 n/a (/usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node + 0x4a83d7)
#19 0x00007f38a5ab61d3 n/a (/usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node + 0x8351d3)
#20 0x00007f38a5ab21cc n/a (/usr/src/app/node_modules/.pnpm/@prisma+client@4.16.2_prisma@4.16.2/node_modules/.prisma/client/libquery_engine-linux-musl-openssl-3.0.x.so.node + 0x8311cc)
ELF object binary architecture: AMD x86-64
Dec 23 08:58:12 truenas systemd[1]: systemd-coredump@2076-724946-0.service: Deactivated successfully.
Dec 23 08:58:12 truenas systemd[1]: Started systemd-coredump@2077-724986-0.service - Process Core Dump (PID 724986/UID 0).
Dec 23 08:58:12 truenas systemd-coredump[724987]: Removed old coredump core.node.0.a4a740f1b46548e3814ba756dedc1010.719023.1766476436000000.zst.
Dec 23 08:58:13 truenas systemd-coredump[724987]: [🡕] Process 724851 (node) of user 0 dumped core.
Module /usr/local/bin/node without build-id.
Stack trace of thread 19:
#0 0x00007fbc05c145c0 n/a (/lib/ld-musl-x86_64.so.1 + 0x4d5c0)
#1 0x000055c373905e8b n/a (/usr/local/bin/node + 0x9e5e8b)
#2 0x000055c373ae7088 n/a (/usr/local/bin/node + 0xbc7088)
[...]
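Instead of repeated --since queries, the coredump messages can also be followed live; both variants below are plain journalctl usage, nothing TrueNAS-specific:
# Follow new core-dump entries as they are logged:
journalctl -f -t systemd-coredump
# Equivalent, filtering on the logging process name:
journalctl -f _COMM=systemd-coredump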
It’s a recommendation because it takes so little of your active time to set up, runs by itself, and helps identify a serious fault that is notorious for causing all kinds of weirdness. Since the N100 lacks ECC, the system is less likely to warn you specifically that your RAM is faulty.
Comparatively, you could spend hours diligently going through logs, searching the web, talking to chat robots (for emotional support), and so on, and potentially get nowhere.
My view is that I would rather rule out RAM issues from the get-go.
realAXL
December 23, 2025, 5:13pm
7
Another observation from recent tests:
downloaded the configuration
reinstalled from a clean SCALE 25.10 ISO
no core dumps observed over 1 hour (one way to check this is sketched after the list)
imported the configuration
core dumps again
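As a side note, the “no core dumps” check in each step can be done with a single query instead of watching the shell (assuming coredumpctl is present, which the journal lines below suggest):
# Any dumps recorded in the last hour?
coredumpctl list --since "1 hour ago"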
So I guess there’s some weird config that causes node to crash:
Dec 23 08:58:11 truenas systemd[1]: Started systemd-coredump@2076-724946-0.service - Process Core Dump (PID 724946/UID 0).
Dec 23 08:58:11 truenas systemd-coredump[724947]: Removed old coredump core.node.0.a4a740f1b46548e3814ba756dedc1010.719043.1766476435000000.zst.
Dec 23 08:58:12 truenas (sd-parse-elf)[724981]: Could not parse number of program headers from core file: invalid `Elf' handle
Dec 23 08:58:12 truenas systemd-coredump[724947]: [🡕] Process 724906 (node) of user 0 dumped core.
realAXL
December 24, 2025, 2:06pm
8
This issue seems to be solved; no more core dumps have been observed since I did the following:
A random check with ps -ef and docker ps showed that one of my Dockge stacks seemed to be running as a zombie process, starting node over and over again.
I stopped the container: hoppscotch-backend
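For anyone running into the same thing, a quick way to spot a container that keeps restarting and to stop it could look like the following; the container name is the one from above, and the restart-count check via docker inspect is just one possible indicator:
# List all containers with their status; restart loops stand out in the Status column:
docker ps -a --format 'table {{.Names}}\t{{.Status}}\t{{.RunningFor}}'
# How often has this specific container been restarted?
docker inspect -f '{{.RestartCount}}' hoppscotch-backend
# Stop it (the stack may also need to be stopped in Dockge itself):
docker stop hoppscotch-backend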
Hopefully this solved the issue. Thanks for the support.