Endless checksum errors between multiple drives

Big news: the problem is on the drives.
I should’ve thought of this sooner: just connect the drives, including the OS drive, to my main rig.
It failed to connect to the network, but dropping into a shell and running sudo zpool status shows it right there: the same two drives piling up checksum errors.
Literally everything except the drives has been swapped, and the issue persists. Turns out the first thing I ruled out was the answer.

So, what gives?
If I had to guess, the initial insufficient power probably caused something.
One thing I noticed this time: the errors increase by about 4 every 2 seconds, and they keep going as long as the system is running. I ran a scrub and woke up to 50k errors.
That probably means the same thing is causing errors over and over, not something spreading. What could it be?
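A quick way to confirm the counters really are ticking at that rate is to grab the per-device CKSUM column from zpool status a couple of seconds apart and compare. A sketch (the awk filter just keeps rows shaped like device lines; "tank" stands in for the actual pool name):

```shell
# Sketch: extract per-device checksum counts from `zpool status` output
# so two snapshots taken a few seconds apart can be diffed.
cksum_counts() {
  # Keep rows shaped like "NAME STATE READ WRITE CKSUM" where the CKSUM
  # column is numeric; the header is dropped because "CKSUM" isn't a number.
  awk 'NF >= 5 && $5 ~ /^[0-9]+$/ { print $1, $5 }'
}

# On the live system ("tank" is a placeholder for the real pool name):
#   zpool status tank | cksum_counts
#   sleep 2
#   zpool status tank | cksum_counts
```

If the deltas match the observed ~4 per 2 seconds on just those two drives, that points at a steady, repeating trigger rather than spreading corruption.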
Looking back at the zpool status dump:

errors: Permanent errors have been detected in the following files:

        /var/db/system/netdata/context-meta.db-wal
        /var/db/system/netdata/netdata-meta.db
        /var/db/system/netdata/netdata-meta.db-wal
        /var/db/system/netdata/dbengine/journalfile-1-0000000057.njf
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.69260.1762103223000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.71272.1762103613000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.74868.1762104259000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.67888.1762102978000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.69885.1762103368000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.71963.1762103742000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.64957.1762102445000000.zst
        /var/db/system/cores/core.netdata.999.71c07f17d3fd46deb40c2629586f2aa5.73343.1762104014000000.zst

Nope, no idea what that means, but judging from the directories, my data is probably fine.
So…the post had been up for a day at this point and no one pointed this out…I’ll just throw it at Gemini.

The files listed are primarily related to Netdata, which is the system monitoring and performance analytics tool used by TrueNAS, along with several core dump files (crash reports) for the Netdata process.
That…makes a ton of sense. It perfectly explains why system monitoring breaks whenever the pool is loaded: every time it tries to update system status, errors are thrown, so they pile up consistently.
Gemini then suggested service netdata stop, which does stop the errors from piling up.
It then claimed the data is non-critical and safe to remove, and that I can run:

# Remove all netdata files (this will clear monitoring history)
rm -rf /var/db/system/netdata
# Remove the corrupted core dumps
rm -rf /var/db/system/cores

…I’m not trusting that just yet.

Is this safe to run? How likely is this going to solve my problem? What’s the risk here?
Thanks for reading.