Continuous writing activity on the boot disk every second

I recently did a fresh install of TrueNAS SCALE and noticed a problem: there is continuous writing activity on the boot disk, every second. I've already set the "Storage Settings" in System Settings → Advanced to use a zvol instead of the boot disk, but the issue persists.

Everything worked fine with TrueNAS CORE. Does anyone have any suggestions on how to fix this? Any help would be greatly appreciated!
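For reference, here is a rough way to quantify those writes. This is a sketch, not TrueNAS tooling: SCALE is Linux-based, so `/proc/diskstats` exposes per-device I/O counters, and the tenth field on each line is the cumulative count of sectors written (sectors in this file are always 512 bytes).

```python
def write_rate_bytes_per_sec(snap_before: str, snap_after: str,
                             device: str, interval_sec: float) -> float:
    """Average write rate for `device` between two /proc/diskstats
    snapshots taken `interval_sec` seconds apart."""
    def sectors_written(snapshot: str) -> int:
        for line in snapshot.splitlines():
            fields = line.split()
            # /proc/diskstats layout: major minor name reads ...;
            # fields[9] is the cumulative count of sectors written.
            if len(fields) > 9 and fields[2] == device:
                return int(fields[9])
        raise ValueError(f"device {device!r} not found in snapshot")

    delta_sectors = sectors_written(snap_after) - sectors_written(snap_before)
    return delta_sectors * 512 / interval_sec
```

On a live box you could read `/proc/diskstats` twice with a short sleep in between, or simply watch `zpool iostat boot-pool 1` from a shell (`boot-pool` is the default SCALE boot pool name).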

“Everything worked fine under CORE”.

Well, a few extra small writes don't mean "everything is broken", right?

This could be logs being written, but it could also be the SMART disk checks.

I'm guessing you have not configured a timeout for the disks to go to sleep (needed so the SMART checks do NOT run while the disk is asleep).

Did you change the default setting of the SMART service?
(from Never to Standby).

It would be good to know your version and what kind of services/workload you have configured. In general, SCALE does much more logging/auditing of things, so you can expect a bit more disk noise on the boot device; not necessarily a bug. But we need those other details so we can see whether these writes are beyond the norm.

Yes, correct, of course, it doesn’t mean that “everything is broken”.
I haven’t configured a timeout for the disks to go to sleep.
I also haven’t changed the default setting of the SMART service.

I am using Dragonfish-
The system is a fresh install (installed 3 times), no plugins enabled, nothing.
Only the SMB service is enabled.

Yep, that should do it for any disks other than the boot or apps (ix-applications directory) disks.

The built-in logging/auditing may still keep the boot disk from going to sleep, and because of it there will always be small writes to it.

(The ix-applications directory should also show some activity, but only sporadically, or when checking/modifying Apps.)

That's why I mentioned that on the CORE version it's fine and I don't see any activity.
But here it's not occasional activity; it's constant writing.
Is there any way to fix this?
I’m also curious to know if this problem is observed by everyone or only in certain setups.

My install is similar.

That comes out to just over 14GiB of writes per day if the mean stays the same.
It's a home-lab install with SMB up, as well as Jellyfin and a couple of other apps running (no heavy apps like the *arrs or similar), and no VMs at this point in time.

I can't say I'm worried my boot drive is going to die because of it, but it does feel excessive.
Is that data useful? I think it’s reports data, but that’s really just speculation.
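For what it's worth, the arithmetic on that figure checks out. A quick sketch; the 170 KiB/s input is my own back-calculation of what "just over 14GiB/day" averages out to, not a number taken from the reporting UI:

```python
# Back-of-the-envelope conversion from a sustained write rate to
# daily and yearly totals. 170 KiB/s is an assumed average rate.
KIB = 1024
GIB = 1024 ** 3
SECONDS_PER_DAY = 86_400

def daily_gib(rate_kib_per_sec: float) -> float:
    """GiB written per day at a constant rate."""
    return rate_kib_per_sec * KIB * SECONDS_PER_DAY / GIB

def yearly_tb(rate_kib_per_sec: float) -> float:
    """Decimal TB per year, the unit SSD endurance (TBW) is quoted in."""
    return rate_kib_per_sec * KIB * SECONDS_PER_DAY * 365 / 1e12

print(f"{daily_gib(170):.1f} GiB/day")   # ~14.0
print(f"{yearly_tb(170):.1f} TB/year")   # ~5.5
```

That yearly total is also where the "5TB per year" figure below comes from.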


5TB per year for nothing


It's not for nothing. If you ever have trouble and need to submit a case, they'll ask you for logs, and that's when those excessive writes will come in handy.

It remains to be seen how much logging will be done in Electric Eel with its eventual Docker-based apps.

14GB a day does seem really excessive, even for the logging we typically do. We’d need more context or a debug from your system to figure out if something is writing a lot more than it should be in your case.


I understand what the logs are for.
I'm worried because at the moment I don't need them, and I don't know how to make them smaller to prevent unnecessary wear on the disk.
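As a general Linux note, not TrueNAS-specific guidance (SCALE manages its own configuration, and manual edits may be reverted on update): the systemd journal's current footprint can be checked with `journalctl --disk-usage`, and its size capped in journald.conf. A sketch of the generic knobs:

```ini
# /etc/systemd/journald.conf -- standard systemd options, shown for
# illustration; an appliance like SCALE may overwrite manual edits on upgrade.
[Journal]
# Cap the persistent journal's total size on disk.
SystemMaxUse=100M
# Drop messages from any service that logs more than 1000 entries
# in a 30-second window.
RateLimitIntervalSec=30s
RateLimitBurst=1000
```

Whether SCALE's middleware honours or resets these settings is something to verify before relying on it.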

Same in my case: over 10GB on a fresh system.

Tickets + Debugs please, and we’ll investigate.


Thank you!

To be fair and transparent.

Seeing that many, if not most, people have this condition even on a new install, your QA team should be able to reproduce it right away.

“No ticket, no service”, I get it, but this is not a Helpdesk.
This is a professional QA Team.

We are already pretty sure we know the culprit in this case. But I will always tend to err on the side of more data rather than less. Often we find one issue easily, but with additional data we are more confident about whether a fix will work universally, or whether there are additional factors that need accounting for in a proposed solution.

The irony is strong 🙂


My writes average 500 KiB/second over the course of a day, assuming you believe the reporting system. So, if I convert right, that's about 44GB/day. I have 4 VMs and 19 apps. My syslog level is set to info, though; not sure if that's the default or not.

Just a follow-up. We’re still looking into it, but so far the evidence seems to be pointing at a reporting issue specifically. Actual writes going to boot we see on our systems, as well as the debug from @ars is really low, like in the XX MB day, not GB’s a day. That said we’ll keep looking to see if there’s anything else major to fix here. Keep an eye on that ticket and we’ll update as we have a proper resolution, but for now nothing to really panic about.