Checksum Errors On 1st Scrub

There is no bclone madness.

Data blocks with multiple pointers do not consume extra space. This is true for block-cloning and deduplication.

If you “copy” a 4TB file to another folder or another dataset in the same pool (without encryption), the copy operation will complete very quickly and the pool will not consume an extra 4TB of space.

The zpool command is aware of this, so it will not add 4TB to the “used space” of the pool’s capacity. The zfs command and the TrueNAS GUI are not aware of block-cloning, so they will give you confusing and inaccurate numbers.
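As a sketch of how to see this yourself (using the pool name `largie` from this thread; the `bclone*` properties require OpenZFS 2.2 or later):

```shell
# Pool-level accounting is block-clone aware:
zpool get bcloneused,bclonesaved,bcloneratio largie

# The same properties as zpool list columns:
zpool list -o name,size,alloc,bcloneused,bclonesaved,bcloneratio largie

# Dataset-level accounting is not: USED and LUSED count every reference,
# so cloned data appears to consume space it doesn't actually occupy.
zfs list -o name,used,logicalused -r largie
```

If `bclonesaved` is large while `zpool list` shows plenty of free space, the inflated numbers in `zfs list` and the GUI are the logical size of cloned data, not real on-disk usage.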

Not necessarily. I can probably come up with a few ways to get that high for some workloads, but it’s not something we (as in the community in general) typically see for homelab-type users. Like one example talked about on the podcast: are you doing CI/CD?

The QVL was only showing two sticks of RAM for the 96GB configuration. I don’t know if that makes a difference.

The primary app interfacing with this pool is an arr stack. The containers live on a separate NVMe pool, download to a dedicated scratch disk pool, and automatically move to the main pool in question.

I also have a Synology pushing backups to this pool via rsync, but it’s only 400GB and a different dataset.

  • smallie - NVMe Docker pool
  • crappie - SATA SSD scratch pool
  • largie - main media pool

The source pool shows a bclone ratio of 1x

Is it stored in a different dataset, but still on largie?

Yes, it looks like this:

largie

  • files (synology backups)
  • nas (media files)

But no files from the files dataset interact with the nas dataset. Also, the files dataset shows normal size information, so it seems like just the nas dataset is affected. The files dataset is also encrypted.

Well, I have almost certainly discovered the issue. My largie pool contains a downloads folder that is about 52TB. If I run the command someone had me run earlier, the BCLONE_SAVED value is the same size.

The arr stack uses hardlinks to move files from a download directory to the respective media folder. It appears the TrueNAS GUI is counting these files twice.
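For anyone wanting to check this themselves, here is a minimal sketch (hypothetical paths) showing that a hardlink is just a second name for the same inode, so it consumes no extra space and doesn't involve block-cloning at all:

```shell
# Hypothetical demo paths; works on any POSIX filesystem.
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/media"
printf 'movie data' > "$tmp/downloads/movie.mkv"

# Hardlink, not a copy: both names point at the same inode.
ln "$tmp/downloads/movie.mkv" "$tmp/media/movie.mkv"

ls -li "$tmp/downloads" "$tmp/media"   # both names show the same inode number
find "$tmp" -type f -links +1          # lists every name with more than 1 link
stat -c '%h' "$tmp/media/movie.mkv"    # prints the link count: 2

rm -r "$tmp"
```

Because both names reference the same data blocks, a correct space accounting should count the file once; block-cloned “copies”, by contrast, are distinct files that merely share data blocks on disk, which is why they show up under BCLONE_SAVED.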

Does the GUI not account for hardlinks?
This is in no way a new workflow; why would it just now start reporting it?

Do the source and destination folders exist on the same dataset? If not, then it’s very possible that Radarr is falling back to regular “copies”, which will invoke block-cloning by default.

They are on the same dataset. I plan to delete the links in the downloads directory and see if that changes the bclone and GUI stats.

The issue is fixed!

While looking through the hardlinks, I noticed one movie directory had hundreds, if not thousands, of entries of the same file, all showing 23G in size. It was only downloaded once, so I have no idea how this happened.

Maybe it was something with the unpacking process, or it circles back to the RAM issues. This was downloaded a month ago, while the RAM was unstable but before the scrub.

I deleted the whole directory and the bclone ratio is now 1x and the GUI shows the correct usage.

I want to thank everyone here for all the help. I would also like to apologize to my TrueNAS system. I blamed it for issues that were all entirely my fault.

I was going to hire you to shoot a five minute infomercial on the benefits of block-cloning and its space saving magic… but you just had to throw it all away, didn’t you? :-1:

You could still hire me as an example of how to improperly configure TrueNAS :sweat_smile:

The issue happened again and I discovered the cause.

Permissions!!

Radarr/Sonarr are running under a specific user context. In the case of UPGRADES (not new content), this user only had read access to the existing directory. I must have botched permissions when I moved files from Synology and didn’t realize it was an issue since read access was in place - it’s only a problem when the apps try to recycle/delete existing items.

So the app could read and “move” the file to the recycling bin, but the file never actually gets deleted from the original location. An error is thrown and the new, upgraded file can’t be imported. I assume the import is retried on some schedule, which creates another copy of the file in the recycling bin each time.
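The failure mode above can be sketched as a toy simulation (hypothetical paths, not the actual arr code): the copy half of each “move to recycle bin” succeeds, the delete half is assumed to fail for lack of write access, and every retry drops another copy into the bin.

```shell
# Toy simulation with hypothetical paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/media" "$tmp/recycle"
printf 'old release' > "$tmp/media/movie.mkv"

for attempt in 1 2 3; do
    # The copy half of the "move" succeeds...
    cp "$tmp/media/movie.mkv" "$tmp/recycle/movie.$attempt.mkv"
    # ...but the delete half fails (simulated here by never deleting),
    # so the source file stays put and the import is retried later.
done

ls "$tmp/recycle" | wc -l   # 3 copies piling up from one file
rm -r "$tmp"
```

On a ZFS pool with block-cloning, each of those copies shares its data blocks with the original, which is exactly how BCLONE_SAVED balloons while the GUI’s logical usage climbs with every retry.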

I added write permissions to the directory and the file was able to move.

What a journey.
