Asking because an rsync compare of two pools, one with dedup enabled, killed my system by running out of swap.
There’s the general issue that dedup is a resource hog… Rsync would really, really require the whole DDT in RAM. How much RAM and storage do you have?
I have 32 GB of RAM. No VMs running.
Quadruple the amount of RAM, and you might have enough to begin to think about dedup.
Yea, we would recommend avoiding the old dedup wherever possible. It has a not-so-good reputation for a lot of reasons. We have a fast-dedup project in the works with OpenZFS that is expected to land in the 24.10 version of TrueNAS SCALE this fall. That version of dedup is expected to be much safer to run on more memory-constrained systems, will evict non-dedup-able blocks, etc. It's still actively being worked on upstream, but we expect things to merge and land in time for OpenZFS 2.3.
More information here:
@elkmaster Can you run the command zpool status -D
from a shell? This will generate a histogram of your deduplication tables and show us just how much space is actually being saved (and at what cost to your memory).
dedup: DDT entries 93359872, size 820B on disk, 182B in core
bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    86.2M   10.4T   10.4T   10.4T    86.2M   10.4T   10.4T   10.4T
     2    2.66M    319G    317G    317G    5.47M    654G    650G    650G
     4     121K   11.0G   10.9G   11.0G     583K   53.0G   52.5G   52.9G
     8    5.75K    317M    302M    307M    60.6K   3.22G   3.06G   3.12G
    16      575   16.8M   14.9M   15.9M    12.2K    360M    318M    340M
    32    1.14K   52.0M   48.6M   49.4M    48.0K   2.03G   1.89G   1.92G
    64      564   65.8M   65.8M   65.8M    56.7K   6.71G   6.70G   6.71G
   128       28   1.86M   1.84M   1.88M    4.71K    326M    321M    329M
   256       12   1.02M    758K    773K    3.80K    309M    222M    227M
   512        2    144K    132K    134K    1.69K    118M    108M    109M
    2K        1     16K     16K   17.4K    3.25K   51.9M   51.9M   56.6M
    4K        1     16K     16K   17.4K    4.53K   72.5M   72.5M   79.0M
   64K        1    128K    128K    128K     112K   13.9G   13.9G   13.9G
 Total    89.0M   10.8T   10.7T   10.7T    92.6M   11.2T   11.1T   11.1T
So pulling this apart a bit:
dedup: DDT entries 93359872, size 820B on disk, 182B in core
This line shows the footprint of your deduplication tables. Multiplying the number of entries by the in-core (memory) size per entry, we get:
16991496704 bytes ≈ 15.8 GiB of RAM
So you’re losing about half of your RAM just to index the deduplication tables.
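If you want to check that arithmetic yourself, here's a minimal Python sketch. It just plugs in the figures quoted from the dedup: line above plus the 32 GB of RAM reported earlier in the thread; nothing ZFS-specific is involved.

# Quick sanity check of the DDT memory math, using the figures quoted above.
ddt_entries = 93_359_872           # "DDT entries 93359872"
in_core_bytes_per_entry = 182      # "182B in core"
system_ram_gib = 32                # RAM reported earlier in the thread

ddt_core_bytes = ddt_entries * in_core_bytes_per_entry
ddt_core_gib = ddt_core_bytes / 2**30

print(f"DDT in core: {ddt_core_bytes} bytes ~= {ddt_core_gib:.1f} GiB")
print(f"Share of {system_ram_gib} GiB RAM: {ddt_core_gib / system_ram_gib:.0%}")
# DDT in core: 16991496704 bytes ~= 15.8 GiB
# Share of 32 GiB RAM: 49%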
The space savings are shown in the bottom row of the table:
bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
 Total    89.0M   10.8T   10.7T   10.7T    92.6M   11.2T   11.1T   11.1T
You're saving 11.1T - 10.7T = ~400G of space, or a dedup ratio of about 1.04.
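The same kind of back-of-the-envelope check in Python, taking only the two DSIZE totals from the table above as inputs:

# Rough savings/ratio check from the "Total" row (DSIZE columns).
allocated_tib = 10.7    # Total row, allocated DSIZE
referenced_tib = 11.1   # Total row, referenced DSIZE

saved_tib = referenced_tib - allocated_tib
dedup_ratio = referenced_tib / allocated_tib

print(f"Space saved: ~{saved_tib:.1f}T")     # ~0.4T, i.e. roughly 400G
print(f"Dedup ratio: {dedup_ratio:.2f}x")    # 1.04x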
Put simply, dedup probably isn’t worth it here.
And that probably applies to most dedup setups on a home NAS.
Still, that leaves a problem here: the ~16 GB DDT should fit in 32 GB of RAM and leave enough for the system to operate ("no VM"). Why is it running out of swap?
So what can I do to provide more (debug) information for this?