Pool size discrepancies between hosts

To start: we are running two iXsystems M40 arrays, each in a separate location, on TrueNAS-13.0-STABLE. Neither array has an active support contract.

I have a pool named NoRep on each side. It is not replicated between hosts; it exists so that servers at either location can natively write their backups to two separate locations.

The issue I’m running into is a discrepancy in space usage between the shares: one shows 89TB used and the other shows 55TB. When I audit the actual directories, I only see a delta of about 4TB.
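(For anyone repeating the audit: a file-level comparison along the lines below, with the mountpoint path being an assumption, only counts live file data and never sees space held exclusively by snapshots.)

# File-level usage only; du walks the mounted filesystem, so blocks
# referenced only by snapshots are invisible to it.
du -sh /mnt/NoRep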

As the command output below shows, the discrepancy appears to be within the snapshots.

root@XXXXXXXX0:~ # zfs list -o space
NAME    AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
/NoRep  57.7T  89.2T  61.0T     28.2T   0B             0B

root@XXXXXXXX1:~ # zfs list -o space
NAME   AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
NoRep  93.9T  55.0T  23.1T     31.9T   0B             0B

When digging further into the snapshots themselves, I see a size discrepancy, but not one of 34TB:

root@XXXXXXXX0:~ # zfs list -t snapshot
NAME                                    USED   AVAIL  REFER  MOUNTPOINT
tank1/NoRepDbak1@auto-20241007.0800-2w  4.65T  -      28.3T  -
tank1/NoRepDbak1@auto-20241008.0800-2w  45.5M  -      28.1T  -
tank1/NoRepDbak1@auto-20241009.0800-2w  45.2M  -      28.2T  -
tank1/NoRepDbak1@auto-20241010.0800-2w  44.4M  -      28.3T  -
tank1/NoRepDbak1@auto-20241011.0800-2w  44.1M  -      28.0T  -
tank1/NoRepDbak1@auto-20241012.0800-2w  44.6M  -      28.1T  -
tank1/NoRepDbak1@auto-20241013.0800-2w  44.1M  -      28.0T  -
tank1/NoRepDbak1@auto-20241014.0800-2w  45.2M  -      27.9T  -
tank1/NoRepDbak1@auto-20241015.0800-2w  46.2M  -      27.7T  -
tank1/NoRepDbak1@auto-20241016.0800-2w  46.0M  -      27.9T  -
tank1/NoRepDbak1@auto-20241017.0800-2w  46.7M  -      28.2T  -
tank1/NoRepDbak1@auto-20241018.0800-2w  45.9M  -      28.1T  -
tank1/NoRepDbak1@auto-20241019.0800-2w  45.6M  -      28.1T  -
tank1/NoRepDbak1@auto-20241020.0800-2w  45.1M  -      28.1T  -
tank1/NoRepDbak1@auto-20241021.0800-2w  30.0M  -      28.5T  -

root@XXXXXXXX1:~ # zfs list -t snapshot
NAME                                    USED   AVAIL  REFER  MOUNTPOINT
tank1/NoRepDbak1@auto-20241006.1200-2w  1.58T  -      31.9T  -
tank1/NoRepDbak1@auto-20241007.1200-2w  24.4M  -      31.8T  -
tank1/NoRepDbak1@auto-20241008.1200-2w  26.3M  -      31.9T  -
tank1/NoRepDbak1@auto-20241009.1200-2w  27.0M  -      31.7T  -
tank1/NoRepDbak1@auto-20241010.1200-2w  27.3M  -      31.7T  -
tank1/NoRepDbak1@auto-20241011.1200-2w  26.5M  -      31.8T  -
tank1/NoRepDbak1@auto-20241012.1200-2w  25.4M  -      31.8T  -
tank1/NoRepDbak1@auto-20241013.1200-2w  40.7M  -      31.7T  -
tank1/NoRepDbak1@auto-20241014.1200-2w  23.9M  -      31.6T  -
tank1/NoRepDbak1@auto-20241015.1200-2w  23.3M  -      31.6T  -
tank1/NoRepDbak1@auto-20241016.1200-2w  23.8M  -      31.8T  -
tank1/NoRepDbak1@auto-20241017.1200-2w  25.7M  -      31.8T  -
tank1/NoRepDbak1@auto-20241018.1200-2w  27.2M  -      31.9T  -
tank1/NoRepDbak1@auto-20241019.1200-2w  27.5M  -      31.8T  -
tank1/NoRepDbak1@auto-20241020.1200-2w  26.0M  -      31.9T  -

Is there another area I should or could look at to determine what is using the extra 34TB of space?

The zfs list command you used literally showed snapshot usage of 61T for one and 23T for the other. There’s your 38T difference (round numbers). USEDSNAP does not equal the sum of zfs list -t snapshot: a snapshot’s USED only counts blocks unique to that one snapshot, so blocks held by two or more snapshots at once are charged to USEDSNAP but to no individual snapshot’s USED.
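If you want to see the real combined figure, one option (not from this thread; the snapshot names below are simply the first and last from your listing) is a dry-run destroy across the whole range, which makes ZFS report the space it would actually reclaim, shared blocks included:

# -n = dry run (nothing is destroyed), -v = verbose; the output ends
# with a "would reclaim <size>" total for the entire snapshot range.
zfs destroy -nv tank1/NoRepDbak1@auto-20241007.0800-2w%auto-20241021.0800-2w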

Here’s a script I used to run (no idea if it still works) if you want to see the actual sizes of snapshots:
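(The script itself didn’t make it into this post. As a rough sketch of the same idea, with the dataset name assumed from the listings above, the per-snapshot written property is the useful one, since it shows how much data landed between one snapshot and the next:)

#!/bin/sh
# Hypothetical sketch, not the original script.
# used    = blocks unique to that one snapshot
# written = data written between the previous snapshot and this one
DATASET="${1:-tank1/NoRepDbak1}"   # assumed dataset name

zfs list -r -t snapshot -o name,creation,used,written "$DATASET"

# Total space held by the entire snapshot chain; this is the number
# that zfs list -o space reports as USEDSNAP.
zfs get usedbysnapshots "$DATASET"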