Yes - the user did do the “marked my own answer as the solution” thing, however …
- I personally don’t have an issue with it if the OP genuinely comes up with the answer rather than another community member.
- In this case the OP did say thank you - presumably to unnamed community members rather than to himself.
Ironically, though, my opinion is that the OP’s solution was NOT an actual solution at all, because there was no technical problem to be solved - in which case the solution chosen was the wrong one.
Let’s wait and see whether the OP comes back and says he rebuilt the pool, reloaded the data, and still has the same problem of only 2.7TiB of available space.
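If he does, a sketch of the two commands that would distinguish raw from usable capacity (the pool name `NAS` is taken from this thread; the `command -v` guard is only there so the snippet is a harmless no-op on a machine without ZFS). `zpool list` reports raw capacity before RAIDZ parity and ZFS’s slop-space reservation (roughly 1/32 of the pool by default), while `zfs list` reports what is actually writable, so the two figures will never match exactly:

```shell
# Sketch: compare raw vs usable capacity on the pool.
# "NAS" is the pool name from this thread; adjust as needed.
show_capacity() {
    pool="$1"
    if command -v zpool >/dev/null 2>&1; then
        zpool list -v "$pool"      # raw capacity, per vdev and per disk
        zfs list -o space "$pool"  # usable space, broken down by consumer
    else
        echo "zfs-not-available"   # not a TrueNAS/ZFS host
    fi
}
show_capacity NAS
```

A big gap between `zpool list` FREE and `zfs list` AVAIL on a RAIDZ pool is expected; a gap inside the `zfs list -o space` breakdown (e.g. a large USEDSNAP) is not.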
Got it, but it still feels strange to me that the available storage is so much lower than it was (in the child datasets) when I first installed the TrueNAS system. So, thank you for replying and providing so much information in answer to my questions.
So what do you get when you run /sbin/zfs list now?
```
truenas_admin@truenas[~]$ /sbin/zfs list
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
NAS                                                           164G  3.35T    96K  /mnt/NAS
NAS/NAS_SMB                                                   162G  3.35T   162G  /mnt/NAS/NAS_SMB
NAS/ix-apps                                                  2.48G  3.35T   124K  /mnt/.ix-apps
NAS/ix-apps/app_configs                                       920K  3.35T   920K  /mnt/.ix-apps/app_configs
NAS/ix-apps/app_mounts                                       51.6M  3.35T    96K  /mnt/.ix-apps/app_mounts
NAS/ix-apps/app_mounts/immich                                51.2M  3.35T   104K  /mnt/.ix-apps/app_mounts/immich
NAS/ix-apps/app_mounts/immich/data                            120K  3.35T   120K  /mnt/.ix-apps/app_mounts/immich/data
NAS/ix-apps/app_mounts/immich/postgres_data                  51.0M  3.35T  51.0M  /mnt/.ix-apps/app_mounts/immich/postgres_data
NAS/ix-apps/app_mounts/tailscale                              232K  3.35T    96K  /mnt/.ix-apps/app_mounts/tailscale
NAS/ix-apps/app_mounts/tailscale/state                        136K  3.35T   136K  /mnt/.ix-apps/app_mounts/tailscale/state
NAS/ix-apps/docker                                           2.26G  3.35T  2.26G  /mnt/.ix-apps/docker
NAS/ix-apps/truenas_catalog                                   175M  3.35T   175M  /mnt/.ix-apps/truenas_catalog
boot-pool                                                    4.19G  88.3G    24K  none
boot-pool/.system                                            1.60G  88.3G  1.55G  legacy
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941     24K  88.3G    24K  legacy
boot-pool/.system/configs-b17eb1df7fd94a8281208d511a311fb0     24K  88.3G    24K  legacy
boot-pool/.system/cores                                        24K  1024M    24K  legacy
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941   55.0M  88.3G  55.0M  legacy
boot-pool/.system/netdata-b17eb1df7fd94a8281208d511a311fb0   1.03M  88.3G  1.03M  legacy
boot-pool/.system/nfs                                          27K  88.3G    27K  legacy
boot-pool/.system/samba4                                       78K  88.3G    78K  legacy
boot-pool/ROOT                                               2.49G  88.3G    24K  none
boot-pool/ROOT/25.04.1-2                                     2.49G  88.3G   173M  legacy
boot-pool/ROOT/25.04.1-2/audit                               3.65M  88.3G  3.65M  /audit
boot-pool/ROOT/25.04.1-2/conf                                6.70M  88.3G  6.70M  /conf
boot-pool/ROOT/25.04.1-2/data                                7.49M  88.3G  7.49M  /data
boot-pool/ROOT/25.04.1-2/etc                                 3.47M  88.3G  3.06M  /etc
boot-pool/ROOT/25.04.1-2/home                                31.5K  88.3G  31.5K  /home
boot-pool/ROOT/25.04.1-2/mnt                                   25K  88.3G    25K  /mnt
boot-pool/ROOT/25.04.1-2/opt                                   24K  88.3G    24K  /opt
boot-pool/ROOT/25.04.1-2/root                                 128K  88.3G   128K  /root
boot-pool/ROOT/25.04.1-2/usr                                 2.27G  88.3G  2.27G  /usr
boot-pool/ROOT/25.04.1-2/var                                 37.2M  88.3G  3.79M  /var
boot-pool/ROOT/25.04.1-2/var/ca-certificates                   24K  88.3G    24K  /var/local/ca-certificates
boot-pool/ROOT/25.04.1-2/var/lib                             15.3M  88.3G  15.2M  /var/lib
boot-pool/ROOT/25.04.1-2/var/lib/incus                         24K  88.3G    24K  /var/lib/incus
boot-pool/ROOT/25.04.1-2/var/log                             17.6M  88.3G  5.00M  /var/log
boot-pool/ROOT/25.04.1-2/var/log/journal                     12.6M  88.3G  12.6M  /var/log/journal
boot-pool/grub                                               1.91M  88.3G  1.91M  legacy
```
Hmm - weird!!
Snapshots or a checkpoint are the only things I can think of that would explain why 162GB of data should use 782GB of storage.
The same goes for the ix-apps dataset, which has shrunk from 40.3GB to 2.48GB.
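Both suspects can be checked directly. A minimal sketch, assuming the pool is named `NAS` as in the listing above (the guard just keeps the snippet harmless on a non-ZFS machine):

```shell
# Sketch: list anything that could be pinning deleted data on the pool.
check_space_hogs() {
    pool="$1"
    if command -v zfs >/dev/null 2>&1; then
        # every snapshot on the pool, with the space each one holds
        zfs list -t snapshot -o name,used,referenced -r "$pool"
        # a pool checkpoint also reserves space; "-" means none exists
        zpool get checkpoint "$pool"
    else
        echo "zfs-not-available"   # not a TrueNAS/ZFS host
    fi
}
check_space_hogs NAS
```

If the snapshot list is empty and the checkpoint property shows `-`, neither one is holding the missing space.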
I cleaned up the snapshots before.