Error counting eligible snapshots. cannot open

ElectricEel-24.10.1

I set up a test system for SCALE. This replication task was set up a week or two ago and was fine. It still works fine.

So why, when I look into its settings, does it say this at the bottom in red?

Error counting eligible snapshots. cannot open 'main/dataset': dataset does not exist

It never used to; it didn’t say that when I set it up.

The dataset definitely exists: snapshots of it are being taken fine, and they are being sent over and replicated on the other system fine.

I don’t get it.

truenas_admin@TruenasScale ~ $ sudo zfs list main/dataset
[sudo] password for truenas_admin: 
NAME           USED  AVAIL  REFER  MOUNTPOINT
main/dataset   432K   844G   104K  /mnt/main/dataset
truenas_admin@TruenasScale ~ $ sudo zfs list -t snapshot -r main/dataset
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
main/dataset@auto-2024-12-10_13-16      64K      -    96K  -
main/dataset@manual-2024-12-10_16-11     0B      -    96K  -
main/dataset@auto-2024-12-10_17-00       0B      -    96K  -
main/dataset@auto-2024-12-10_18-00       0B      -    96K  -
main/dataset@auto-2024-12-10_19-00      64K      -    96K  -
main/dataset@auto-2024-12-10_20-00      64K      -    96K  -
main/dataset@manual-2024-12-10_23-50     0B      -    96K  -
main/dataset@auto-2024-12-11_00-00       0B      -    96K  -
main/dataset@auto-2024-12-11_01-00       0B      -    96K  -
main/dataset@auto-2024-12-11_02-00       0B      -    96K  -
main/dataset@auto-2024-12-11_03-00       0B      -    96K  -
main/dataset@auto-2024-12-11_04-00       0B      -    96K  -
main/dataset@auto-2024-12-11_05-00       0B      -    96K  -
main/dataset@auto-2024-12-11_06-00       0B      -    96K  -
main/dataset@auto-2024-12-11_07-00       0B      -    96K  -
main/dataset@auto-2024-12-11_08-00       0B      -    96K  -
main/dataset@auto-2024-12-11_09-00       0B      -    96K  -
main/dataset@auto-2024-12-11_10-00       0B      -    96K  -
main/dataset@auto-2024-12-11_11-00       0B      -    96K  -
main/dataset@auto-2024-12-11_12-00       0B      -    96K  -
main/dataset@auto-2024-12-11_13-00       0B      -    96K  -
main/dataset@auto-2024-12-11_14-00       0B      -    96K  -
main/dataset@auto-2024-12-11_15-00       0B      -    96K  -
main/dataset@auto-2024-12-11_16-00       0B      -    96K  -
main/dataset@auto-2024-12-11_17-00       0B      -    96K  -
main/dataset@auto-2024-12-11_18-00       0B      -    96K  -
main/dataset@auto-2024-12-11_19-00       0B      -    96K  -
main/dataset@auto-2024-12-11_20-00       0B      -   104K  -
main/dataset@auto-2024-12-11_21-00       0B      -   104K  -
main/dataset@auto-2024-12-11_22-00       0B      -   104K  -
main/dataset@auto-2024-12-11_23-00       0B      -   104K  -
main/dataset@auto-2024-12-12_00-00       0B      -   104K  -
main/dataset@auto-2024-12-12_01-00       0B      -   104K  -
main/dataset@auto-2024-12-12_02-00       0B      -   104K  -
main/dataset@auto-2024-12-12_03-00       0B      -   104K  -
main/dataset@auto-2024-12-12_04-00       0B      -   104K  -
main/dataset@auto-2024-12-12_05-00       0B      -   104K  -
main/dataset@auto-2024-12-12_06-00       0B      -   104K  -
main/dataset@auto-2024-12-12_07-00       0B      -   104K  -
main/dataset@auto-2024-12-12_08-00       0B      -   104K  -
main/dataset@auto-2024-12-12_09-00       0B      -   104K  -
main/dataset@auto-2024-12-12_10-00       0B      -   104K  -
main/dataset@auto-2024-12-12_11-00       0B      -   104K  -
main/dataset@auto-2024-12-12_12-00       0B      -   104K  -
main/dataset@auto-2024-12-12_13-00       0B      -   104K  -
main/dataset@auto-2024-12-12_14-00       0B      -   104K  -
main/dataset@auto-2024-12-12_15-00       0B      -   104K  -
main/dataset@auto-2024-12-12_16-00       0B      -   104K  -
main/dataset@auto-2024-12-12_17-00       0B      -   104K  -
main/dataset@auto-2024-12-12_18-00       0B      -   104K  -
main/dataset@auto-2024-12-13_00-00       0B      -   104K  -
main/dataset@auto-2024-12-14_00-00       0B      -   104K  -
main/dataset@auto-2024-12-15_00-00       0B      -   104K  -
main/dataset@auto-2024-12-16_00-00       0B      -   104K  -
main/dataset@auto-2024-12-17_00-00       0B      -   104K  -
main/dataset@auto-2024-12-18_00-00       0B      -   104K  -
main/dataset@auto-2024-12-19_00-00       0B      -   104K  -
main/dataset@auto-2024-12-20_00-00       0B      -   104K  -
main/dataset@auto-2024-12-21_00-00       0B      -   104K  -
truenas_admin@TruenasScale ~ $ tail -f /var/log/messages
tail: cannot open '/var/log/messages' for reading: Permission denied
tail: no files remaining
truenas_admin@TruenasScale ~ $ sudo tail -f /var/log/messages
Dec 21 10:17:57 TruenasScale kernel: vetha4829cf: renamed from eth0
Dec 21 10:17:57 TruenasScale kernel: br-d54ec385c59e: port 1(vethcaf471f) entered disabled state
Dec 21 10:17:57 TruenasScale kernel: br-d54ec385c59e: port 2(veth522328a) entered disabled state
Dec 21 10:17:57 TruenasScale kernel: veth522328a (unregistering): left allmulticast mode
Dec 21 10:17:57 TruenasScale kernel: veth522328a (unregistering): left promiscuous mode
Dec 21 10:17:57 TruenasScale kernel: br-d54ec385c59e: port 2(veth522328a) entered disabled state
Dec 21 10:17:57 TruenasScale kernel: br-d54ec385c59e: port 1(vethcaf471f) entered disabled state
Dec 21 10:17:57 TruenasScale kernel: vethcaf471f (unregistering): left allmulticast mode
Dec 21 10:17:57 TruenasScale kernel: vethcaf471f (unregistering): left promiscuous mode
Dec 21 10:17:57 TruenasScale kernel: br-d54ec385c59e: port 1(vethcaf471f) entered disabled state

And here is /var/log/zettarepl.log:

truenas_admin@TruenasScale ~ $ sudo tail -f /var/log/zettarepl.log
[2024/12/21 13:22:02] DEBUG    [IoThread_18] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:24] Connecting...
[2024/12/21 13:22:02] DEBUG    [IoThread_9] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:25] Connecting...
[2024/12/21 13:22:02] DEBUG    [IoThread_18] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:24] [async_exec:24] Running ['zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name', '-s', 'name', '-d', '1', 'main/dataset'] with sudo=False
[2024/12/21 13:22:02] DEBUG    [IoThread_9] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:25] [async_exec:25] Running ['zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name', '-s', 'name', '-d', '1', 'main/dataset'] with sudo=False
[2024/12/21 13:22:02] DEBUG    [IoThread_18] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:24] [async_exec:24] Reading stdout
[2024/12/21 13:22:02] DEBUG    [IoThread_18] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:24] [async_exec:24] Waiting for exit status
[2024/12/21 13:22:02] DEBUG    [IoThread_18] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:24] [async_exec:24] Error 1: "cannot open 'main/dataset': dataset does not exist\n"
[2024/12/21 13:22:02] DEBUG    [IoThread_9] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:25] [async_exec:25] Reading stdout
[2024/12/21 13:22:02] DEBUG    [IoThread_9] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:25] [async_exec:25] Waiting for exit status
[2024/12/21 13:22:02] DEBUG    [IoThread_9] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:25] [async_exec:25] Error 1: "cannot open 'main/dataset': dataset does not exist\n"
[2024/12/21 13:36:45] DEBUG    [IoThread_12] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:26] Connecting...
[2024/12/21 13:36:45] DEBUG    [IoThread_7] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:27] Connecting...
[2024/12/21 13:36:45] DEBUG    [IoThread_12] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:26] [async_exec:26] Running ['zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name', '-s', 'name', '-d', '1', 'main/dataset'] with sudo=False
[2024/12/21 13:36:45] DEBUG    [IoThread_7] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:27] [async_exec:27] Running ['zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name', '-s', 'name', '-d', '1', 'main/dataset'] with sudo=False
[2024/12/21 13:36:45] DEBUG    [IoThread_12] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:26] [async_exec:26] Reading stdout
[2024/12/21 13:36:45] DEBUG    [IoThread_7] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:27] [async_exec:27] Reading stdout
[2024/12/21 13:36:45] DEBUG    [IoThread_12] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:26] [async_exec:26] Waiting for exit status
[2024/12/21 13:36:45] DEBUG    [IoThread_12] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:26] [async_exec:26] Error 1: "cannot open 'main/dataset': dataset does not exist\n"
[2024/12/21 13:36:45] DEBUG    [IoThread_7] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:27] [async_exec:27] Waiting for exit status
[2024/12/21 13:36:45] DEBUG    [IoThread_7] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:27] [async_exec:27] Error 1: "cannot open 'main/dataset': dataset does not exist\n"
[2024/12/21 13:36:56] DEBUG    [IoThread_4] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:28] Connecting...
[2024/12/21 13:36:56] DEBUG    [IoThread_4] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:28] [async_exec:28] Running ['zfs', 'list', '-t', 'filesystem,volume', '-H', '-o', 'name', '-s', 'name', '-r'] with sudo=False
[2024/12/21 13:36:56] DEBUG    [IoThread_4] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:28] [async_exec:28] Reading stdout
[2024/12/21 13:36:56] DEBUG    [IoThread_4] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:28] [async_exec:28] Waiting for exit status
[2024/12/21 13:36:56] DEBUG    [IoThread_4] [zettarepl.transport.base_ssh] [ssh:root@192.168.1.100] [shell:28] [async_exec:28] Success: 'NAS-main\nNAS-main/.system\nNAS-main/.system/configs-daeb7be0eae547028f28998beacf9023\nNAS-main/.system/cores\nNAS-main/.system/rrd-daeb7be0eae547028f28998beacf9023\nNAS-main/.system/samba4\nNAS-main/.system/services\nNAS-main/.system/syslog-daeb7be0eae547028f28998beacf9023\nNAS-main/.system/webui\nNAS-main/cavern\nNAS-main/home\nNAS-main/iocage\nNAS-main/iocage/download\nNAS-main/iocage/download/13.1-RELEASE\nNAS-main/iocage/download/13.2-RELEASE\nNAS-main/iocage/download/13.3-RELEASE\nNAS-main/iocage/download/13.4-RELEASE\nNAS-main/iocage/images\nNAS-main/iocage/jails\nNAS-main/iocage/jails/adguard\nNAS-main/iocage/jails/adguard/root\nNAS-main/iocage/jails/jellyfin\nNAS-main/iocage/jails/jellyfin/root\nNAS-main/iocage/jails/nextcloud2\nNAS-main/iocage/jails/nextcloud2/root\nNAS-main/iocage/jails/plex\nNAS-main/iocage/jails/plex/root\nNAS-main/iocage/jails/qbittorrent-jail\nNAS-main/iocage/jails/qbittorrent-jail/root\nNAS-main/iocage/jails/syncthing-jail\nNAS-main/iocage/jails/syncthing-jail/root\nNAS-main/iocage/log\nNAS-main/iocage/releases\nNAS-main/iocage/releases/13.1-RELEASE\nNAS-main/iocage/releases/13.1-RELEASE/root\nNAS-main/iocage/releases/13.2-RELEASE\nNAS-main/iocage/releases/13.2-RELEASE/root\nNAS-main/iocage/releases/13.3-RELEASE\nNAS-main/iocage/releases/13.3-RELEASE/root\nNAS-main/iocage/releases/13.4-RELEASE\nNAS-main/iocage/releases/13.4-RELEASE/root\nNAS-main/iocage/templates\nNAS-main/mandie-home\nNAS-main/manjaro-home\nNAS-main/media\nNAS-main/nextcloud\nNAS-main/nextcloud/config\nNAS-main/nextcloud/db\nNAS-main/nextcloud/files\nNAS-main/nextcloud/themes\nNAS-main/scale-replication\nNAS-main/syncthing-data\nboot-pool\nboot-pool/.system\nboot-pool/.system/configs-b17eb1df7fd94a8281208d511a311fb0\nboot-pool/.system/cores\nboot-pool/.system/rrd-b17eb1df7fd94a8281208d511a311fb0\nboot-pool/.system/samba4\nboot-pool/.system/services\nboot-pool/.system/syslog-b17eb1df7fd94a8281208d511a311fb0\nboot-pool/.system/webui\nboot-pool/ROOT\nboot-pool/ROOT/13.0-U6.2\nboot-pool/ROOT/13.0-U6.3\nboot-pool/ROOT/13.3-RELEASE\nboot-pool/ROOT/13.3-U1\nboot-pool/ROOT/Initial-Install\nboot-pool/ROOT/default\nsecondary\nsecondary/cavern\nsecondary/home\nsecondary/mandie-home\nsecondary/manjaro-home\nsecondary/nextcloud\nsecondary/nextcloud/config\nsecondary/nextcloud/db\nsecondary/nextcloud/files\nsecondary/nextcloud/themes\n'

It keeps saying main/dataset does not exist, but then it takes a snapshot fine and replicates it to the other server fine? Odd.
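If I’m reading the log right, that snapshot-count query is being run over SSH on the destination box (root@192.168.1.100) rather than locally, and the dataset listing it gets back only contains NAS-main, boot-pool and secondary, so "main/dataset" genuinely wouldn’t exist on that side. One way to double-check by hand, assuming the same root SSH access the task itself uses, would be to run the exact command zettarepl logged, from the source system against the destination:

# the same query zettarepl logged above, sent to the destination host
ssh root@192.168.1.100 "zfs list -t snapshot -H -o name -s name -d 1 main/dataset"

I’d expect that to print the same "cannot open 'main/dataset': dataset does not exist" message.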

I have tried recreating the replication task. It seems to be when I put the naming schema in that it starts giving the error:

auto-%Y-%m-%d_%H-%M

Even though there is nothing wrong (that I can see) with the naming schema.
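For what it’s worth, the schema does match the snapshots on the source. A rough way to sanity-check it, where the grep pattern is just my hand-translation of the strftime tokens, is:

# list source-side snapshots whose names match auto-%Y-%m-%d_%H-%M
sudo zfs list -t snapshot -H -o name -d 1 main/dataset | grep -E '@auto-[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}-[0-9]{2}$'

That picks up every auto- snapshot in the listing I posted above, so the schema itself looks fine.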

Same issue here, maybe it is a bug.

I suspect it is, but it’s sometimes a right pain when you can’t see how many snapshots it can see, or even whether it can see any at all. Especially if you have “replication from scratch” turned on.

I’ll just have to assume it is a bug.


Then I recommend you report it to iX using the Report button in the TrueNAS UI.

I wonder though: in the initial screenshots I see that the dataset “main” isn’t marked for replication, only the child dataset “dataset” is checked, but you later refer to “main” in the “Periodic Snapshot Tasks”.

I wonder if that could cause an issue.
Is there any reason why you don’t want the parent dataset “main” to be replicated?

Well yeah, “main” is/was the root dataset. I prefer to control exactly which datasets I snapshot/replicate, one at a time, rather than doing one big recursive grab at the base dataset and everything in it. It’s just more granular: some datasets need to be replicated often, others not so much, etc.

It’s easy to reproduce: just fill out any remote replication task (I have not tried it with local). There’s no error before you add anything to the “Also include naming schema” box, and it appears directly after putting a valid schema in the box.
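If you want to watch it happen, keep the replication log open in another shell while you fill the schema box in; the failing “zfs list” call should show up there, same as in the log I posted earlier:

sudo tail -f /var/log/zettarepl.log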

PS: I’m too busy to be posting bugs. I have a much larger problem with the fact that you can’t replicate from CORE to SCALE! I’ve had to spend what will end up being several days trying to figure it out, then giving up and deciding to use SCALE as my main system instead, because I can replicate from SCALE to CORE; and then trying to sort out the apps is a nightmare. I’ve spent the whole of today just on Syncthing. (See my other thread: Can not do test replication from Core to Scale.)

Yeah… I see the same issue as you, @jackdinn and @Gaomin_Liao.

Has any one of you already reported this?

It can indeed be easily recreated, and it’s quite annoying, as I am now replicating snapshots “blindly” from one source to two different (tiered) backup TrueNAS SCALE systems.

I am seeing the same issue. I already have a local replication task that replicates between two pools, and that one works, so it seems to be broken specifically for remote replication.

Having the same problem. Works in 24.10.0.2, broken from 24.10.1 (and still in 24.10.2).

Created Jira ticket NAS-133955.


Is it broken as in the counting is broken, or does it simply not work for you?