Constantly repeating log messages in k3s_daemon.log about failed unmounts of volumes that cannot be found

I noticed that I have constantly repeating log messages in k3s_daemon.log about failed unmounts of volumes that cannot be found. Some examples:

May  2 21:00:33 TrueNAS k3s[6597]: I0502 21:00:33.911445    6597 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd\" (UniqueName: \"kubernetes.io/csi/zfs.csi.openebs.io^pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd\") pod \"bd755013-0b87-4f38-8805-517d8c2503d8\" (UID: \"bd755013-0b87-4f38-8805-517d8c2503d8\") "
May  2 21:00:33 TrueNAS k3s[6597]: E0502 21:00:33.923032    6597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd podName:bd755013-0b87-4f38-8805-517d8c2503d8 nodeName:}" failed. No retries permitted until 2024-05-02 21:02:35.922959372 +0200 CEST m=+26880.624873990 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd" (UniqueName: "kubernetes.io/csi/zfs.csi.openebs.io^pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd") pod "bd755013-0b87-4f38-8805-517d8c2503d8" (UID: "bd755013-0b87-4f38-8805-517d8c2503d8") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Internal desc = not able to get the ZFSVolume pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd err : zfsvolumes.zfs.openebs.io "pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd" not found
May  2 21:00:34 TrueNAS k3s[6597]: I0502 21:00:34.218933    6597 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7\" (UniqueName: \"kubernetes.io/csi/zfs.csi.openebs.io^pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7\") pod \"413a6619-f4aa-4b13-b5e4-897f30f161c7\" (UID: \"413a6619-f4aa-4b13-b5e4-897f30f161c7\") "
May  2 21:00:34 TrueNAS k3s[6597]: I0502 21:00:34.219359    6597 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c\" (UniqueName: \"kubernetes.io/csi/zfs.csi.openebs.io^pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c\") pod \"2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1\" (UID: \"2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1\") "
May  2 21:00:34 TrueNAS k3s[6597]: I0502 21:00:34.219976    6597 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa\" (UniqueName: \"kubernetes.io/csi/zfs.csi.openebs.io^pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa\") pod \"2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1\" (UID: \"2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1\") "
May  2 21:00:34 TrueNAS k3s[6597]: I0502 21:00:34.220233    6597 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977\" (UniqueName: \"kubernetes.io/csi/zfs.csi.openebs.io^pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977\") pod \"2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1\" (UID: \"2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1\") "
May  2 21:00:34 TrueNAS k3s[6597]: E0502 21:00:34.229473    6597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7 podName:413a6619-f4aa-4b13-b5e4-897f30f161c7 nodeName:}" failed. No retries permitted until 2024-05-02 21:02:36.229414377 +0200 CEST m=+26880.931328965 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7" (UniqueName: "kubernetes.io/csi/zfs.csi.openebs.io^pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7") pod "413a6619-f4aa-4b13-b5e4-897f30f161c7" (UID: "413a6619-f4aa-4b13-b5e4-897f30f161c7") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Internal desc = not able to get the ZFSVolume pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7 err : zfsvolumes.zfs.openebs.io "pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7" not found
May  2 21:00:34 TrueNAS k3s[6597]: E0502 21:00:34.232522    6597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977 podName:2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1 nodeName:}" failed. No retries permitted until 2024-05-02 21:02:36.232458812 +0200 CEST m=+26880.934373400 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977" (UniqueName: "kubernetes.io/csi/zfs.csi.openebs.io^pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977") pod "2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1" (UID: "2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Internal desc = not able to get the ZFSVolume pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977 err : zfsvolumes.zfs.openebs.io "pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977" not found
May  2 21:00:34 TrueNAS k3s[6597]: E0502 21:00:34.235292    6597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c podName:2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1 nodeName:}" failed. No retries permitted until 2024-05-02 21:02:36.235227538 +0200 CEST m=+26880.937142136 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c" (UniqueName: "kubernetes.io/csi/zfs.csi.openebs.io^pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c") pod "2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1" (UID: "2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Internal desc = not able to get the ZFSVolume pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c err : zfsvolumes.zfs.openebs.io "pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c" not found
May  2 21:00:34 TrueNAS k3s[6597]: E0502 21:00:34.235603    6597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa podName:2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1 nodeName:}" failed. No retries permitted until 2024-05-02 21:02:36.235517238 +0200 CEST m=+26880.937431806 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa" (UniqueName: "kubernetes.io/csi/zfs.csi.openebs.io^pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa") pod "2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1" (UID: "2e665ee9-2b14-4f8d-8ce8-7fbe73bbead1") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Internal desc = not able to get the ZFSVolume pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa err : zfsvolumes.zfs.openebs.io "pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa" not found

What could be the cause of that, and how do I fix this?

I’m having the same issue. Looking through the logs, the same 43 PVCs fail to unmount every 2 minutes and 2 seconds, and it’s persistent across reboots.
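If anyone wants to count theirs, something like this should do it (a rough sketch; it assumes the log sits at /var/log/k3s_daemon.log, so adjust the path if yours differs):

```python
#!/usr/bin/env python3
"""Tally the distinct PVC names showing up in the failed-unmount errors."""
import re
from collections import Counter

LOG = "/var/log/k3s_daemon.log"  # assumption: adjust if your k3s log lives elsewhere

# Matches the PVC name inside the "not able to get the ZFSVolume ..." errors.
pvc_re = re.compile(r"not able to get the ZFSVolume (pvc-[0-9a-f-]+)")

counts = Counter()
with open(LOG, errors="replace") as fh:
    for line in fh:
        m = pvc_re.search(line)
        if m:
            counts[m.group(1)] += 1

print(f"{len(counts)} distinct volumes are being retried:")
for name, hits in counts.most_common():
    print(f"  {name}  ({hits} failed unmount attempts)")
```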

None of those PVCs exist on my system, so it’s expected that k3s can’t find them.
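If you want to double-check that against the cluster, a sketch along these lines compares the names from the errors with whatever ZFSVolume objects are still registered (it assumes the zfsvolumes.zfs.openebs.io CRD from the log is still installed and that k3s kubectl works as root; the names in the list are just the ones from the excerpts above, so swap in your own):

```python
#!/usr/bin/env python3
"""Check which of the ZFSVolume names from the errors still exist in the cluster."""
import subprocess

# Names taken from the log excerpts above; replace with the ones from your log.
stale = {
    "pvc-8c01dbfa-6848-4c94-be8f-d32091fa7afd",
    "pvc-c5105bf7-56d1-4a6e-b286-41623b1685f7",
    "pvc-3ae2b4f3-724f-42d6-9b3b-28db39e7cb5c",
    "pvc-c0f5af8d-b0ac-45f8-81ea-cd84cc600cfa",
    "pvc-07cdb26c-68e1-41cb-83d6-eaee85c1f977",
}

# List every ZFSVolume custom resource in all namespaces (run this as root).
result = subprocess.run(
    ["k3s", "kubectl", "get", "zfsvolumes.zfs.openebs.io", "--all-namespaces",
     "-o", "custom-columns=NAME:.metadata.name", "--no-headers"],
    capture_output=True, text=True, check=True,
)
existing = set(result.stdout.split())

for name in sorted(stale):
    status = "still exists" if name in existing else "gone (matches the 'not found' error)"
    print(f"{name}: {status}")
```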

For me it started right after the migration from Cobia to Dragonfish.
I think it could be related to this: OpenEBS-ZFS Driver Removal Notice | TrueNAS Community

Yes, that’s probably it. I don’t know much about Kubernetes / k3s, so I have no idea yet what to do about this or even which phrases would be good to search for. There must be something that still holds references to these volumes, but again, I have no idea where to look or what to search for to find answers :frowning:
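For what it’s worth, the references the kubelet keeps are on disk: for CSI volumes it writes a small vol_data.json under each pod’s volumes directory, and it retries the teardown for as long as those directories exist. Something like this should locate the leftovers (a sketch; the kubelet path is an assumption — on stock k3s it’s usually /var/lib/kubelet/pods, while SCALE keeps its k3s state inside the ix-applications dataset, so point KUBELET_PODS at the right place for your system):

```python
#!/usr/bin/env python3
"""Find leftover kubelet CSI volume records pointing at the OpenEBS ZFS driver."""
import json
from pathlib import Path

# Assumption: adjust this to wherever your kubelet keeps per-pod state.
KUBELET_PODS = Path("/var/lib/kubelet/pods")

for vol_data in KUBELET_PODS.glob("*/volumes/kubernetes.io~csi/*/vol_data.json"):
    try:
        data = json.loads(vol_data.read_text())
    except (OSError, json.JSONDecodeError):
        continue
    # Only report records written by the OpenEBS ZFS CSI driver from the log.
    if data.get("driverName") == "zfs.csi.openebs.io":
        pod_uid = vol_data.parts[-5]  # .../pods/<pod-uid>/volumes/kubernetes.io~csi/<name>/vol_data.json
        print(f"pod {pod_uid}: {data.get('volumeHandle')}  ({vol_data})")
```

As long as those pod directories stick around, the kubelet keeps retrying the unmount on the roughly two-minute backoff you’re seeing in the log.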

Upgrading to 24.04.1 did not solve the issue.

I’ve been in the process of migrating all my TrueCharts apps to TrueNAS apps for performance reasons, and I suspect that removing TrueCharts entirely, most notably the openebs chart, might help with the issue. I’ll keep this updated.

Did you have any luck resolving this? It seems to make my system grind to a halt after a day or two of these piling up.

@paulo: No, not yet. For me it causes some load, but not enough to slow the system down much (even though it’s just an Atom C2550).

Update: Since I wanted to migrate away from TrueCharts to the official catalogue to be prepared for the upcoming TrueNAS SCALE release, I deleted the ix-applications dataset to start with a clean slate. That seems to have solved the issue, as I no longer see these messages in the log.