Error when checking for Docker image updates

Recently, the middleware keeps failing when checking for image updates. It doesn’t look like a network issue: I can still pull the image with docker pull and reach other sites without trouble. Are there any specific tools or logs that can pinpoint the exact cause?

root@truenas[/var/log]# midclt -t 600 call app.image.op.check_update
Connection closed.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 354, in process_method_call
    result = await method.call(app, id_, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 57, in call
    result = await self.middleware.call_with_audit(self.name, self.serviceobj, methodobj, params, app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 954, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 771, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps_images/update_alerts.py", line 35, in check_update
    await self.check_update_for_image(tag, image)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps_images/update_alerts.py", line 54, in check_update_for_image
    self.IMAGE_CACHE[tag] = await self.compare_id_digests(
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps_images/update_alerts.py", line 68, in compare_id_digests
    digest = await self._get_repo_digest(registry, image_str, tag_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps_images/client.py", line 92, in _get_repo_digest
    response = await self._get_manifest_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps_images/client.py", line 69, in _get_manifest_response
    response = await self._api_call(manifest_url, headers=headers, mode=mode)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/apps_images/client.py", line 46, in _api_call
    response['response'] = await req.json()
                           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/aiohttp/client_reqrep.py", line 744, in json
    await self.read()
  File "/usr/lib/python3/dist-packages/aiohttp/client_reqrep.py", line 686, in read
    self._body = await self.content.read()
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/aiohttp/streams.py", line 418, in read
    block = await self.readany()
            ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/aiohttp/streams.py", line 440, in readany
    await self._wait("readany")
  File "/usr/lib/python3/dist-packages/aiohttp/streams.py", line 332, in _wait
    raise RuntimeError("Connection closed.")
RuntimeError: Connection closed.

Maybe a rate limit from Docker Hub? Or maybe just some temporary routing issues on their end. Technically, it could also be other registries. A “Connection closed.” is normally initiated by the remote, which makes it harder to pinpoint the exact cause.
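
If you want to rule out Docker Hub rate limiting, Docker exposes a rate-limit preview endpoint whose ratelimit-limit and ratelimit-remaining response headers report your current budget (values like 100;w=21600 mean 100 pulls per 21600-second window). A minimal sketch, assuming anonymous access (the endpoint names below are Docker's documented ones, but double-check them against the current docs):

```python
import json
import urllib.request

def fetch_rate_limit_headers():
    # Get an anonymous token scoped to Docker's rate-limit preview repo.
    token_url = ("https://auth.docker.io/token?service=registry.docker.io"
                 "&scope=repository:ratelimitpreview/test:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest so the check itself should not count as a pull.
    req = urllib.request.Request(
        "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
        method="HEAD",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return dict(resp.headers)

def parse_rate_limit(value):
    # Header values look like "100;w=21600": a limit of 100 pulls per
    # 21600-second (six hour) window.
    limit, _, window = value.partition(";w=")
    return int(limit), int(window)
```

Look at the ratelimit-remaining header in the returned dict; if it is near zero, the failures are probably throttling rather than a middleware bug.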

Maybe check how many container images you have (something like sudo docker images --filter "dangling=false"). If you have a lot of images that are no longer required, then sudo docker image prune -a will reduce the workload of check_update.

Because the middleware is written in Python, it is easy to add some debug logging to get more useful output. I added some logging statements in check_update_for_image:

    async def check_update_for_image(self, tag, image_details):
        if not image_details['dangling']:
            parsed_reference = self.normalize_reference(tag)
            logging.info(
                "check_update_for_image registry=%s, image=%s, tag=%s",
                parsed_reference['registry'], parsed_reference['image'], parsed_reference['tag'],
            )
            self.IMAGE_CACHE[tag] = await self.compare_id_digests(
                image_details,
                parsed_reference['registry'],
                parsed_reference['image'],
                parsed_reference['tag']
            )

which gives some more details in middlewared.log:

[2025/09/25 11:24:52] (INFO) root.check_update():32 - START check_update
[2025/09/25 11:24:52] (INFO) root.check_update_for_image():55 - check_update_for_image registry=ghcr.io, image=linuxserver/transmission, tag=4.0.6
[2025/09/25 11:24:52] (INFO) root.check_update_for_image():55 - check_update_for_image registry=registry-1.docker.io, image=syncthing/syncthing, tag=2.0.9
[2025/09/25 11:24:53] (INFO) root.check_update_for_image():55 - check_update_for_image registry=registry-1.docker.io, image=library/traefik, tag=v3
[2025/09/25 11:24:54] (INFO) root.check_update_for_image():55 - check_update_for_image registry=registry-1.docker.io, image=goofball222/unifi, tag=9.4.19
[2025/09/25 11:24:54] (INFO) root.check_update_for_image():55 - check_update_for_image registry=registry-1.docker.io, image=jellyfin/jellyfin, tag=10.10.7

But changing code should really only be done if you know what you’re doing.

To rule out Docker Hub as the cause, all of my Docker Hub images are pulled through mirror.gcr.io, and there are no unused images; every image is in active use.

root@truenas[~]# docker images
REPOSITORY                                       TAG                                  IMAGE ID       CREATED        SIZE
ghcr.io/henrygd/beszel/beszel                    latest                               8f1647f67f08   12 hours ago   30MB
ghcr.io/amir20/dozzle                            latest                               e8729b22ac67   35 hours ago   58MB
lscr.io/linuxserver/bazarr                       latest                               3754d430d8f0   36 hours ago   422MB
mirror.gcr.io/postgres                           16-alpine                            03b5a062f98e   38 hours ago   281MB
ghcr.io/immich-app/immich-server                 release                              b3909e31743e   39 hours ago   1.71GB
mirror.gcr.io/grafana/grafana                    latest                               1849e2140421   43 hours ago   733MB
mirror.gcr.io/soulter/astrbot                    latest                               e7ee221a1c64   43 hours ago   1.41GB
lscr.io/linuxserver/jellyfin                     latest                               d5156446dffa   45 hours ago   817MB
lscr.io/linuxserver/plex                         latest                               dfc63491d058   2 days ago     368MB
ghcr.io/martabal/qbittorrent-exporter            latest                               0a68a9d42e91   2 days ago     21.6MB
mirror.gcr.io/jetbrains/youtrack                 2025.2.97298                         4dba580b8823   2 days ago     3.35GB
ghcr.io/immich-app/immich-machine-learning       release-cuda                         b96ef72747f8   3 days ago     4.71GB
mirror.gcr.io/prom/prometheus                    latest                               4fcecf061b74   3 days ago     313MB
lscr.io/linuxserver/qbittorrent                  latest                               4a32382af61c   4 days ago     198MB
ghcr.io/open-webui/open-webui                    latest                               9c3f80b1b50f   4 days ago     4.82GB
ghcr.io/berriai/litellm                          main-stable                          c9bcb276cef7   4 days ago     2.2GB
mirror.gcr.io/1dev/server                        latest                               8906ada3fc94   5 days ago     861MB
lscr.io/linuxserver/sonarr                       latest                               5cf2e5bc9d75   5 days ago     205MB
ghcr.io/rachelos/we-mp-rss                       latest                               5ba5bbdd6dd0   6 days ago     1.58GB
mirror.gcr.io/sonatype/nexus3                    latest                               8110b0baa3a5   7 days ago     685MB
ghcr.io/dbccccccc/ttsfm                          latest                               77efc9fbcc46   7 days ago     158MB
lscr.io/linuxserver/prowlarr                     latest                               43f484219c27   8 days ago     192MB
ghcr.io/immich-app/postgres                      16-vectorchord0.4.2-pgvectors0.3.0   b46424ccff66   8 days ago     1.01GB
ghcr.io/flaresolverr/flaresolverr                latest                               b89bbb670634   9 days ago     675MB
mirror.gcr.io/brandawg93/peanut                  latest                               81b9c6d7dd16   9 days ago     171MB
ghcr.io/akpw/mktxp                               latest                               095ff887e7e5   10 days ago    65.5MB
ghcr.io/moghtech/komodo-core                     latest                               6c56c23b197b   10 days ago    640MB
lscr.io/linuxserver/tautulli                     latest                               28440175472a   10 days ago    146MB
ghcr.io/moghtech/komodo-periphery                latest                               dc88c2f6ff7a   10 days ago    434MB
lscr.io/linuxserver/radarr                       latest                               fa23d4acea71   10 days ago    207MB
ghcr.io/lejianwen/rustdesk-api                   full-s6                              0740ed9e0118   13 days ago    88.6MB
ghcr.io/tricked-dev/kanidm-oauth2-manager        latest                               bf68f4aeb798   13 days ago    221MB
ghcr.io/traefik/traefik                          latest                               72fe7ceeba11   2 weeks ago    178MB
ghcr.io/tarampampam/error-pages                  3                                    fbaa1f430f2a   3 weeks ago    25.1MB
mirror.gcr.io/kairlec/dst-admin-go               latest                               8caa0a1d37d9   4 weeks ago    4.32GB
ghcr.io/varun-raj/immich-power-tools             latest                               cdf99bf5c2eb   4 weeks ago    188MB
mirror.gcr.io/kanidm/server                      latest                               d3bb65f18081   4 weeks ago    635MB
ghcr.io/open-webui/pipelines                     latest                               f4dce4fc7cd1   5 weeks ago    3.35GB
ghcr.io/natfrp/frpc                              latest                               158013398cae   5 weeks ago    4.5MB
ghcr.io/analogj/scrutiny                         master-omnibus                       50cc934275aa   5 weeks ago    337MB
ghcr.io/ferretdb/ferretdb                        latest                               e9e07aa98118   6 weeks ago    38.5MB
ghcr.io/ferretdb/postgres-documentdb             latest                               c5d9d9b5ad82   6 weeks ago    1.68GB
mirror.gcr.io/ayufan/proxmox-backup-server       latest                               57211b942e0d   6 weeks ago    671MB
ghcr.io/metatube-community/metatube-server       latest                               12522762c356   6 weeks ago    52.7MB
ghcr.io/dani-garcia/vaultwarden                  latest                               36fd2ebd3761   8 weeks ago    256MB
quay.io/prometheuscommunity/ipmi-exporter        latest                               87101d95e879   2 months ago   35.3MB
mirror.gcr.io/redis                              6-alpine                             b7f611844a19   2 months ago   30.2MB
ghcr.io/prometheus-pve/prometheus-pve-exporter   latest                               3acdad150037   3 months ago   126MB
mirror.gcr.io/dxflrs/garage                      v2.0.0                               2eba99d82e50   3 months ago   25.8MB
mirror.gcr.io/glanceapp/glance                   latest                               fcc6b47e711d   3 months ago   21.9MB
mirror.gcr.io/guovern/iptv-api                   latest                               4ad0cb59e9ce   3 months ago   271MB
mirror.gcr.io/hectorqin/reader                   latest                               68dcf26f62df   4 months ago   180MB
ghcr.io/recyclarr/recyclarr                      latest                               2afd2bb23d0b   7 months ago   120MB
mirror.gcr.io/sunls24/divination                 latest                               f3301e80ff79   8 months ago   197MB
mirror.gcr.io/remirigal/plex-auto-languages      latest                               7daa59a27ef6   2 years ago    78.4MB
mirror.gcr.io/hectorqin/remote-webview           latest                               5d85fb04eb6a   2 years ago    1.04GB
mirror.gcr.io/alturismo/xteve                    latest                               9e99d59097e2   4 years ago    199MB

Interestingly, I did manage to get the “Connection closed.” exception a couple of times, but it doesn’t happen consistently. Most of my images are from Docker Hub.

If it’s not an issue with the remote, it might be a client issue. I noticed that the middleware reads the response outside the ClientSession block, which means the connection might be closed by the client before the response is read.
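
The pattern boils down to a minimal aiohttp sketch (not the middleware’s actual code, just an illustration of the lifetime bug):

```python
import aiohttp

async def broken_fetch(url: str):
    # Anti-pattern: the response object outlives its ClientSession.
    async with aiohttp.ClientSession() as session:
        resp = await session.get(url)
    # The session (and its connection pool) is closed at this point, so
    # reading the body may raise RuntimeError("Connection closed.")
    # whenever the payload was not already fully buffered -- which is
    # more likely for larger responses, hence the intermittent failures.
    return await resp.json()

async def fixed_fetch(url: str):
    # Correct: consume the body while the session and response are open.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.json()
```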

I wrote a small testcase to reproduce the error:

import asyncio

from middlewared.plugins.apps_images.client import ContainerRegistryClientMixin

async def main():
    response = await ContainerRegistryClientMixin()._api_call("https://microsoftedge.github.io/Demos/json-dummy-data/1MB-min.json")
    print(response)


if __name__ == "__main__":
    asyncio.run(main())

Strange issue: curl returns a response instantly, yet Python acts as if it timed out and fails.


It’s a programming error.

I fixed the bug in client.py.

EDIT: I added the missing response['response_obj'] = req line.

--- client_original.py  2025-09-25 14:52:39.059509747 +0200
+++ /usr/lib/python3/dist-packages/middlewared/plugins/apps_images/client.py    2025-09-25 14:48:23.500734355 +0200
@@ -28,28 +28,25 @@
                     req = await getattr(session, mode)(
                         url, headers=headers, auth=aiohttp.BasicAuth(**auth) if auth else None
                     )
+                    response['response_obj'] = req
+                    if req.status != 200:
+                        response['error'] = f'Received response code {req.status}' + (
+                            f' ({req.content})' if req.content else ''
+                        )
+                    else:
+                        response['response'] = await req.json()
         except asyncio.TimeoutError:
-            response['error'] = f'Unable to connect with {url} in {timeout} seconds.'
+            response['error'] = f'Timeout exceeded: {mode} {url} after {timeout} seconds.'
+        except aiohttp.ContentTypeError as e:
+            # quay.io registry returns malformed content type header which aiohttp fails to parse
+            # even though the content returned by registry is valid json
+            response['error'] = f'Unable to parse response: {e}'
         except aiohttp.ClientResponseError as e:
             response.update({
                 'error': str(e),
                 'error_obj': e,
             })
-        else:
-            response['response_obj'] = req
-            if req.status != 200:
-                response['error'] = f'Received response code {req.status}' + (
-                    f' ({req.content})' if req.content else ''
-                )
-            else:
-                try:
-                    response['response'] = await req.json()
-                except aiohttp.ContentTypeError as e:
-                    # quay.io registry returns malformed content type header which aiohttp fails to parse
-                    # even though the content returned by registry is valid json
-                    response['error'] = f'Unable to parse response: {e}'
-                except asyncio.TimeoutError:
-                    response['error'] = 'Timed out waiting for a response'
+
         return response

     async def _get_token(self, scope, auth_url=DOCKER_AUTH_URL, service=DOCKER_AUTH_SERVICE, auth=None):

It seems that applying the fix has caused other issues; I haven’t checked it carefully yet.

Ok, I forgot to set response['response_obj']

You can add that line there:

Although I didn’t have time to test it thoroughly.
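
For reference, with the diff plus the missing response['response_obj'] = req line applied, the patched flow looks roughly like this (a simplified, standalone sketch of _api_call, not the exact middleware source):

```python
import asyncio
import aiohttp

async def api_call(url, headers=None, auth=None, mode='get', timeout=15):
    # Simplified sketch: the body is now read while the ClientSession is
    # still open, so the client no longer closes the connection out from
    # under the reader.
    response = {'error': None, 'response': {}, 'response_obj': None}
    try:
        async with aiohttp.ClientSession(
            timeout=aiohttp.ClientTimeout(total=timeout)
        ) as session:
            req = await getattr(session, mode)(
                url, headers=headers,
                auth=aiohttp.BasicAuth(**auth) if auth else None,
            )
            response['response_obj'] = req
            if req.status != 200:
                response['error'] = f'Received response code {req.status}'
            else:
                response['response'] = await req.json()
    except asyncio.TimeoutError:
        response['error'] = f'Timeout exceeded: {mode} {url} after {timeout} seconds.'
    except aiohttp.ContentTypeError as e:
        # Some registries (e.g. quay.io) return a malformed content type
        # header even though the body is valid JSON.
        response['error'] = f'Unable to parse response: {e}'
    except aiohttp.ClientResponseError as e:
        response['error'] = str(e)
    return response
```

Note that ContentTypeError must be caught before ClientResponseError, since it is a subclass of it.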

The TrueNAS team did discover that there is an issue, but didn’t figure out the cause:

Thanks, my issue has been resolved; I’ll keep monitoring for a while. When something is hard to reproduce consistently, it’s easy to assume it’s caused by something else and overlook it.

Yes, a “connection closed” is usually related to a problem with the remote rather than the client. But in this case the connection is closed on the client side (unintentionally).

I created an official bug report here:
https://ixsystems.atlassian.net/browse/NAS-137724

I reported this bug once before, but it was hard to reproduce and got closed.

I see, I hope it won’t get closed as a duplicate.

It’s easier to reproduce with larger responses, which is why my script downloads a 1 MB file. I don’t know how large the registry responses usually are, but they are probably a lot smaller.

Good news, the issue will be fixed in version 25.10.0:


Awesome, it really is good news.