Hesitating to move to SCALE due to Docker Hub security issues

I’ll try to be brief. I have been putting off migrating to SCALE because I have three jails for internal (LAN) use only, and while the Docker ecosystem seems practical, I hesitate because of the big hullabaloo a year or so ago about so many images on Docker Hub being infected with various kinds of malware. From my understanding, it was not just “crap images” but also widely used images that were compromised.

I admit I am speaking from impressions, but I don’t know to what extent I can trust images such as Nextcloud, Elasticsearch, MongoDB, WordPress, Graylog, etc. Sure, they are official images and not just something someone uploaded and published on their own as a solution, but I need to trust that they never get compromised: every time a container is restarted, I run the risk that the image has been infected since I last started it. Even if I trust an image when I create the container, I have to trust that it never gets compromised in the future.

Perhaps I am wrong, but my impression is that Docker Hub is a much more open ecosystem and easier to compromise than the traditional repos of e.g. FreeBSD and Debian. Or am I just afraid of something I don’t grasp? FWIW, as context, I never use user-submitted repos such as the AUR for Arch, for example, or third-party repos on Debian/Ubuntu.

Your thoughts would be much appreciated.

You can use images you build yourself or images from sources other than Docker Hub; there are other registries if any of those make you feel safer. You can also pull the source code and build the images yourself. There are a number of alternatives.
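
As a rough sketch of those options (the registry, publisher and project names are placeholders; check the project’s own docs for where they actually publish):

```
# Pull from a registry other than Docker Hub, e.g. GitHub Container Registry
docker pull ghcr.io/<publisher>/<image>:<tag>

# Or build your own image straight from the project's source
git clone https://github.com/<publisher>/<project>.git
cd <project>
docker build -t local/<project>:custom .
```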

Indeed.

If you prefer, you can also set an app up to use a specific version instead of the “latest” one.
If you keep yourself informed of app updates, you can then selectively install newer versions.
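
If you’re running it with plain Docker/compose rather than through the app UI, that just means pinning an explicit tag, roughly like this (the app and version are only examples):

```yaml
services:
  nextcloud:
    # pin an explicit release instead of the floating "latest" tag
    image: nextcloud:31.0.7-fpm
    restart: unless-stopped
```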

Most of the official publishers of the containers/apps you mention ONLY publish to Docker Hub or GitHub for their containers. If you want to use Apps (aka containers), you’ll have to live with that. The images on Docker Hub are very easy to inspect; see, for example, Nextcloud 31.0.7-fpm here → https://hub.docker.com/layers/library/nextcloud/31.0.7-fpm/images/sha256-2833ed1869621a8691eeeb44f8d949fef5ab944b7982b4a243642d80f85a7c4c

If you’re using jails you can create LXCs or VMs and do it all yourself, but most users prefer apps; those coming from FreeBSD who don’t use Docker/containers will likely find the former more familiar.

If someone isn’t willing to use containers directly from GitHub, that makes me wonder what would be acceptable.

Even making your own containers and hosting a private repository instead of using the official containers would require you to pull the source from, typically, GitHub.

If you don’t trust Docker images you can use VMs or Instances to set up your application as a traditional Linux install.

That is, of course, if you trust the distro’s packaging.

Well, sure, unless there was a flaw in their build process, such as crypto mining injected into it (it has happened), or a weakness in Docker Hub security, and therefore the image stored on Docker Hub does not match what you would get if you built it from source. I am guessing that is what he wants to protect against. I don’t think the risk is terribly high, but still.

So, if someone thinks other repos have better security, say linuxserver or various others, why not use those instead?

Sure, but your example with linuxserver illustrates my point, I think. Linuxserver hosts their source code on GitHub. If their GitHub account is compromised, then every downstream container delivery method they have is tainted.

*shrug*

Not a hill I will die on; just curious what the reasoning behind not trusting GitHub is at this point, and, by extension, what the recourse ends up being.

The SOURCE may not be tainted, just the build. I am not the one not trusting it; I’m just giving options for the OP.

Thanks everyone for your input!

I’ll continue acquainting myself with Docker. At the moment I think my choice will be to start from a verified publisher image (such as Debian, though I haven’t checked whether one exists; maybe bitnami/minideb) and then build my own images for e.g. Nextcloud and Graylog with packages from the Debian repos. Since they will not be exposed externally, there is less need to keep them continuously up to date, so I can rebuild them maybe once a year. Perhaps this approach is “trust no one” overkill, but at least I will learn something while doing it.
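
Roughly the kind of Dockerfile I have in mind, as a sketch (nginx is only a stand-in for whatever package the app actually needs, and some of the apps I mentioned may not be packaged in Debian at all):

```dockerfile
# Start from the official Debian base image and install packages
# only from the Debian repositories. nginx is just a placeholder package.
FROM debian:bookworm-slim

RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx \
 && rm -rf /var/lib/apt/lists/*

# Run in the foreground so the container stays up
CMD ["nginx", "-g", "daemon off;"]
```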

Please feel free to continue the discussion or add anything you deem interesting; I’ll keep an eye on the thread.

You’re not “trusting no one”; you’re just moving your trust around.

There can be supply-chain attacks anywhere, but assuming the software Debian hosts is inherently safer than builds (Docker or otherwise) by the actual developer is pushing it quite far.

What you need to do to protect against this security risk is to download and verify image signatures and use digest pinning to prevent already-published images from being changed retroactively.

Preferably SBOM analysis as well.

That’s the correct way of dealing with this; building your own images from Debian packages is just moving the goalposts around.
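
Concretely, something along these lines (the digest is the Nextcloud 31.0.7-fpm one linked earlier in the thread; cosign verification only helps if the publisher actually signs their images, and publisher.pub is a placeholder for their public key):

```
# Digest pinning: pull by the immutable sha256 digest instead of a mutable tag
docker pull nextcloud@sha256:2833ed1869621a8691eeeb44f8d949fef5ab944b7982b4a243642d80f85a7c4c

# Signature verification with cosign, if the publisher signs their images
cosign verify --key publisher.pub \
  nextcloud@sha256:2833ed1869621a8691eeeb44f8d949fef5ab944b7982b4a243642d80f85a7c4c
```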

I’ve no clue what the heck you’re referring to.
But I guess it was some overly popular crap article.

It’s not like popular images magically got infected with malware through the Docker build process, and even then, if you digest-pin, it won’t affect you until you update the pinned digest/tag.

Edit
I think you meant articles like this one:

Or this one:

Which is security companies promoting their products by publishing blogs about something obvious.

But even their conclusion was “use trusted image sources, not random crap containers”.

“Docker Hub” is just a place to dump images onto (under your own user account; you cannot override other people’s containers there either). Obviously there are hundreds if not thousands of crappy or outdated containers.

Edit 2

No, it doesn’t work that way. First off: images are cached and not “re-downloaded every time”.

But even so, good containers (and apps) use “digest pinning” to cryptographically ensure that a specific image build is downloaded.

It cannot be “overruled” from the Docker Hub side.
Okay: the image the digest links to can be forcefully removed, yes. But that would just mean the image couldn’t be pulled anymore using that digest.

Agree with your post, but OP might be using pull_policy: always, which ignores the local image cache.
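
For reference, that’s a per-service compose setting; a minimal sketch (service and image names are just examples):

```yaml
services:
  nextcloud:
    image: nextcloud:31.0.7-fpm
    # "missing" only pulls if the image isn't already in the local cache;
    # "always" pulls from the registry every time the service is (re)created
    pull_policy: missing
```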

There are some examples of “download every time”, like the official Plex image, which downloads a dpkg and installs it at run-time, or other similarly questionable methods like this… not the container as a whole, though.

So…
What EXACTLY tells you that the code you downloaded and installed in those jails was “not compromised”!?

Docker is here to stay.
It is up to the human to do their RESEARCH before simply trusting or running code they do not know!

Where exactly do you change or configure SUCH a policy, then!?