The Unpleasant CI of Docker Builds

Bartosz Wiśniewski, May 27, 2019

4 min read

Continuous Integration

Docker builds are easy to run, but they become problematic at scale. A Dockerfile resembles a pipeline, but because of its limitations it just can't replace a full-blown CI tool. What's interesting is that Docker builds run by using containers: the images are just snapshots of the containers that were executed during the build. But what if the CI context is also a container? Can you run Docker inside Docker?


In August 2013, Jérôme Petazzoni made it possible to run Docker inside Docker. And… two years later he published an article with one clear instruction: to avoid doing that.

The solution

As Jérôme Petazzoni has shown in his article, a better way of solving this is to share the host's Docker socket with the running container, so that the container talks to the host daemon and creates "sibling" containers rather than "child" ones. You can test it by running the official Docker image and sharing the socket of the host instance.

docker run -v /var/run/docker.sock:/var/run/docker.sock -it docker
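A quick way to see the "sibling" behaviour in action, once you're inside that container (the commands below are illustrative):

```shell
# Inside the container, the docker CLI talks to the *host* daemon,
# so it lists the host's containers, including the one you're in:
docker ps

# Any container you start from here becomes a sibling on the host,
# not a child nested inside the current container:
docker run --rm alpine echo "hello from a sibling"
```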

Running it in the cloud

I wouldn’t be writing this article if things were that “easy”.

In the cloud, you want to have your containers under control. Normally, this is accomplished by some kind of manager, for example the Kubernetes scheduler. Now imagine that one of your containers can create new ones that are invisible to that scheduler: a nightmare. There's no simple way of solving that.

Docker without Docker

What if I told you that you might not need Docker at all?

kaniko is a Google project for building Docker images without the Docker daemon. It promises great caching and compatibility with Dockerfiles. Magic. Now you just need to run a normal container, execute a command, and wait for it to finish! Except you’d probably get an error.
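For context, a typical kaniko invocation runs the executor image instead of calling the Docker CLI; it reads a build context and a Dockerfile and builds the image in user space. A minimal local sketch (the paths are assumptions, and `--no-push` skips pushing to a registry):

```shell
# Build the image in the current directory without a Docker daemon.
docker run --rm \
  -v "$(pwd)":/workspace \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --no-push
```

In Cloud Build you wouldn't wrap it in `docker run` at all; the executor image would simply be a build step.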

Kaniko — a great idea, but it’s not ready yet

When letter casing matters…

FROM node:lts-alpine as Builder


Nothing special, huh? Multi-stage Docker builds are great. Yet, for some reason kaniko doesn’t like uppercase letters in the stage name, so you have to do this:


FROM node:lts-alpine as builder
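For context, the full multi-stage Dockerfile with the lowercase stage name might look like this (the npm scripts and paths are assumptions for a typical Node.js app):

```dockerfile
# Build stage: kaniko requires the stage name to be lowercase
FROM node:lts-alpine as builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: copy only the build output from the first stage
FROM node:lts-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```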

open /etc/passwd: no such file or directory

Another problem that I have faced myself is running the container as a non-root user, which is a good practice. In my case, the cached layer with the user-creation command seems to have no effect, and kaniko just returns an error. Docker builds the same Dockerfile flawlessly.
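The pattern in question is the usual non-root setup; on an Alpine-based image it might look like this (the user and group names are illustrative):

```dockerfile
FROM node:lts-alpine
# Create an unprivileged system user and group (BusyBox adduser/addgroup)
RUN addgroup -S app && adduser -S -G app app
WORKDIR /app
COPY --chown=app:app . .
# Run as the unprivileged user from here on; this is the kind of
# cached user-creation layer that kaniko tripped over
USER app
CMD ["node", "index.js"]
```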

Maybe we are not ready yet

At the time of writing this article, there are 101 open and 212 closed issues on kaniko’s GitHub repository. While kaniko is a great idea, I don’t think it’s ready to be recommended by the Google Cloud Build documentation. There’s much room for improvement.

However, there are some alternatives you can use here. One of the most promising, img, follows the same daemonless idea. While it managed to build my image, it wasn't the greatest experience either.

In the end, you need to ask yourself: does it even matter? For now, fighting those imperfections isn't worth it compared to solving real problems. So I went back to Docker inside Docker in Cloud Build, even though it's not perfect. If you need your own solution though, you may want to reconsider the good old VMs.