Building custom ARM docker images

In this series, we have built up a Kubernetes cluster out of Raspberry Pis. We have added the ability to automatically procure TLS certificates for our deployments. And we have installed our own docker registry to store our custom images. Now it is time to actually build and deploy those custom images. In this article, we will learn how to build our own images, from the convenience of our development PC, and then deploy them on our cluster. In the process, we will learn some common development and deployment workflows by actually doing them. There is a lot to do in this one, so let's get started!

2021-09-07: This article has been updated for k3s versions v1.21.0+k3s1 and above. This version of k3s uses Traefik v2 (for new installations) which has different configuration syntax than was used in the original article. If you need a reference to the old configurations, they can be found here.

Materials needed

To follow along with this article, you will need:

  • Your running k3s cluster
  • A working docker registry
  • docker installed on your development PC

The files and source code used in this project are available as downloadable resources.

Why build custom images?

Why would we want to build custom images? There are several scenarios where this would be useful. One common one would be adding custom content to an existing image or service. An example of this would be adding static HTML content to an nginx HTTP server and deploying that as a custom image. The content could be some static HTML and CSS for a static website. Or, it could be something like a single page React application. The resulting combination would be a deployable dedicated web server with custom content.

Another scenario might be deploying a completely custom application. An example of that might be deploying a custom back end API service that you've built on top of Node, or Python, or Ruby, or something like that.

A third scenario might be just building a common image for the ARM platform because no one else has made it available.

Challenges for an ARM platform

Deploying on an ARM platform, like the Pi, presents a few extra challenges for us to overcome. The first is that most docker images are built for the amd64 platform. The situation today is better than it was a few years ago, but the amount of pre-built ARM images is still small. How small?

If we go to Docker Hub and look at the available images, we can see there are over 3 million images. If we limit the list to just the ARM architecture (e.g. select ARM under Filters), we see there are only about 32 thousand images, two orders of magnitude lower! This greatly increases our chance of needing to build a custom ARM image for some service we want to use at some point.

Another challenge is building the images themselves. Most home PCs I've seen are also amd64-based. By default, if we build an image on our PC it's only going to run on amd64 architecture. So what do we do? There are two workarounds that come to mind.

First, we could just natively build our image on a Pi. This is fairly straightforward. We set up a Pi for development use, put docker on it, copy our code over, build it with docker, and push to our registry. In my experience, though, this tends to be slow. Also, it's a bit of a pain developing on a PC and having to sync everything to a Pi to build the images. It does work, however, and is completely valid if you want to do it that way; a sketch of that flow follows.
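
For the record, a minimal sketch of that native workflow might look like this, assuming a hypothetical development Pi reachable at devpi.local with docker installed, and a hypothetical project directory named my-app:

# sync the project to the Pi (devpi.local and my-app are hypothetical names)
rsync -a ./my-app/ pi@devpi.local:~/my-app/
# build and push natively on the Pi
ssh pi@devpi.local 'cd ~/my-app && \
  docker build -t docker.carpie.net/my-app:v1.0.0 . && \
  docker push docker.carpie.net/my-app:v1.0.0'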

Another option is to build directly on our PC by using an ARM emulator. That's what we're going to focus on in this article.

To build our images on our PC we are going to use the qemu emulator. Let's go ahead and install it on our development PC.

sudo apt install -y binfmt-support qemu-user qemu-user-static
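
To sanity-check the install, we can ask the emulator for its version and confirm the kernel registered the ARM binary formats (the paths shown are the Debian/Ubuntu defaults; your distribution may differ):

qemu-arm-static --version
ls /proc/sys/fs/binfmt_misc/ | grep arm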

Previously, we talked about three major reasons to build our own images. We are going to dive deep into the first two, beginning in this article with scenario one -- adding content to an existing image.

Scenario 1: Adding content to an existing image

The first step, obviously, would be to create some content. Let's say we have a React client application we want to deploy. I've pre-created such an application that we can use for demonstration purposes. It's a super novel strategy game in which two players try to be the first to line up four game pieces in a row on a grid. I can't believe no one has thought of this before...

Let's first make a directory to do our custom work in.

mkdir custom_images
cd custom_images

Let's clone the pre-made application and select version v1.0.0 as the code we want to deploy...

git clone https://gitlab.com/carpie/line-up-four
cd line-up-four
git checkout v1.0.0

Since this is a React application, we need a Node.js environment in which to build it. Since I'm not trying to teach you JavaScript development, I've set up a build system in docker that we can use to build the application. That way we don't need to set up our development box to do JavaScript development. Obviously, if you were generating your own content, you would develop it in whatever environment you saw fit. These next steps are only for building the application and have no bearing on deploying on our cluster.

docker build -f Dockerfile.build -t four-builder .
docker run --rm --user $(id -u) -v $(pwd):/app four-builder:latest

The first command builds an image named four-builder using the Dockerfile.build docker file. This builds an image that has Node in it and can build our application.

The next command runs the image, as our user, mapping our current working directory to the path /app in the docker container (which is where the image expects the code it builds to live). The image then installs packages and builds the React application, with the output being magically deposited in our local build/ directory.

Again, all that is to save us from having to set up for building JavaScript. If you learned something from it, great! But if not, it has no bearing on what we are trying to accomplish here, so don't worry about it.

When it is finished running, if we inspect the build directory, we can see our application code (index.html, some .js files, etc).

$ ls build/
asset-manifest.json  logo512.png                                            robots.txt
favicon.ico          logo.svg                                               service-worker.js
index.html           manifest.json                                          static
logo192.png          precache-manifest.276fe34bb18dc69228a5b69b6b5d4ce6.js

Creating a deployable image

Now that the application is built, we can work on building it into an image we can deploy. I've pre-created the Dockerfile, but let's look at it. I've named it Dockerfile.deploy to separate it from the one we used for building.

FROM arm32v7/nginx:latest
COPY qemu-arm-static /usr/bin/
COPY ./build/ /usr/share/nginx/html/

The first line specifies the base image to use, in this case the ARM version of nginx.

The second line is the magical line. It copies the qemu emulator into the image itself. This provides us two benefits. First, it allows us to use RUN commands inside the docker file to perform operations on an ARM image. We are not doing that in this docker file, but we will later. Second, once the image is built, this will allow us to run the ARM image on our amd64 PC for testing purposes.

The third line copies our built application code to nginx's root directory for serving HTML. That's all there is to it!

docker does not allow us to access files outside the build context (our working directory) when building images, so we need to copy the qemu binary into our working directory before we build the image.

cp /usr/bin/qemu-arm-static .

Now let's use this docker file to build our custom nginx image.

docker build -f Dockerfile.deploy -t docker.carpie.net/line-up-four:v1.0.0 .

Remember when we name our image we need to prefix it with our docker registry name so it will go to the correct place when we push it. Also, we tag the image with the version number v1.0.0. We'll see why that's a good idea in just a bit.
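
As a quick sanity check, docker images accepts a repository name as a filter, so we can confirm the tag took:

docker images docker.carpie.net/line-up-four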

Before pushing, let's test our image locally. It's an ARM image, but the emulator will allow us to test it locally!

docker run -it --rm -p 3000:80 docker.carpie.net/line-up-four:v1.0.0

You may see some errors about things not being implemented. This is because the emulator doesn't emulate everything one can do in the ARM architecture. So far this hasn't caused me any issues other than annoying messages.

We mapped local port 3000 to image port 80, so we should be able to visit localhost:3000 in our browser to try out the image.

Local test working

Our app is working locally. Sweet!
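
If you prefer the terminal, a quick header check against the running container works too:

curl -I http://localhost:3000/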

Now we just press Ctrl-C in the terminal to stop the image.

Let's push the image to our registry.

docker login docker.carpie.net
docker push docker.carpie.net/line-up-four:v1.0.0

We log in first just in case. If you are already logged in, it won't prompt you for a username and password. After login, we just push the image up to our registry like any other image.
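
Optionally, we can verify the push using the registry's v2 API, which can list the tags stored for an image (this assumes the registry is exposed at docker.carpie.net with the basic auth we set up in the earlier article):

curl -u registry https://docker.carpie.net/v2/line-up-four/tags/list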

Creating the Kubernetes configuration for our image

Now we are ready to make the Kubernetes configuration file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: four-nginx
  labels:
    app: four-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: four-nginx
  template:
    metadata:
      labels:
        app: four-nginx
    spec:
      containers:
      - name: four-nginx
        image: docker.carpie.net/line-up-four:v1.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: four-nginx-service
spec:
  selector:
    app: four-nginx
  ports:
    - protocol: TCP
      port: 80

Let's look at the important parts. We call the container four-nginx. The image needs to be the custom image we just pushed, docker.carpie.net/line-up-four:v1.0.0. You may notice a new section, imagePullSecrets. This tells Kubernetes to use the secret named regcred to authenticate with the docker registry when pulling images. Since we've set up authentication on our docker registry, we need this to give Kubernetes access to it when deploying our image. Let's go ahead and create that secret.

I'm going to give a bonus bash tip here. We are going to run a command that will have our authentication password on the command line. We don't want this in our bash history. In bash, if you start a command with a space, it won't be saved in history. Let's make use of that here.

 kubectl create secret docker-registry regcred \
--docker-server=docker.carpie.net --docker-username=registry \
--docker-password=your_reg_password

This verbose command creates a Kubernetes secret of type docker-registry, named regcred, for our registry, containing our registry's username and password.

Now if we dump the last two history entries...

$ history 2
 1546  vi four.yaml
 1547  history 2

You can see there is no cleartext password leaking into our history file! Just a little bonus tip there!
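
One caveat: this leading-space behavior depends on bash's HISTCONTROL variable including ignorespace (or ignoreboth). Most distributions set this by default, but it's easy to check and, if needed, enable:

echo $HISTCONTROL
# if empty, enable the behavior for the current session
export HISTCONTROL=ignoreboth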

Now that the regcred secret is created, since we specified it as the imagePullSecrets name, Kubernetes will be able to log in to our registry to pull images.
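
If you want to double-check what the secret contains, one way (borrowed from the Kubernetes documentation) is to decode it back out:

kubectl get secret regcred --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode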

Deploying our image

Now we're ready to deploy our application. I'm sure you know how to do that by now!

kubectl apply -f four.yaml

We'll give that some time to spin up and check that it is running.

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
docker-registry-6748fd66d6-mskkt   1/1     Running   7          142d
mysite-nginx-679b7ff485-6xpc9      1/1     Running   8          147d
four-nginx-5b649489bd-t82jm        1/1     Running   0          24s

Great! It's running!
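
Even though we haven't set up ingress for it yet, we can smoke-test the service right now by port-forwarding to it from our PC:

kubectl port-forward service/four-nginx-service 8080:80
# then, in another terminal (or a browser at localhost:8080)
curl -I http://localhost:8080/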

What about Ingress?

We can't check the site out just yet because, as you may have noticed, I didn't specify an IngressRoute record in our configuration. We have a couple of options here. We could deploy this on k3s.carpie.net with a path of /four. That would work. We could also deploy on a new host, say four.carpie.net and get a new certificate and all that. I think for this app, we'll just deploy it on a new path on k3s.carpie.net.

We could put a whole ingress section in our yaml file for k3s.carpie.net and that would work, but generally speaking I don't like having multiple ingress records for the same site. So instead of putting the ingress record in this file, we'll update the mysite.yaml for k3s.carpie.net to add the path for this service.

Let's edit mysite.yaml and add this section just below the existing - match: Path(`/`) section.

  - match: PathPrefix(`/four/`)
    kind: Rule
    services:
    - name: four-nginx-service
      port: 80
    middlewares:
    - name: four-stripprefix

Here's the whole section for context.

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: mysite-nginx-ingress-secure
spec:
  entryPoints:
    - websecure
  routes:
  - match: Path(`/`)
    kind: Rule
    services:
    - name: mysite-nginx-service
      port: 80
  - match: PathPrefix(`/four/`)
    kind: Rule
    services:
    - name: four-nginx-service
      port: 80
    middlewares:
    - name: four-stripprefix
  tls:
    secretName: k3s-carpie-net-tls

One other thing here. When we developed our four application, it didn't know that it would be deployed in a sub-path of a site. So, let's tell traefik to strip the leading /four off when passing traffic to this application. We can do that by adding a middleware. You may have noticed that we called one out in the middlewares section above. Let's define it now.

Just below the IngressRoute section, add the middleware record:

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: four-stripprefix
spec:
  stripPrefix:
    prefixes:
      - /four

Now, we just reapply the mysite.yaml configuration.

kubectl apply -f path_to/mysite.yaml

Easy enough! Let's try it out. We can now go to https://k3s.carpie.net/four in our browser to see the application.

Version 1.0.0 deployed!

And there is version 1.0.0 of our application! Our application deploy is complete!
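
If you'd like a terminal-level confirmation that the route and the prefix-strip middleware are wired up, a quick header check does the trick (substitute your own domain):

curl -sI https://k3s.carpie.net/four/ | head -n 1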

Iterating

Ok, so our highly unique application gets out there, goes viral, and we now have 30 million players playing the game against themselves. But half of them are on social media griping that it's ugly. So we hack on some CSS and we need to quickly rush out version 1.1.0...

We'll simulate our application development by simply checking out version 1.1.0 from our repo.

git checkout v1.1.0

Now, we just repeat our build and deploy steps from before.

docker run --rm --user $(id -u) -v $(pwd):/app four-builder

This will build version 1.1.0 of our application. Now we build a new image with a new version label and push to our repository.

docker build -f Dockerfile.deploy -t docker.carpie.net/line-up-four:v1.1.0 .
docker push docker.carpie.net/line-up-four:v1.1.0

Now we edit our yaml file to use the new version.

        image: docker.carpie.net/line-up-four:v1.1.0

And redeploy.

kubectl apply -f four.yaml

Let's try that out by going back to https://k3s.carpie.net/four. If you see that it's still at version 1.0.0, this is because our browser has it cached. Clear the cache by doing a hard reload with Ctrl-Shift-R.

Version 1.1.0 deployed!

There we go! Version 1.1.0. Looks much nicer!

Bonus: Live deployment editing

Here's a bonus tip for you. Let's say our 30 million game players love the new look but there's a critical bug that crashes right before a player wins. We are in a panic and need to roll back right away!

In that case, we can quickly live edit the deployment!

kubectl edit deployment four-nginx

This will bring up Kubernetes' view of the deployment yaml in our editor. We simply go down to the container's image line, edit the v1.1.0 back to v1.0.0, and save the file. Kubernetes will immediately redeploy our pod with version v1.0.0!

To go back again, we could live edit again, or just redeploy our configuration with kubectl apply.
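
As an alternative to hand-editing, kubectl also keeps a rollout history for deployments, so we could roll back with:

kubectl rollout undo deployment four-nginx
# watch the rollback complete
kubectl rollout status deployment four-nginx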

That's it for scenario 1! We can update site data or code at will and deploy new versions whenever we like. Because we tagged our images with version numbers, we can roll back to any version easily. Nice!

Cleanup

If we want to remove the sample from our cluster, we just run kubectl delete on our configuration file.

kubectl delete -f four.yaml
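
Note that this only removes the objects defined in four.yaml. The regcred secret was created imperatively, and the /four route and middleware live in mysite.yaml, so clean those up separately if desired (remove the route and middleware from mysite.yaml and reapply, then delete the secret):

kubectl delete secret regcred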

In the next article

In the next article, we will explore building custom images further by digging in to scenario 2 -- building images from scratch!

Thanks for reading!