Running Docker Swarm on ARM boards

Rui Ni
14 min read · Mar 12, 2019


Two CubieBoard2s at their newest

A few years ago, I acquired two CubieBoard2 A20 ARM32 boards, and I used to run my code on them.

It was fun at the beginning: suddenly I had two devices that could execute my code 24/7 in a power-saving fashion. But not long after, I found it was a real pain to keep everything on them well-maintained. Changing one piece of software would crash something else, and a simple apt-get update could cost a complete re-flash.

So, after a few months of “fun”, they sat idle and eventually were torn down completely.

Time passed by. A few days ago, I re-discovered them sealed inside the box they originally came with, almost brand new. Revisiting my old problem, a bell in my head started to ring: how about running Docker on them? I could run my code in Containers, continuing my old “fun” while keeping all those old headaches away. What a great idea!

After many hours of Googling and trying, I finally got them up and running again.

$ docker --tlsverify --host cubie0 ps
CONTAINER ID IMAGE ...
516fcf01823f cubie0:5000/swa...
e87b46dd770b registry:2 ...

So, here is what I’ve learned along the way. After reading this, you will know how to:

  • Install Docker, of course.
  • Initialize Docker Swarm.
  • Deploy a scalable service on the Docker Swarm.

Part 1: Install Docker

A quick Google search brought me to https://github.com/alexellis/docker-arm and his blog Get Started with Docker on Raspberry Pi. Both the repository and the blog are very helpful. In fact, many of the commands and examples I use here come straight from them, so please go read them.

This part is very straightforward. To install Docker on all of the boards, all we need to do is execute

$ curl -sSL https://get.docker.com | sh

Docker should install just like it does on a normal x86 machine.

After Docker has been installed, it needs to be enabled and started by executing following commands:

$ sudo systemctl enable docker
$ sudo systemctl start docker

Then, if you want to execute docker commands without root privileges, run

$ sudo usermod -aG docker <your_user_name>

Or, to add your current account without typing the name out, just

$ sudo usermod -aG docker $(whoami)

This will add your Linux user account to the docker group. (You may need to log out and back in for the group change to take effect.) To verify, use the getent group command; you should see something similar to:

$ getent group | grep docker
docker:x:999:<your_user_name,other_users_name_maybe>

Docker should be up and running at this point. Verify that with the ps command; the output should look like the following:

$ ps aux | grep dockerd
root ... /usr/bin/dockerd -H fd:// --con...

And don’t forget to install Docker on your local machine as well. We will use the Docker client there to access the boards later.

Part 2: Enable remote access to the Docker server

Apparently, I don’t want to log in to either of those two boards every time I want to use the Docker server. The new life I’m expecting to live is to write my code and Dockerfile on my local machine, then deploy them onto the remote Docker daemons. No ssh to the remote machine, and none of that login and scp nonsense.

So we have to enable remote access to Docker.

The first step is actually to choose which one of them is going to be the Docker Swarm manager. It’s usually the faster and more stable one. In my case, it’s the board named cubie0, because it has an Ethernet connection; the other one, namely cubie1, is only on WiFi.

Now that we have our manager selected, it’s time to get it initialized.

Part 2.1: Generating self-signed certificates for Docker TLS authentication and Registry

This part is basically a recap of the official document Protect the Docker daemon socket.

Docker does remote access authentication through TLS. The certificates used by the server and the client during the TLS handshake must be issued/signed by the same CA and support the server/client auth extensions. Because of this, we first need to create a CA by issuing a public & private key pair.

Log in to the manager, cubie0 in my case. Then create the directory structure needed to store all the certificates that will be generated later:

$ sudo mkdir /etc/docker/certs -p
$ sudo mkdir /etc/docker/registry -p

And then enter it:

$ cd /etc/docker/certs

Now, generate the private key for our CA:

$ sudo openssl genrsa -out ca-key.pem 4096
Generating RSA private key, 4096 bit long modulus
..........................
..........................
......................++++
..........................
......................++++
e is 65537 (0x010001)

Note that you can encrypt the private key by setting the -aes256 flag during the key generation process. If you do, openssl will ask you for a passphrase before saving the key, and again on every later use of it.
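
For example, an encrypted variant (the pass:changeit passphrase here is just a placeholder so the sketch runs non-interactively; drop the -passout/-passin flags to be prompted instead):

```shell
# Same key generation as above, but with the private key AES-256
# encrypted at rest. "changeit" is only a placeholder passphrase.
openssl genrsa -aes256 -passout pass:changeit -out ca-key-encrypted.pem 4096

# Any later use of the key now needs the passphrase too:
openssl rsa -in ca-key-encrypted.pem -passin pass:changeit -noout
```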

So we got the private key, now for the public one (the certificate).

$ sudo openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
Enter pass phrase for ca-key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:CN
State or Province Name (full name) [Some-State]:Anhui
Locality Name (eg, city) []:Tongling
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Rui NI Private Docker Swarm CA
Organizational Unit Name (eg, section) []:R&D
Common Name (e.g. server FQDN or YOUR name) []:cubie0
Email Address []:ranqus@gmail.com

Note the Common Name, or CN, above: it must be the DNS name/IP or access address of our server, cubie0 in my case.

OK, we got ourselves a CA. Now for the server. The procedure is very similar to the CA one, but without the -x509 flag, meaning we’re generating a Certificate Signing Request rather than a certificate itself.

The command will be:

$ sudo openssl genrsa -out server-key.pem 4096
$ sudo openssl req -subj "/CN=cubie0" -sha256 -new -key server-key.pem -out server.csr

Remember to change cubie0 accordingly.

Then, create a server-extfile.cnf file. This file provides additional configuration during the signing process; in this case, it adds subjectAltName and extendedKeyUsage to our final certificate.

$ echo subjectAltName = DNS:cubie0,DNS:cubie0.local | sudo tee server-extfile.cnf
$ echo extendedKeyUsage = serverAuth | sudo tee -a server-extfile.cnf

After that, we can sign server.csr with our CA to get the server certificate:

$ sudo openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile server-extfile.cnf

To make sure the newly generated server-cert.pem is correct, we can inspect it with the following command (key parts: CN should be cubie0; the SAN and EKU extensions should be present):

$ openssl x509 -text -noout -in server-cert.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
f0:f3:65:6c:6d:47:f7:3b
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = CN, ST = Anhui, L = Tongling, O = Rui NI Private Docker Swarm CA, OU = R&D, CN = cubie0, emailAddress = ranqus@gmail.com
Validity
Not Before: Mar 9 13:17:18 2019 GMT
Not After : Mar 8 13:17:18 2020 GMT
Subject: CN = cubie0
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
Modulus:
00:f2:b9:bb ...
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:cubie0, DNS:cubie0.local
X509v3 Extended Key Usage:
TLS Web Server Authentication

Signature Algorithm: sha256WithRSAEncryption
85:fb:24 ...

Next, the key pair for our Docker client. Basically the same deal, but without the SAN, and with the EKU set to clientAuth instead.

$ sudo openssl genrsa -out key.pem 4096
$ sudo openssl req -subj '/CN=client' -new -key key.pem -out client.csr
$ echo extendedKeyUsage = clientAuth | sudo tee client-extfile.cnf
$ sudo openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile client-extfile.cnf

And finally, the certificates needed for the Docker Registry that we will set up later.

$ sudo openssl genrsa -out ../registry/registry-key.pem 4096
$ sudo openssl req -subj '/CN=cubie0' -new -key ../registry/registry-key.pem -out ../registry/registry.csr
$ sudo openssl x509 -req -days 365 -sha256 -in ../registry/registry.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ../registry/registry-cert.pem
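
If you would rather rehearse this whole CA dance before touching /etc/docker/certs, here is a self-contained sketch of the same steps in a throwaway directory (smaller key size for speed; names are just examples):

```shell
#!/bin/sh
# Rehearse the CA -> CSR -> signed-certificate flow in a temp directory.
set -e
cd "$(mktemp -d)"

# 1. CA key pair (non-interactive thanks to -subj)
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 \
    -subj "/CN=test-ca" -out ca.pem

# 2. Server key + certificate signing request
openssl genrsa -out server-key.pem 2048
openssl req -subj "/CN=cubie0" -sha256 -new -key server-key.pem -out server.csr

# 3. SAN + serverAuth extensions, exactly as in the real setup
printf 'subjectAltName = DNS:cubie0,DNS:cubie0.local\nextendedKeyUsage = serverAuth\n' \
    > server-extfile.cnf

# 4. Sign the CSR with the CA, then verify the chain
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem \
    -CAkey ca-key.pem -CAcreateserial -out server-cert.pem \
    -extfile server-extfile.cnf
openssl verify -CAfile ca.pem server-cert.pem
```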

Now, if you’re reading this article along with the Docker document, at this point it will ask you to remove the .csr and .cnf files. Do that if you want to, but I will keep the .cnf ones, because I’m pretty sure I will be using them a year from now, when those certificates expire.

Which leaves us with:

$ ls -l
total 36
-rw-r--r-- 1 root root 3326 Mar 9 08:40 ca-key.pem
-rw-r--r-- 1 root root 2155 Mar 9 09:09 ca.pem
-rw-r--r-- 1 root root 17 Mar 9 13:33 ca.srl
-rw-r--r-- 1 root root 1887 Mar 9 13:33 cert.pem
-rw-r--r-- 1 root root 30 Mar 9 13:32 client-extfile.cnf
-rw-r--r-- 1 root root 3243 Mar 9 13:31 key.pem
-rw-r--r-- 1 root root 1931 Mar 9 13:17 server-cert.pem
-rw-r--r-- 1 root root 75 Mar 9 13:37 server-extfile.cnf
-rw-r--r-- 1 root root 3243 Mar 9 13:03 server-key.pem
$ ls -l ../registry/
total 12
-rw-r--r-- 1 root root 1846 Mar 9 13:33 registry-cert.pem
-rw-r--r-- 1 root root 3243 Mar 9 13:33 registry-key.pem

Here, optionally, we can create the directory structure /etc/docker/certs.d/cubie0:5000/ and then copy or link ca.pem into it as ca.crt.

$ sudo mkdir /etc/docker/certs.d/cubie0:5000/ -p
$ cd /etc/docker/certs.d/cubie0:5000/
$ sudo ln -s /etc/docker/certs/ca.pem ca.crt
$ ls -l
total 0
lrwxrwxrwx 1 root root 24 Mar 9 14:44 ca.crt -> /etc/docker/certs/ca.pem

This may become useful if we later try to access the Docker Registry from cubie0 itself.

Now, we need to download ca.pem, cert.pem and key.pem to our local machine, then copy the downloaded ca.pem to /etc/docker/certs.d/cubie0:5000/ and rename it to ca.crt.

On your local machine, execute:

$ mkdir ~/.docker -p
$ scp myuser@cubie0:/etc/docker/certs/{ca,cert,key}.pem ~/.docker
$ sudo mkdir /etc/docker/certs.d/cubie0:5000/ -p
$ sudo cp ~/.docker/ca.pem /etc/docker/certs.d/cubie0:5000/ca.crt
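
Since those three files now sit in ~/.docker, which is the Docker client’s default certificate path, you can also skip the per-command TLS flags by exporting environment variables. A sketch (the port matches the daemon configuration in part 2.2 below):

```shell
# Tell the Docker client to always verify TLS and where the daemon is.
# The client picks up ca.pem, cert.pem and key.pem from ~/.docker by default.
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://cubie0:2375

# Now a plain `docker info` is equivalent to:
#   docker --tlsverify --host cubie0 info
```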

Once everything is settled, we can do some optional clean-up on cubie0.

The cert.pem and key.pem files are no longer needed there and can be removed. And don’t forget to tighten the permissions of the *-key.pem files so they cannot be stolen. Here is what we should end up with:

$ sudo rm cert.pem key.pem
$ sudo chmod 0400 *-key.pem ../registry/*-key.pem ca.srl
$ ls -l
total 28
-r-------- 1 root root 3326 Mar 9 08:40 ca-key.pem
-rw-r--r-- 1 root root 2155 Mar 9 09:09 ca.pem
-r-------- 1 root root 17 Mar 9 13:33 ca.srl
-rw-r--r-- 1 root root 30 Mar 9 13:32 client-extfile.cnf
-rw-r--r-- 1 root root 1931 Mar 9 13:17 server-cert.pem
-rw-r--r-- 1 root root 75 Mar 9 13:37 server-extfile.cnf
-r-------- 1 root root 3243 Mar 9 13:03 server-key.pem
$ ls -l ../registry/
total 12
-rw-r--r-- 1 root root 1846 Mar 9 13:33 registry-cert.pem
-r-------- 1 root root 3243 Mar 9 13:33 registry-key.pem

Part 2.2: Configure Docker to listen on TCP interface with TLS Verify enabled

With all the certificates ready, we can now ask the Docker server to listen on a public port.

To do that, first create the directory /etc/systemd/system/docker.service.d/ and a file inside it named startup_options.conf:

$ sudo mkdir /etc/systemd/system/docker.service.d/ -p 
$ sudo vim /etc/systemd/system/docker.service.d/startup_options.conf

Put the following content into the file:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/certs/ca.pem --tlscert=/etc/docker/certs/server-cert.pem --tlskey=/etc/docker/certs/server-key.pem --host fd:// --host tcp://0.0.0.0:2375

Notice the empty ExecStart= line? It’s intentional: it clears the default ExecStart from the packaged unit file before we set our own, so don’t remove it.

Then save the file and quit vim (without shutting down the computer). Reload the systemd daemon and restart the Docker server:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker.service

Now Docker should be up and running again.

To verify everything is working as expected, execute the following command on our local machine:

$ docker --tlsverify --host cubie0 info

If everything is good, it should print information about the manager node.

OK, it’s time to set up the Swarm.

Part 3: Enable Docker Swarm mode

On cubie0, or whichever board is your Docker manager, enter the command

$ sudo docker swarm init --advertise-addr eth0

Here eth0 is the Ethernet interface of my cubie0. After a few seconds it should prompt you with a message like this:

Swarm initialized: current node (pa9369....) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-077.... 10.220.179.115:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Now log in to the other board, the worker node (cubie1 in my case), which should already have Docker installed and running from part 1 of this article, and execute:

$ sudo docker swarm join --token SWMTKN-1-077.... cubie0:2377
This node joined a swarm as a worker.

Note that we didn’t follow the prompt 100%: we are using the DNS name rather than the IP address. (And if you ever lose the token, docker swarm join-token worker on the manager will reprint the whole join command.)

And it’s done. Simple, right?

While you’re on cubie1, you can also optionally grab a copy of the previously created ca.pem and put it into /etc/docker/certs.d/cubie0:5000/.

$ sudo mkdir /etc/docker/certs.d/cubie0:5000/ -p
$ sudo scp myuser@cubie0:/etc/docker/certs/ca.pem /etc/docker/certs.d/cubie0:5000/ca.crt

Let’s recap what we’ve got here so far:

  • We have two devices running Docker Swarm.
  • One of them is called cubie0, which is the manager.
  • The other is called cubie1, which is the worker.
  • The manager cubie0 can be remotely accessed through the docker command from our local machine. The worker cubie1 can’t.
  • cubie1 only takes orders from cubie0, and it should not be operated on directly.

Part 4: Run Docker Registry on the manager node

Docker Registry is a service used to distribute Docker Images. We need it here because, so far, it is the only official way to distribute Docker Images across different nodes.

Not every node in our Docker Swarm has the Image needed to create Containers. When that happens, the node can automatically download the missing Image from our Registry, rather than us having to manually upload the Image to the node ourselves.

It is also worth noting that Docker Registry is not part of the docker/dockerd application. In fact, it is a completely separate project, and it can run as an independent HTTP server. Treat it like any other Container you could put on your Docker host rather than a part of the Docker installation itself.

Here, we’ll run Docker Registry on our cubie0 manager node as a normal Docker Container using the official registry Image. For more detail, you can read the document Deploy a registry server.

First, log in to cubie0, then download the Image and generate the htpasswd file needed for access restriction, all with one command:

$ sudo docker run --name registry --rm --entrypoint htpasswd registry:2 -Bbn "docker-registry-user" "docker-registry-user-password" | sudo tee /etc/docker/registry/htpasswd
Unable to find image 'registry:2' locally
2: Pulling from library/registry
6a2a63c54ac7: Pulling fs layer
....
f674eec89ecb: Pull complete
Digest: sha256:3b00e5438ebd8835b...
Status: Downloaded newer image for registry:2
docker-registry-user:$2y$05$R.FtgAlvbXHMOplTVMB...E2m

This downloads the Docker Registry Image, creates a Container named registry, and uses that Container to generate a new user docker-registry-user with password docker-registry-user-password, saving the result to /etc/docker/registry/htpasswd. This file will be used to set up HTTP Basic Authentication for our Registry service, because we don’t want to give everybody permission to write Images into it.

The registry Container we just created should exit and be removed automatically after the command is done. If it isn’t, we can stop and remove it manually:

$ sudo docker container stop registry
$ sudo docker rm registry

OK, we have the htpasswd file set up and the Registry Image downloaded; now we can start the real Registry:

$ sudo docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-e "REGISTRY_STORAGE_DELETE_ENABLED=true" \
-v /etc/docker/registry:/certs \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Rui NI's Private Registry Realm" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/certs/htpasswd" \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/registry-key.pem" \
registry:2

That’s it, done!

If we open https://cubie0:5000/v2/ in a web browser, we should be prompted with a dialog box asking for a username and password. Log in with the credentials generated previously to confirm everything is working.

Notice the following parameter:

-v /etc/docker/registry:/certs

It tells Docker to mount /etc/docker/registry into the Registry Container so it can access the htpasswd file as well as the certificates we generated in part 2.1.

Also, if one Registry user is not enough for you, you can add more by running

$ sudo docker exec -i registry htpasswd -Bbn "new-docker-registry-user" "new-docker-registry-user-password" | sudo tee -a /etc/docker/registry/htpasswd

(Note -i rather than -it here: a TTY would add carriage returns to the piped output and corrupt the htpasswd file.)

To log in to our Registry, execute the following command on your local machine:

$ docker login cubie0:5000
Username: docker-registry-user
Password:
WARNING! Your password will be stored unencrypted in /home/nirui/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Part 5: Deploy demo application to our Swarm

Now that everything has been set up, we can finally enjoy our new Docker. So let’s create a demo application and put it onto the Swarm.

The demo application used here basically comes from another of Alex Ellis’s blog posts, Scale a real microservice with Docker 1.12 Swarm Mode, with a little twist so it can work on our ARM32 boards.

On our local machine, create a directory named swarm-mode-guid:

$ mkdir swarm-mode-guid
$ cd swarm-mode-guid

Then initialize the project:

$ npm init -y
$ npm i --save uuid express

This will create the project manifest and download the necessary dependencies for our application.

Next, create a file named app.js, and put the code that Alex Ellis uses into it (you’ll have the code if you read his blog).
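
If you don’t want to dig through his post right now, here is a minimal stand-in for app.js that serves the same /guid route. This is my own sketch, not Alex’s original code, and it assumes a recent uuid package with the { v4 } export:

```shell
# Write a minimal app.js: an Express server that answers GET /guid
# with a fresh UUID plus the hostname (which is the container ID
# when running inside Docker).
cat > app.js <<'EOF'
const express = require('express');
const { v4: uuidv4 } = require('uuid');
const os = require('os');

const app = express();
app.get('/guid', (req, res) => {
  res.json({ guid: uuidv4(), container: os.hostname() });
});
app.listen(9000, () => console.log('listening on 9000'));
EOF
```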

For the Dockerfile, we’ll use the following:

FROM arm32v6/node:lts-alpine
ADD ./package.json ./
RUN npm i
ADD ./app.js ./
EXPOSE 9000
CMD ["node", "./app.js"]

We also need a docker-compose.yml file; put the following content into it:

version: '3'

services:
  web:
    image: cubie0:5000/swarm-mode-guid
    ports:
      - "9000:9000"

Finally, we can start the deployment workflow.

The first step is to ask our manager to build the Docker Image from the Dockerfile for us. On our local machine, execute:

$ docker --tlsverify --host cubie0 build --tag swarm-mode-guid .
Sending build context to Docker daemon 1.983MB
Step 1/6 : FROM arm32v6/node:lts-alpine
---> d29199806c63
Step 2/6 : ADD ./package.json ./
---> 2738df3a1c4b
Step 3/6 : RUN npm i
---> Running in c0e2884344f8
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN swarm-mode-guid@1.0.0 No description
npm WARN swarm-mode-guid@1.0.0 No repository field.
added 49 packages from 40 contributors and audited 122 packages in 29.823s
found 0 vulnerabilities
Removing intermediate container c0e2884344f8
---> 9262e809b9cc
Step 4/6 : ADD ./app.js ./
---> 9e3803edc8af
Step 5/6 : EXPOSE 9000
---> Running in 193a12bcd354
Removing intermediate container 193a12bcd354
---> a59f298461ea
Step 6/6 : CMD ["node", "./app.js"]
---> Running in 3f506a73c1c1
Removing intermediate container 3f506a73c1c1
---> e0ddfdfed717
Successfully built e0ddfdfed717
Successfully tagged swarm-mode-guid:latest

We can confirm the image has been built successfully by executing:

$ docker --tlsverify --host cubie0 images
REPOSITORY TAG IMAGE ID ...
swarm-mode-guid latest e0ddfdfed717 ...
registry 2 c99846f41d25 ...

Next, we need to ask the manager to tag and push the image onto our Registry:

$ docker --tlsverify --host cubie0 tag swarm-mode-guid cubie0:5000/swarm-mode-guid
$ docker --tlsverify --host cubie0 push cubie0:5000/swarm-mode-guid
The push refers to repository [cubie0:5000/swarm-mode-guid]
e6472c1ad1e6: Pushed
738e5e83ff91: Pushed
a1b58ddbf133: Pushed
144c8b2fd231: Pushed
91570bff57a5: Pushed
91b223746bb0: Pushed
latest: digest: sha256:b2b5de7625d8d7d1... size: 1576

And finally, we can deploy:

$ docker --tlsverify --host cubie0 stack deploy --with-registry-auth --compose-file docker-compose.yml swarm-mode-guid-stack
Creating network swarm-mode-guid-stack_default
Creating service swarm-mode-guid-stack_web

Success! The service should be accessible now

$ curl cubie0:9000/guid
{"guid":"97a82d59-f3fd-49f4-93df-d35113b8111b","container":"7633dce83e4b"}

You can see the service’s information with:

$ docker --tlsverify --host cubie0 service ls
ID NAME MODE
d6c53c2zwj61 swarm-mode-guid-stack_web replicated ...
$ docker --tlsverify --host cubie0 service ps swarm-mode-guid-stack_web
ID NAME IMAGE NODE
q5jeqsh3kok1 swarm-m... ... cubie0 ...

Then, to scale it to 2 replicas, just run:

$ docker --tlsverify --host cubie0 service scale swarm-mode-guid-stack_web=2
swarm-mode-guid-stack_web scaled to 2
overall progress: 2 out of 2 tasks
1/2: running [===========================>]
2/2: running [===========================>]
verify: Service converged

And now we can see the service is running on both cubie0 and our cubie1.

$ docker --tlsverify --host cubie0 service ps swarm-mode-guid-stack_web
ID NAME IMAGE NODE
q5jeqsh3kok1 swarm-mo..1 ... cubie0
7wmq0k37midx swarm-mo..2 ... cubie1

Part 6: Some additional maintenance information you might want to know

With all these Images and Services running and changing on our Swarm, it is important to keep everything clean; otherwise we will end up buried in unused Images and Containers.

Maintenance of Docker is mainly done with the prune commands, which remove unused objects from Docker. For more information, check out the prune documentation.

The prune commands also enable us to perform maintenance automatically, as discussed in this StackOverflow question.
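
As a sketch of what that automation could look like, here is a hypothetical crontab entry for the manager; the weekly cadence and the ten-day age filter are just examples, not a recommendation:

```
# Every Sunday at 03:00, remove stopped containers, dangling images and
# unused networks older than ten days (240h). --force skips the
# confirmation prompt, which cron cannot answer.
0 3 * * 0  /usr/bin/docker system prune --force --filter "until=240h"
```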

The Registry, on the other hand, can be a bit of a hard cookie to bite into. I haven’t found a good way to clean it automatically so far; the closest I’ve found is this answer on StackOverflow.

Conclusion

I have no conclusion on this.
