In addition to this site, I have personal projects that require web hosting, but as I have neither the reliability nor the availability requirements for multiple nodes, I run a single VPS for all of them. I’ve been experimenting with Docker lately, with a view to containerising these projects in order to simplify deployment and maintenance. The driving factors behind this are a lethal combination of laziness and curiosity.

The way things were

This is an approximation of my previous setup, using a single node to serve different content to two domains:

Original setup - nginx accepts web requests and either serves static content or routes them to the webapp

The points of note are:

  • single node serving multiple domains
  • nginx listens on port 80
  • webapp listens on port 12345, on localhost only
  • domain 1 gets static content
  • domain 2 requests are proxied to webapp

The webapp is a Scala Spray app that compiles to a jar with an embedded Jetty server, but it might as well be Rails or Node.js. In any case, the setup works fine and the nginx configuration is simple. So what’s wrong with it? Why bother using Docker at all?

Let’s start with what’s wrong with it:

  1. The server is required to run specific versions of nginx and sbt.
  2. Deployments for nginx and the webapp are heterogeneous processes, which adds complexity and therefore time.

What does Docker provide that counteracts these problems?

Linux Containers

With a limited knowledge of virtualisation, my understanding had been that virtualisation === virtual machines. This is incorrect - Docker uses one of the many other forms of virtual environment: the Linux Container. These containers maintain resource isolation from the host environment but are very lightweight, with very little overhead.

It’s possible then to package an individual application in a container that includes just the dependencies that application requires, and deploy it anywhere that runs Docker. This means two applications could use different versions of, say, Ruby, without having to worry about a complicated setup on the host machine.

In addition, there is no messing about with different Linux distributions - the applications run the same on Debian as they do on Gentoo because they only know about the resources of the container.

Dockerfiles and the Docker client

We’ve used a vague definition of ‘Docker’ so far, so let’s be a little more specific:

  • A Docker image is defined by a Dockerfile.
  • A Docker container is an instance of a Docker image.
  • The Docker server is responsible for building Docker images and running Docker containers.
  • The Docker client is used to interact with the Docker server.

Dockerfiles provide a single language for preparing the environment an application needs to run. This homogeneity simplifies deployments for different kinds of applications: you don’t need a bespoke deployment script for each one, because the environment setup is captured when the Docker image is built. Similarly, the Docker client is the single interface for orchestrating deployments, which also helps reduce complexity.
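
To make that concrete, here is a minimal Dockerfile sketch for a hypothetical Ruby app (none of this comes from my projects): the image carries its own Ruby version and dependencies, so nothing needs to be installed on the host.

    # Hypothetical example: the image carries its own Ruby version and gems
    FROM ruby:2.1
    WORKDIR /app
    COPY . /app
    RUN bundle install
    CMD ["bundle", "exec", "ruby", "app.rb"]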

The way things could be

I wanted to move to a setup where both nginx and the webapp ran in Docker containers. Requests to port 80 would be forwarded to the nginx container, whose nginx instance would continue to be responsible for routing as well as serving static content. The webapp would listen on port 12345, but only on localhost, preventing it from being reached by any means other than via nginx.

While nginx would be containerised, I still wanted the option of modifying its configuration and static content from the host. This meant that the nginx container had to be able to mount a volume whose source came from the host file system. This is what the new architecture looked like:

Updated setup - nginx runs in a Docker container, accepts web requests and either serves static content or routes them to the webapp running in another Docker container

I began by defining a Dockerfile for the webapp. As mentioned previously, it’s a Scala webapp that compiles to a jar and runs with an embedded Jetty server. This proved an easy target for a Dockerfile - install Java and execute the jar file. I fired it up using the Docker client, listening on localhost:12345, and it worked perfectly, accepting requests from curl and returning the expected content.
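
A sketch of what that looks like - the base image, jar path and image name here are placeholders rather than the real ones:

    # Sketch of the webapp Dockerfile: install Java, copy the jar in, run it
    FROM openjdk:8-jre
    COPY target/scala-2.11/webapp-assembly.jar /opt/webapp/webapp.jar
    EXPOSE 12345
    CMD ["java", "-jar", "/opt/webapp/webapp.jar"]

Building and running it for the initial test, with the port published on localhost only:

    docker build -t webapp .
    docker run -d --name webapp -p 127.0.0.1:12345:12345 webapp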

Next up was nginx. There is an official nginx image on Docker Hub, and I began by using this. The first test was to get it serving static content, and the official image worked well for this purpose. I didn’t need to define a custom Dockerfile for nginx, but instead used the Docker client to run a container from the official image. With this image I was able to mount a folder from the host, and I amended the existing nginx config for the static content site so that it was mounted along with the static content. Great so far.
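
Something along these lines did the job - the host paths here are illustrative, but the container paths are the official image’s defaults (it loads config from /etc/nginx/conf.d and serves /usr/share/nginx/html):

    # Run the official nginx image with config and static content mounted from the host
    docker run -d --name nginx \
      -p 80:80 \
      -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro \
      -v /srv/nginx/static:/usr/share/nginx/html:ro \
      nginx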

Putting it all together

nginx still needed to route traffic for the second domain to the webapp. The old configuration relied on pointing requests for this domain at localhost port 12345, and I needed to set up the Docker equivalent. The webapp’s IP address would now be a Docker-provided one rather than localhost, so I needed to get this information to nginx somehow. I could hard-code it, but the IP address might change if the container restarted.

Wouldn’t it be great if, rather than having to pass the webapp’s IP address and port to nginx manually, I could just tell the nginx container that another container called “webapp” existed, and it would magically get hold of webapp’s IP address and port itself? Well, I did exactly that using linked containers. In this case, specifying the webapp as a linked container when starting nginx created environment variables holding the webapp container’s IP address and port.
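
In docker run terms that means starting nginx with a --link flag; because the webapp container exposes port 12345, Docker injects environment variables describing it into the nginx container. Roughly (volume mounts omitted for brevity):

    # Start the webapp, then link it into the nginx container under the alias "webapp"
    docker run -d --name webapp webapp
    docker run -d --name nginx -p 80:80 --link webapp:webapp nginx

    # Inside the nginx container this creates variables along the lines of:
    #   WEBAPP_PORT_12345_TCP_ADDR  - the webapp container's IP address
    #   WEBAPP_PORT_12345_TCP_PORT  - the exposed port, 12345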

There was one final hurdle: how to get the nginx configuration to make use of these environment variables. Unfortunately, there is no out-of-the-box solution for this, but a bit of research led me to an existing solution in the form of an alternative nginx Docker image called nginx-template-image. This image included a script that looked for nginx config files ending in .templ and replaced any environment variable placeholders with their actual values, saving the result as a .conf file that nginx would then use.
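
I won’t reproduce that image’s exact conventions here, but a template along these lines captures the idea - the server name is made up, and the ${...} placeholder syntax is an assumption:

    # domain2.conf.templ - placeholders are substituted with the link variables
    # before nginx starts, producing domain2.conf
    server {
        listen 80;
        server_name domain2.example.com;

        location / {
            proxy_pass http://${WEBAPP_PORT_12345_TCP_ADDR}:${WEBAPP_PORT_12345_TCP_PORT};
        }
    }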

This was the last piece in the puzzle and allowed me to achieve the new architecture. I wrote some scripts to easily start, stop and restart the two containers, and this is the configuration in which my website and the anonymous webapp are now running. The beauty of it is the uniformity with which the services are deployed and maintained, as well as how easy it is to add other applications in the future.
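
For completeness, the restart script boils down to little more than remove-and-run - the image names and host paths here stand in for the real ones:

    #!/bin/sh
    # Sketch of a restart script: recreate both containers from their images
    docker rm -f webapp nginx 2>/dev/null

    docker run -d --name webapp webapp
    docker run -d --name nginx -p 80:80 \
      --link webapp:webapp \
      -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro \
      -v /srv/nginx/static:/usr/share/nginx/html:ro \
      nginx-template-image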