26.8.16

Blue-Green Deployment with Docker and Nginx



Here at Klokan Technologies, we mostly develop software services and we run them as Docker containers. To release new versions, we use the blue-green deployment technique. This reduces downtime and limits the impact of configuration errors.
The basic idea is simple. Instead of having just one production environment, there are two identical ones: blue and green. They take turns actively running the service. A new version is deployed to the environment that is not currently active. Only after the service successfully starts up there and passes all health checks is that environment marked as active. The old active environment is then stopped and prepared to receive the next deployment.
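The health check mentioned above can be as simple as polling the new environment's HTTP endpoint until it answers. A minimal sketch in Python; the function name, URL, and timeout values are our illustration, not part of any tooling shown in this post:

```python
import time
import urllib.request

def wait_until_healthy(url, timeout=30, interval=1):
    """Poll `url` until it answers HTTP 200, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    return True
        except OSError:
            pass  # connection refused or timed out; the service is not up yet
        time.sleep(interval)
    return False
```

Only when this returns True for the inactive environment would the switch proceed.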
The switch from one active environment to the other is the crucial step. If it is not atomic, there is a short window during which neither environment is active, and any request arriving in that window fails. We use Nginx for this step because its configuration reload is graceful and does not drop connections, and because we run it as a reverse proxy anyway.

Example

To illustrate the whole process, let us walk through an example. We will deploy a Python application from a file called hello.py, listed below. It listens for HTTP requests on port 9000 and responds with a simple greeting.

$ cat hello.py
from wsgiref.simple_server import make_server

GREETING = b'Hello, world!\n'

def hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [GREETING]

make_server('0.0.0.0', 9000, hello).serve_forever()

We will have one container for the application and one for Nginx, both on the same Docker network so that they can reach each other by name. In our production environment we keep separate backend and frontend networks, but that is not necessary here. We will use the official Docker images.

$ docker network create example
$ docker pull nginx
$ docker pull python:3


First we start the application itself in the BLUE environment. Notice that we name the container hello-BLUE.

$ docker run \
    --name hello-BLUE \
    -v $(pwd)/hello.py:/usr/local/src/hello.py \
    --net=example \
    -d \
    python:3 \
    python /usr/local/src/hello.py

Then we put the application configuration for Nginx into a file called nginx-conf.d/hello.conf. It sets up an HTTP proxy that forwards all requests to the application. Here again, we have to refer to the application container as hello-BLUE.

$ cat nginx-conf.d/hello.conf
server {
    listen 80;
    location / {
        proxy_pass http://hello-BLUE:9000;
    }
}

Now we can start Nginx. It will read the configuration file we have just created and publish the application proxy on port 8080.

$ docker run \
    --name nginx \
    -v $(pwd)/nginx-conf.d:/etc/nginx/conf.d \
    --net=example \
    -p 8080:80 \
    -d \
    nginx

Check that it works.

$ curl http://localhost:8080
Hello, world!

Now we make a new version of the service by changing the greeting in the hello.py file.

$ cat hello.py
from wsgiref.simple_server import make_server

GREETING = b'Hello, world! (VERSION 2)\n'  # <=== Change here

def hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [GREETING]

make_server('0.0.0.0', 9000, hello).serve_forever()

We start the application again, this time in the GREEN environment. Notice the change in the container name. Otherwise the command is the same as before.

$ docker run \
    --name hello-GREEN \
    -v $(pwd)/hello.py:/usr/local/src/hello.py \
    --net=example \
    -d \
    python:3 \
    python /usr/local/src/hello.py

We have to change the name of the application container in the Nginx configuration as well.

$ cat nginx-conf.d/hello.conf
server {
    listen 80;
    location / {
        proxy_pass http://hello-GREEN:9000;  # <=== Change here
    }
}

Before switching active environments, we must check that the new configuration is valid and that the application container is reachable. Nginx has a command line option for exactly this, nginx -t, so we run it in a temporary container on the same network.

$ docker run \
    --rm \
    -v $(pwd)/nginx-conf.d:/etc/nginx/conf.d \
    --net=example \
    nginx \
    nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If the check succeeds, we send the Nginx container a SIGHUP signal. This makes it re-read the configuration and gracefully replace its worker processes without dropping any requests.

$ docker kill -s HUP nginx

And we can see that we have a new version deployed.

$ curl http://localhost:8080
Hello, world! (VERSION 2)

The BLUE environment is no longer needed, so we remove it.

$ docker stop hello-BLUE
$ docker rm hello-BLUE

Conclusion

In our example, the service had only one application container. There could be more, and they would all be switched together. There could also be several services running on the same machine; each would have its own BLUE and GREEN environments.
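With several application containers per environment, the Nginx side generalizes naturally: instead of a single proxy_pass target, each colour gets an upstream group listing all of its containers. A hypothetical sketch; the container names hello-BLUE-1 and hello-BLUE-2 are illustrative, not from the example above:

```nginx
upstream hello {
    server hello-BLUE-1:9000;
    server hello-BLUE-2:9000;
}

server {
    listen 80;
    location / {
        proxy_pass http://hello;
    }
}
```

Switching environments then means rewriting the server lines in the upstream block before sending the reload signal.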
The Nginx container and other stateful infrastructure containers are not deployed using this technique. This includes databases and most caches. We can't stop or restart databases, because both environments use them, even during the switch, and we usually don't want to clear caches on each deployment.
There is more to making this approach viable for automated deployment. We haven’t mentioned how to determine which environment is currently active for a given service and machine. Also, manually editing configuration files before each deployment is clearly not the way to go. At Klokan Technologies, we actually use Ansible with a custom module to solve both problems, but that is a story for another day.
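One simple way to answer the "which environment is active?" question is to derive it from Docker itself: whichever colour has a running container is the active one. A hedged shell sketch under that assumption; the helper names are ours, not standard tooling, and this is not the Ansible module we actually use:

```shell
#!/bin/sh
# Report the active colour for a service by asking Docker which of the
# two container names is running. Assumes exactly one of
# <service>-BLUE / <service>-GREEN runs at any time.
active_color() {
    service="$1"
    if docker ps --filter "name=${service}-BLUE" --filter status=running -q | grep -q .; then
        echo BLUE
    else
        echo GREEN
    fi
}

# Given the active colour, name the one to deploy into next.
other_color() {
    if [ "$1" = "BLUE" ]; then
        echo GREEN
    else
        echo BLUE
    fi
}
```

A deployment script could then use `other_color "$(active_color hello)"` to pick the container name for the next release.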