Docker in your production server – I

This is the first post of a series explaining how to run your applications on a Docker production server.

We need to install Docker (>= 1.10.0).
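If Docker or docker-compose is missing, a quick way to check the installed versions is shown below; the convenience script is Docker's official one, but feel free to install the packages from your distribution instead:

docker --version          # the post assumes Docker >= 1.10.0
docker-compose --version  # and docker-compose >= 1.6.0

# Official convenience script, in case Docker is not installed yet
curl -fsSL https://get.docker.com | sh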

Running applications in docker

Docker means isolation. You must isolate each service in a different container. This means you will run your frontend, backend and database in separate containers.

We are going to use docker-compose (>=1.6.0) to accomplish this task. It is the de facto standard in the Docker ecosystem and simplifies the classic interaction with the Docker API via docker run commands.

A basic docker-compose.yml to launch a recipes website:

version: '2'

volumes:
  database:
  static_files:
  media_files:

services:
  nginx:
    image: pando85/openeats-nginx:latest
    links:
      - web
    volumes_from:
      - web
    ports:
      - "8080:80"

  web:
    image: pando85/openeats:latest
    links:
      - database
    expose:
      - "8000"
    environment:
      - DEBUG=False
    volumes:
      - static_files:/usr/src/app/openeats/static
      - media_files:/usr/src/app/openeats/site-media
    command: ["/usr/local/bin/gunicorn", "openeats.wsgi:application", "-b :8000", "-w 6", "-t 5000"]

  database:
    image: postgres:9.6
    volumes:
      - database:/var/lib/postgresql
    environment:
      - POSTGRES_DB=openeats
      - POSTGRES_USER=openeats
      - POSTGRES_PASSWORD=admin
    expose:
      - "5432"

This docker-compose.yml defines one service (a recipe website) split across three different containers: nginx, web and database.
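For comparison, launching just the database and web containers by hand would look roughly like the commands below (a sketch based on the compose file above; flags may need adjusting), which is exactly the kind of repetition docker-compose saves us from:

docker run -d --name database \
    -e POSTGRES_DB=openeats -e POSTGRES_USER=openeats -e POSTGRES_PASSWORD=admin \
    -v database:/var/lib/postgresql \
    postgres:9.6

docker run -d --name web --link database \
    -e DEBUG=False \
    -v static_files:/usr/src/app/openeats/static \
    -v media_files:/usr/src/app/openeats/site-media \
    pando85/openeats:latest \
    /usr/local/bin/gunicorn openeats.wsgi:application -b :8000 -w 6 -t 5000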

First of all, run the following commands to initialize the database, apply the migrations, collect the static files and create a superuser:

docker-compose up -d database && \
docker-compose run --rm web python manage.py makemigrations && \
docker-compose run --rm web python manage.py migrate && \
docker-compose run --rm web python manage.py collectstatic --noinput && \
docker-compose run --rm web python manage.py createsuperuser

After that, we just need to run docker-compose up -d. This will start the web and nginx containers in detached mode.

We can check that everything is working at http://127.0.0.1:8080.
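In shell form, the start-up and a quick sanity check could look like this (the curl call is just one way of confirming that NGINX answers):

docker-compose up -d
docker-compose ps
curl -I http://127.0.0.1:8080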

Exposing multiple applications

Let’s say we want to deploy a Gitblit server for our git repositories, too.

We can write our docker-compose.yml:

version: '2'

services:
  gitblit:
    image: jacekkow/gitblit
    ports:
      - "8081:8080"
      - "9418:9418"
      - "29418:29418"

Finally, we just need to execute docker-compose up -d again so that all the applications are running at once.

The result is both applications running side by side on the same host, each one published on its own port.
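A quick check from the command line (assuming Gitblit serves its web UI over plain HTTP on the mapped port):

curl -I http://127.0.0.1:8080   # recipes website
curl -I http://127.0.0.1:8081   # Gitblit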

Using a reverse proxy

A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers.

In other words, it is a service that listens for all HTTP/HTTPS requests and redirects each of them to its appropriate server.

We are going to use NGINX as our reverse proxy.

Create a default.conf file with a basic NGINX configuration:

server {
    server_name recipes.localhost;
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    server_name gitblit.localhost;
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}

In the same directory, create docker-compose.yml:

version: '2'

services:
  proxy:
    image: nginx
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    network_mode: host

We need to add these domains to our /etc/hosts so that they resolve to the local machine:

sudo sh -c 'echo "127.0.0.1 recipes.localhost gitblit.localhost" >> /etc/hosts'

The result is that both applications are now reachable on port 80 through their own domain names.
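After starting the proxy with docker-compose up -d, the name-based routing can be verified from the same machine; the Host-header variant is a way of testing without touching /etc/hosts:

docker-compose up -d
curl -I http://recipes.localhost/
curl -I http://gitblit.localhost/

# Alternative that does not rely on /etc/hosts
curl -I -H "Host: gitblit.localhost" http://127.0.0.1/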

Automation!

The fun part! We have already deployed different applications on one server, but what if we wanted to deploy a new one? We would have to edit the NGINX configuration and restart it manually. That's not our philosophy.

Instead of that plain NGINX container, we can try the jwilder/nginx-proxy image.

Automated nginx proxy

jwilder/nginx-proxy allows us to automate application deployments.

It can be deployed as simply as this:

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

Note: there is a bug with the docker-compose version 2 file format. A workaround is to add network_mode: "bridge" to all containers. For example:

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    network_mode: "bridge"
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
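Once nginx-proxy is up, it is possible to peek at the configuration it regenerates every time a container starts or stops; the path below is the one the image uses by default, as far as I know:

docker-compose up -d
docker logs nginx-proxy
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf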

Add it to all containers

To get this working, all the applications need to be modified to include the VIRTUAL_HOST and VIRTUAL_PORT environment variables. openeats/docker-compose.yml:

version: '2'

volumes:
  database:
  static_files:
  media_files:

services:
  nginx:
    image: pando85/openeats-nginx:latest
    network_mode: "bridge"
    environment:
      - VIRTUAL_HOST=recipes.localhost
    links:
      - web
    volumes_from:
      - web
    expose:
      - 80

  web:
    image: pando85/openeats:latest
    network_mode: "bridge"
    links:
      - database
    expose:
      - "8000"
    environment:
      - DEBUG=False
      - DATABASE_NAME=openeats
      - DATABASE_USER=openeats
      - DATABASE_PASSWORD=admin
      - ALLOWED_HOST=recipes.localhost
    volumes:
      - static_files:/usr/src/app/openeats/static
      - media_files:/usr/src/app/openeats/site-media
    command: ["/usr/local/bin/gunicorn", "openeats.wsgi:application", "-b :8000", "-w 6", "-t 5000"]

  database:
    image: postgres:9.6
    network_mode: "bridge"
    volumes:
      - database:/var/lib/postgresql
    environment:
      - POSTGRES_DB=openeats
      - POSTGRES_USER=openeats
      - POSTGRES_PASSWORD=admin
    expose:
      - "5432"

gitblit/docker-compose.yml:

version: '2'

services:
  gitblit:
    image: jacekkow/gitblit 
    network_mode: "bridge"
    environment:
      - VIRTUAL_PORT=8080
      - VIRTUAL_HOST=gitblit.localhost
    expose:
      - 8080
    ports:
      - "9418:9418"
      - "29418:29418">/code>

As you can see, we changed the port mapping for the web interface to a simple expose, but Gitblit still needs to publish ports 9418 and 29418 directly.
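Putting it all together, the whole stack can be brought up directory by directory; a sketch, assuming the proxy lives in its own nginx-proxy/ directory next to openeats/ and gitblit/:

(cd nginx-proxy && docker-compose up -d)
(cd openeats && docker-compose up -d)
(cd gitblit && docker-compose up -d)

curl -I http://recipes.localhost/
curl -I http://gitblit.localhost/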

And you can enjoy it now!

All the code can be found on GitHub!


Resources

The header image is taken from www.GlynLowe.com


Alexander Gil

Even though I studied industrial engineering at the University, I was always attracted by microcontrollers and programming. I began learning Linux a long time ago and embraced the philosophy of Free Software. I'm currently working at Datio as a system administrator and I really love to automate any kind of process. Python, Ansible and Terraform are my best friends, but ZFS is always on my mind.


3 thoughts on “Docker in your production server – I”

  1. Not a great post to be honest.
    You start with all the wrong assumptions (Docker means isolation – it doesn't. You must isolate each service in a different container – you don't. This means you will run your front, backend and database in different containers – why?).
    You offer no explanation or incentive as to why this is worth considering, thus reaching and providing no conclusions.

    In addition, this post shows a very “old-world” style of deployment. If you are using docker this way, the ends probably don’t even justify the means.
    All in all, I am getting the feeling from this post that you probably don’t get what docker is all about.

  2. Hi Will,

    I “will” try to be more respectful than you… You said that Docker does not mean “isolation” and you suggest that the right way is to deploy several services in the same container…
    Yet you dare to say that Alexander has not understood anything, don't you?
    One simple reason: maybe somebody wants to scale the front in a different way than the backend.
    Actually, I think it is more difficult to find a scenario that justifies deploying the front, backend and database in the same container…

    Could you enlighten us?

  3. I agree with Will. Taking into consideration the statement “Docker means isolation”, I think the author doesn't even know the difference between a hypervisor and a container. In the proposed example, a malicious process running on the same machine could even sniff all the data from those containers.

    Datio and the author should perform a better review of this content before publishing.
