
Straight to production with Docker, Ansible and CircleCI

Docker shakes up the way we put code into production. In this article I'll present the main obstacles I encountered while setting up the production workflow of a simple Node.js API called cinelocal.

Step 1: Set up a development environment

Docker-compose is a tool for defining and running multi-container Docker applications. Cinelocal-api requires 3 services running in 3 containers: a data volume container, a PostgreSQL database and the Node.js API itself.

Here is the corresponding docker-compose.yml defining the 3 services and their relations (read more about compose files):

    # docker-compose.yml
    data:
      image: busybox
      volumes:
        - /data

    db:
      image: postgres:9.4
      volumes_from:
        - data
      ports:
        - "5432:5432"

    api:
      image: node:wheezy
      working_dir: /app
      volumes:
        - .:/app
      links:
        - db
      ports:
        - "8000:8000"
      command: npm run watch

Notice the .:/app line in the api service: it mounts the current folder as a volume of the container, so when you edit a source file the change is immediately visible inside the container.

The npm command of the API container is defined in the package.json file. It runs database migrations (if any) and starts nodemon which is a utility that monitors for any change in your source and automatically restarts your server.

package.json:

 {   "scripts": {     "watch": "db-migrate up --config migrations/database.json && node ./node_modules/nodemon/bin/nodemon.js src/server.coffee"   } } 

Now the API can be started with docker-compose up api (it might crash the first time because the node container does not wait for the postgres container to be ready; it will work the second time. This is a known compose issue).
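One common workaround, not part of the original setup and shown here only as a sketch, is to block until Postgres accepts connections before launching the app. The helper below is hypothetical and assumes nc is available in the node:wheezy image:

    #!/bin/sh
    # wait-for-db.sh (hypothetical helper): block until postgres accepts connections
    # usage in docker-compose.yml:
    #   command: sh -c "./wait-for-db.sh db 5432 && npm run watch"
    until nc -z "$1" "$2"; do
      echo "waiting for postgres at $1:$2..."
      sleep 1
    done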

Unfortunately, using Docker adds a layer of complexity to usual commands such as installing a new Node.js package or creating a new migration, because they must be run in the container. So:

  • All your commands should be prefixed by docker-compose run --rm api
  • The edited files (package.json with npm install, or migration files with db-migrate) will be owned by the docker user.

To bypass this complexity, you can use a Makefile that provides a set of commands.

    # Makefile
    whoami := $(shell whoami)

    migration-create:
    	docker-compose run --rm api \
    	./node_modules/db-migrate/bin/db-migrate create --config migrations/database.json $(name) \
    	&& sudo chown -R ${whoami}:${whoami} migrations

    migration-up:
    	docker-compose run --rm api ./node_modules/db-migrate/bin/db-migrate up --config migrations/database.json

    migration-down:
    	docker-compose run --rm api ./node_modules/db-migrate/bin/db-migrate down --config migrations/database.json

    install:
    	docker-compose run --rm api npm install

    npm-install:
    	docker-compose run --rm api \
    	npm install --save $(package) \
    	&& sudo chown ${whoami}:${whoami} package.json

Now to install a package you can run make npm-install package=lodash, or to create a new migration: make migration-create name=add-movie-table.

Step 2: Provisioning a server

With Docker, whatever your stack is, the provisioning is the same: you have to install docker and, optionally, docker-compose. That's it.

Ansible is a great tool to provision a server. You can compose a playbook with roles found on Ansible Galaxy.

To install docker and docker-compose on a server:

    # devops/provisioning.yml
    - name: cinelocal-api provisioning
      hosts: all
      sudo: true
      pre_tasks:
        - locale_gen: name=en_US.UTF-8 state=present
      roles:
        - angstwad.docker_ubuntu
        - franklinkim.docker-compose
      vars:
        docker_group_members:
          - ubuntu
        update_docker_package: true

Before running the playbook you need to install the roles:

 ansible-galaxy install -r devops/requirements.yml -p devops/roles 

with:

    # devops/requirements.yml
    - src: angstwad.docker_ubuntu
    - src: franklinkim.docker-compose

I tested this provisioning with Ansible 2.0.2 on Ubuntu Server 14.04.

    # Makefile
    install:
    	ansible-galaxy install -r devops/requirements.yml -p devops/roles

    provisioning:
    	ansible-playbook devops/provisioning.yml -i devops/hosts/production
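The -i flag points to an inventory file that is never shown in this article. Here is a minimal sketch of what devops/hosts/production could look like; the host alias, address and user below are placeholder assumptions of mine, not values from the project:

    # devops/hosts/production (hypothetical inventory)
    production ansible_host=203.0.113.10 ansible_ssh_user=ubuntu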

Step 3: Package your app and deploy

Each time I deploy the API, I build a new Docker image that I push to Docker Hub (the GitHub of Docker images).

The construction of the API image is described in a Dockerfile:

    FROM node:wheezy

    # Create app directory
    RUN mkdir -p /app
    WORKDIR /app

    # Install app dependencies
    COPY package.json /app/
    RUN npm install

    # Bundle app source
    COPY . /app

    EXPOSE 8000
    CMD [ "npm", "start" ]

To build and push the image on Docker Hub, I added these two tasks in the Makefile:

    # Makefile
    build:
    	docker build -t nicgirault/cinelocal-api .

    push: build
    	docker push nicgirault/cinelocal-api

Now make push builds the image and pushes it to Docker Hub (after authenticating with docker login).

In the development environment I want to mount my code as a volume, whereas in production the code should be baked into the image. Using multiple Compose files enables you to customize a Compose application for different environments. In our case, we want to split the description of the api service into a common configuration and an environment-specific configuration.

    # docker-compose.yml (common configuration)
    api:
      working_dir: /app
      links:
        - db
      ports:
        - "8000:8000"
      environment:
        DB_DATABASE: postgres
        DB_USERNAME: postgres

    # docker-compose.dev.yml (development specific configuration)
    api:
      image: node:wheezy
      volumes:
        - .:/app
      command: npm run watch

    # docker-compose.prod.yml (production specific configuration)
    api:
      image: nicgirault/cinelocal-api

To merge the specific configuration into the common configuration:

     docker-compose -f docker-compose.yml -f docker-compose.dev.yml up api   

By default, Compose also reads docker-compose.override.yml if it is present, so I renamed docker-compose.dev.yml to docker-compose.override.yml: in development, a plain docker-compose up api picks up the development overrides automatically.
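The two environments then diverge only in the command line. A usage sketch, assuming the three files above:

    # development: Compose merges docker-compose.yml + docker-compose.override.yml automatically
    docker-compose up api

    # production: select the production override explicitly and run detached
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d api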

Now I can deploy the API using 3 commands described in a simple Ansible playbook:

    # devops/deploy.yml
    - name: Cinelocal-api deployment
      hosts: all
      sudo: true
      vars:
        repository: https://github.com/nicgirault/cinelocal-api.git
        path: /home/ubuntu/www
        image: nicgirault/cinelocal-api
      tasks:
        - name: Pull github code
          git: repo={{ repository }} dest={{ path }}

        - name: Pull API container
          shell: docker pull {{ image }}

        - name: Start API container
          shell: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d api
          args:
            chdir: "{{ path }}"

In the Makefile:

    deploy: push
    	ansible-playbook -i devops/hosts/production devops/deploy.yml

make deploy builds the image, pushes it and runs the playbook.

Read more about docker-compose in production.

Note: Ansible ships with Docker modules that let you avoid installing docker-compose on the server, but they force you to duplicate the description of your Docker architecture in the playbook. Although I didn't use them for this project, you might consider them.
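As an illustration of that duplication, here is a hypothetical sketch of the same api service declared with the docker module that shipped with Ansible 2.0; the options mirror the compose files above but are my assumption, not the article's code:

    # hypothetical task using Ansible's docker module instead of docker-compose
    - name: Start API container
      docker:
        name: cinelocal-api
        image: nicgirault/cinelocal-api
        state: started
        ports:
          - "8000:8000"
        links:
          - "db:db"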

Bonus: continuous integration

This section explains how to deploy to production automatically when merging into the master branch, provided the build passes.

This is quite simple with CircleCI and Docker Hub.

Here is a circle.yml file that runs the tests and deploys if the build passes, provided the destination branch is master:

    machine:
      services:
        - docker
      python:
        version: 2.7.8
      post:
        # the circle instance already runs postgresql
        - sudo service postgresql stop

    dependencies:
      pre:
        - pip install ansible
        - pip install --upgrade setuptools
      override:
        - docker info
        - docker build -t nicgirault/cinelocal-api .

    test:
      override:
        - docker-compose run api npm test

    deployment:
      prod:
        branch: master
        commands:
          - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
          - docker push nicgirault/cinelocal-api
          - echo "openstack ansible_host=$PROD_HOST ansible_ssh_user=$PROD_USER" > devops/hosts/production
          - ansible-playbook -i devops/hosts/production devops/deploy.yml

In addition you’ll have to:

  • define the environment variables used in this file in the CircleCI project settings page
  • authorize CircleCI to deploy on your server (see the sketch after this list):
    1. generate an SSH key pair (use the command ssh-keygen)
    2. add the private key in the project settings on the CircleCI interface
    3. add the public key to the ~/.ssh/authorized_keys file on the server
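A minimal sketch of steps 1 and 3; the key file name and server address are placeholders of mine, not values from the article:

    # 1. generate a dedicated key pair; no passphrase so CircleCI can use it non-interactively
    ssh-keygen -t rsa -b 4096 -f circleci_deploy_key -N ""

    # 3. append the public key to the server's authorized_keys
    ssh-copy-id -i circleci_deploy_key.pub ubuntu@your-server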

From now on, deploying to production will be as simple as merging a branch into master.

At Theodo we love to deploy our code to production! If you liked this article, you'd probably be a good match for our ever-growing tech team at Theodo.

Join Us
