Integrating Docker with Jenkins for continuous deployment of a Ruby on Rails application

29 January 2014

For the past few weeks, I have been working on integrating Docker and Jenkins in order to improve the continuous integration workflow of a project I work on.

The application consists of the following packages and services:

  • Ruby on Rails application (called ruby_app)
  • MySQL database
  • RabbitMQ messaging system
  • Apache Solr search platform

Here is a short description of the workflow that I wanted to have as an end result:

  1. Jenkins builds all the Docker images from the provided Dockerfiles.
  2. If the Docker images were built successfully, Jenkins runs containers from them, links them (ruby_app to MySQL, RabbitMQ, and Solr) and runs the RSpec suite for ruby_app.
  3. If the tests pass, Jenkins tags and pushes the new ruby_app image to a private Docker repository (a registry sketch follows this list), to be pulled by the staging and production servers.
  4. The staging server pulls the latest ruby_app image from the private Docker repository, runs it, and updates the Hipache proxy with the new container.
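
Step 3 assumes a private registry is already listening on port 5000 of the CI host. How it was started is not covered here; a minimal sketch, assuming the open-source docker-registry image from the public index:

# Private registry on the CI host; port 5000 matches the
# localhost:5000/ruby_app tag used by the Jenkins job below.
docker run -d -p 5000:5000 -name registry registry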


The first step is to install Jenkins on the CI server. You could either install it directly on the host or run it from a Docker image. For our purposes, we decided to install it directly.
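
If you do want to go the container route instead, running Jenkins itself under Docker is a one-liner; a hypothetical sketch (the image name is a placeholder, not one we used):

# Jenkins in a container, with its web UI exposed on port 8080.
docker run -d -p 8080:8080 -name jenkins your/jenkins-image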

We then created a Jenkins "Job" for ruby_app. Here is a gist of it:


rm -rf docker-jenkins-ci-scripts

git clone <repo-url>  # repository URL elided in the original post
cd docker-jenkins-ci-scripts

chmod +x *.sh

# Script names below are assumptions; the original invocation lines were lost.
./build-images.sh
rc=$?
if [[ $rc != 0 ]] ; then
    echo -e "Docker images build failed."
    exit $rc
fi

echo -e "Docker images build passed successfully."

./run-tests.sh
rc=$?
if [[ $rc != 0 ]] ; then
    echo -e "Tests failed."
    exit $rc
fi

echo -e "Tests passed successfully. Pushing ruby_app image to local private repository."

docker tag ruby_app localhost:5000/ruby_app
docker push localhost:5000/ruby_app

echo -e "Tested image pushed successfully to local repository."

Basically, we check out a git repository, build all the images, run the RSpec tests for ruby_app, and push the image to the private repository if everything passes.
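
As a quick sanity check that the push worked, any host that can reach the CI server can pull the image straight back (the hostname here is a placeholder):

# From another machine: pull the freshly pushed image from the private registry.
docker pull ci-server.example.com:5000/ruby_app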

The build script (build-images.sh in the Jenkins job above) goes along these lines:


# Tests whether an image has already been built. Images like "mysql",
# "rabbitmq", "solr", etc. don't have to be rebuilt often.
function check_if_exists_and_build {
  echo -e "Testing whether the $1 image has been built \n"

  if docker images | grep -w $1 ; then
    echo -e "$1 already exists. do not build. \n"
  else
    echo -e "$1 does not exist. building now... \n"

    build $1
  fi
}

# Builds a given image from the directory of the same name
function build {
  rm -f docker-built-id
  docker build -t $1 ./$1 \
    | perl -pe '/Successfully built (\S+)/ && `echo -n $1 > docker-built-id`'
  if [ ! -f docker-built-id ]; then
    echo -e "No docker-built-id file found."
    exit 1
  else
    echo -e "docker-built-id file found, so build was successful."
  fi
  rm -f docker-built-id
}

check_if_exists_and_build solr
check_if_exists_and_build mysql
check_if_exists_and_build rabbitmq
build ruby_app

Basically, we check whether an image has already been built, and build it only if it is missing. Images for MySQL, RabbitMQ, etc. do not have to be rebuilt often, so we build them once and make sure they work. However, since ruby_app is the application under development, we build it on every Jenkins run.
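
A consequence of this check: when one of the stable images does need a rebuild (say, after its Dockerfile changes), it is enough to remove the image so that the next run rebuilds it:

# Removing the image makes check_if_exists_and_build rebuild it
# on the next Jenkins run.
docker rmi mysql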

The test-run script (run-tests.sh above) goes along the following lines:


MYSQL=$(docker run -p 3306:3306 -name mysql -d mysql)
RABBITMQ=$(docker run -p 5672:5672 -p 15672:15672 -name rabbitmq -d rabbitmq)
SOLR=$(docker run -p 8983:8983 -name solr -d solr)

echo -e "Running tests for ruby_app... \n"

docker run -privileged -p 80 -p 443 -name ruby_app \
  -link rabbitmq:rabbitmq -link mysql:mysql -link solr:solr \
  -entrypoint="/opt/" -t ruby_app \
  | perl -pe '/Tests failed inside docker./ && `echo -n "Tests failed" > docker-tests-failed`'

if [ ! -f docker-tests-failed ]; then
  echo -e "No docker-tests-failed file. Apparently tests passed."
else
  echo -e "docker-tests-failed file found, so build failed."
  rm docker-tests-failed
  exit 1
fi

docker kill mysql rabbitmq solr ruby_app
docker rm mysql rabbitmq solr ruby_app

We run the ruby_app container, linking it to MySQL, RabbitMQ and Solr, and overriding the default ENTRYPOINT with the CI script shown below. By default, ruby_app executes rake db:migrate, rake assets:precompile and rails s. For the CI run, however, we execute rake db:migrate and rspec.
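
Inside the container, each -link shows up as a set of environment variables following Docker's naming convention for links, which is how a linked application can discover the services without hard-coded hosts. For example, for the mysql link:

# Injected by -link mysql:mysql (the rabbitmq and solr links follow the same pattern):
echo $MYSQL_PORT_3306_TCP_ADDR   # IP address of the linked mysql container
echo $MYSQL_PORT_3306_TCP_PORT   # 3306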

For some reason Docker does not return proper exit codes when RSpec fails. Therefore I echo "Tests failed inside docker." from within the container, which the perl one-liner above then detects from Jenkins.


rake db:migrate
rspec spec
return_code=$?

if [[ $return_code != 0 ]] ; then
  echo -e "Tests failed inside docker."
  exit $return_code
fi
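
An alternative worth mentioning: docker wait blocks until a container exits and prints its exit code, so the status can be recovered without grepping the output. A sketch of that approach (not what our job uses):

# Run the tests detached, then recover the real exit code via docker wait.
CID=$(docker run -d -link mysql:mysql -link rabbitmq:rabbitmq -link solr:solr ruby_app)
rc=$(docker wait $CID)
if [[ $rc != 0 ]] ; then
  echo "Tests failed inside docker (exit code $rc)."
  exit $rc
fi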

The last step of the setup is to have the staging server detect when a new image is pushed to the local repository.

Staging server

For our purposes, I wrote a short bash script, which runs every 30 minutes via cron and checks whether a new image has been pushed to the private Docker repository:


# $REPO points at the private registry on the CI server (actual address
# elided in the original, e.g. <ci-server>:5000)
CURRENT_IMAGE_ID=`docker images | grep -w ruby_app | awk '{ print $3 }'`

docker pull $REPO/ruby_app

NEW_IMAGE_ID=`docker images | grep -w ruby_app | awk '{ print $3 }'`

if [ "$CURRENT_IMAGE_ID" == "$NEW_IMAGE_ID" ]; then
  echo -n "Image ids are equal. Therefore we have no new image."
else
  echo -n "Image ids are not equal. Therefore we should stop old image and start new one."

  docker kill ruby_app
  docker rm ruby_app

  docker run -privileged -p 80 -p 443 -name ruby_app -link rabbitmq:rabbitmq -link mysql:mysql -link solr:solr -volumes-from ruby_app_data -t ruby_app
fi

If a new image is detected, the old container is stopped and removed, and a new container is started from the fresh image. ruby_app_data is a data-only container, which lets us persist data when updating the ruby_app container.
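
For completeness, the ruby_app_data container only needs to be created once; a sketch, where the volume path is an assumption (use whatever path ruby_app persists to):

# One-time setup: a container that exists only to own the volume.
docker run -v /data -name ruby_app_data busybox true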

However, this results in a short downtime while the app server inside ruby_app starts up. Ideally we would have Hipache installed and running, and switch ruby_app containers as soon as the new one is ready to process requests. In principle this can be done via an application like Shipyard, but configuring an "Application" inside Shipyard (which uses Hipache in the background) resulted in errors for me:

(worker #96) staging: backend #0 reported an error ({"bytesParsed":0,"code":"HPE_INVALID_CONSTANT"}) while handling request for /

UPDATE: Because of this, I manually installed Hipache with a Redis server on the staging server. The following bash script is run periodically by cron and checks the health of the new container. As soon as the new container is up and responding to HTTP requests, it is loaded into Hipache:


# NEW_CONTAINER_GATEWAY and NEW_CONTAINER_PORT_443 are set elsewhere;
# how they are populated is not shown here.
if [ -f new_container ]; then
  echo -e "New application container has been started, but not loaded in Hipache yet. \n"

  RESPONSE_CODE=`wget --no-check-certificate -S "https://$NEW_CONTAINER_GATEWAY:$NEW_CONTAINER_PORT_443/" 2>&1 | grep "HTTP/" | awk '{print $2}'`

  if [ "$RESPONSE_CODE" == "200" ]; then
    # Swap the backend in Hipache's Redis list. The frontend key's domain
    # is elided in the original; Hipache uses keys of the form frontend:<domain>.
    redis-cli rpop frontend:<domain>
    redis-cli rpush frontend:<domain> https://$NEW_CONTAINER_GATEWAY:$NEW_CONTAINER_PORT_443

    rm new_container
    echo -e "Successfully pushed new container IP to Hipache's Redis \n"
  else
    echo -e "Response code is different to 200. \n"
  fi
else
  echo -e "No new container detected. Nothing to do. \n"
fi

Feel free to post any questions or improvements you might have. The source code shown above can be found at this repository on GitHub.


