
Working with docker-compose? As soon as it involves more than one repository or more than one “service”, things get messy. Cage brings some order to that chaos and is a tremendous help in such setups.

To understand Cage, you should understand the workflow it is made for, the one the faraday.io people believe is the right workflow to have:

  • One repository for the infrastructure of a multi-service stack: for instance, ten docker-compose.yml’s for services that are deployed together.
  • You should then be able to download all ten CI-server-built docker images from the image repository to your local computer with one command, cage pull.
  • You should then be able to check out the source code for a single one of the ten services, modify it, rebuild just that one, test both the individual service and its integration with the others, and commit your changes to run through the CI server again.

And that’s it. In addition, Cage features common files for environment variables and overrides for different deployment environments.


Now, enough of the praise. In summary: if you have more than one service with docker-compose, Cage will take care of you. I will now walk you through two fully functioning examples. You can code along if you clone this repository, but it’s not mandatory.

Minimal Example Setup

Let’s envision a simple setup: one service in one container, plus a Postgres database. We define everything in one repository, which includes:

  • the source code for our one service and
  • the infrastructure code, that is the docker-compose.yml.

This is the code for the minimal example. Cage is mostly a way of structuring your code. Here is the simplest possible structure:

.
├── Makefile              # with "make dep" to download cage
├── README.md
├── cage                  # in .gitignore
├── pods
│   ├── common.env        # empty file
│   ├── db.yml            # docker-compose.yml for a Postgres
│   ├── service.yml       # docker-compose.yml for our service
│   └── targets
│       └── development   # one deployment target for cage
└── src
    └── Dockerfile        # source code: a plain httpd-alpine image

For the second example, we will add more. For now, we are satisfied with:

  • The tool cage, which you get by modifying the Makefile to match your operating system (the default is set to mine: OSX) and running make dep.
  • A folder called “pods” containing the individually deployed services.
  • pods/db.yml, the docker-compose.yml for the Postgres (sketched below).
  • pods/service.yml, the docker-compose.yml for our dummy service (also sketched below).
  • src/Dockerfile, the source file for our dummy service.
  • targets/development/common.env and pods/common.env, empty files that create at least one target environment.
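
To make this concrete, here is a plausible sketch of the two pod files. Treat it as an illustration, not the repository’s exact contents: the Postgres tag and the compose keys are my assumptions, while the image name test/image:latest is taken from the build output below.

# pods/db.yml: a plausible sketch (the Postgres tag is an assumption)
version: "2"
services:
  db:
    image: "postgres:9.6"

# pods/service.yml: a plausible sketch; the build context points at src/
# as in the tree above, and the image name matches the build output below
version: "2"
services:
  dummy_service:
    image: "test/image:latest"
    build: "../src"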

Let’s Have Some Fun

You can read through this first, then do the exercises in the minimal example README if you fancy. Because cage is made for central image repositories, and we don’t have one available for our dummy image, we have to build the image first.

$ ./cage build 
...
db uses an image, skipping
Building dummy_service
Step 1/1 ...
...
Successfully tagged test/image:latest

We just built all the images. Usually, we just want to build one of them, but now that we have them, we can fire up the whole stack.

$ ./cage up
Starting exampleminimal_db_1 .... done
...
Starting exampleminimal_dummy_service_1 ... done

We could’ve also upped just one service with $ ./cage up service_1, in case we modified its code and wanted to check that this one works.

If you’re wondering what a pod, a service, and a container are, cage provides a good description in its basics documentation under “Key Terms”.

Now let’s try to do actual work.


We will change one of the services. For that, we need to mount its source. Remember, the usual setup would be 20 micro-services running together, each of them nicely decoupled in its own repository. We can check which source trees we can mount with

$ ./cage source ls 
src        .../src
$ ./cage source mount src
Now run cage up for these changes to take effect.

and mount one with the second command. This is the source for service_1.

Now go to the Dockerfile and change the version.

# change the base image, e.g. to httpd:2
FROM httpd:2-alpine

Great, so let’s see the changes take effect:

$ ./cage build
...
Building dummy_service
Step 1/1: FROM httpd:2
...
$ ./cage up
exampleminimal_db_1 is up-to-date
...
Recreating exampleminimal_dummy_service_1 ... done

So as you can see, the source code change triggered a new build, and the database stayed in place. You can stop everything with $ ./cage stop and proceed to the second example.

Let’s Have Some More Fun

It’d be way more fun to understand a little bit about targets, as well as about external repositories. After all, the idea is that those repositories are hosted in version control, not on your local machine.

That’s what I included, in a light version, in the second example. The file tree for the second example looks like this:

.
├── Makefile              # run "make dep" to get cage into this folder
├── cage                  # in .gitignore
├── pods                  # now contains two services
│   ├── common.env        # empty
│   ├── service_1.yml     # two containers in one "service": a db and another container
│   ├── service_2.yml     # an external container, Gruntwork's shellcheck
│   └── targets           # two environments to play around with
│       ├── development
│       └── production
├── src                   # not there in your checkout, as this is the mounted source code
│   └── bash-commons
│       ├── CODEOWNERS
│       ├── Dockerfile.bats
│       ├── Dockerfile.shellcheck
│       ├── LICENSE
│       ├── NOTICE
│       ├── README.md
│       ├── docker-compose.yml
│       ├── modules
│       └── test
└── src_service1          # again an httpd:2-alpine image
    └── Dockerfile
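
Again, treat the following as a rough sketch of what the two pod files could contain, not their exact contents. The compose keys are my assumptions; in particular, using the repository URL as the build context is one plausible way to reference an externally hosted source that cage can later check out into src/.

# pods/service_1.yml: a plausible sketch with two containers in one pod
version: "2"
services:
  db:
    image: "postgres:9.6"
  dummy_service:
    image: "test/image:latest"
    build: "../src_service1"

# pods/service_2.yml: a plausible sketch; the build context is the
# externally hosted bash-commons repository from the tree above
version: "2"
services:
  shellcheck:
    build:
      context: "https://github.com/gruntwork-io/bash-commons.git"
      dockerfile: "Dockerfile.shellcheck"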

Again, we will start by building the images. Except that this time, we added a little complexity: service_2 is built from an externally hosted repository. I chose the shellcheck image from Gruntwork. Let’s hit the build button.

$ ./cage build
...
Building dummy_service
Step 1/1: FROM httpd:2
...
Building shellcheck
Step 1/8: ...
...

After building, we can again up the whole stack, and this time we get three containers, defined in two docker-compose.ymls.

$ ./cage up
Recreating example2_db_1 ... done
Recreating example2_dummy_service_1 ... done
Recreating example2_shellcheck_1 ... done

Again, we will change something in the source. This time, the source code is hosted externally. But that doesn’t change the workflow: we do just the same steps and end up with a git-in-git situation:

$ ./cage source ls 
bash-commons ...
src_service1 ...
$ ./cage source mount bash-commons
$ ls src
bash-commons
...# change the image version to something else...
$ ./cage build
...

So external repositories behave just the same way local code does. You can mix and match to fit your situation.

What about the targets? I’ve included a simple example we can look at. By default, our stack was upped in the “development” environment. If we up it in the “production” environment, everything we put into that folder overrides parts of the base files, whether environment variables or docker-compose configuration. I chose to override the Postgres version in targets/production/service_1.yml like so:

version: "2"
services:
  db:
    image: "postgres:9.4"
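
Conceptually, the target file is merged over the base pod file, much like docker-compose’s multiple-file overrides: keys you specify win, everything else stays as defined in pods/service_1.yml. Assuming service_1.yml looks like the sketch above, the effective production configuration would be roughly:

# hypothetical merged result for the "production" target
version: "2"
services:
  db:
    image: "postgres:9.4"         # overridden by the target file
  dummy_service:
    image: "test/image:latest"    # unchanged, from pods/service_1.yml
    build: "../src_service1"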

You can check it out in action by deploying to production.

$ ./cage --target production up
Pulling db (postgres:9.4)
...

...
$ ./cage stop

And that’s it! Seems to me like a huge improvement in the workflow. If you do decide to dive into cage, take a look at two additional features:

  • metadata
  • one-time tasks, which can be run like tests in a “test” environment.

Resources

All the official resources are important, as the documentation is somewhat scattered across different places.

Enjoy!
