How to Run Multi-Container Applications with Docker Compose


Docker is not new in the world of software development.
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.
Containers let a developer package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one unit. This makes development and deployment less stressful, saves deployment time, and avoids environment-related problems between team members.
It has helped eliminate the old “but, it works on my machine” problem. Unlike a VM, which virtualizes an entire OS, Docker allows applications to share the same Linux kernel as the host machine. This makes Docker much more lightweight and better-performing than VMs. Docker’s vision is a Dockerized future, where nearly all software is developed and deployed in containers.

In our previous Docker article, we introduced some basic Docker knowledge and commands to see how Docker really works. In this article, we’re going to focus exclusively on Docker Compose, which is a combination of a command line tool and a configuration file.
As you learn each component, you may think: “Wow, bind mounts and volumes are really cool. That’s going to save me some time. I can imagine how I’m going to use that,” and so on. But Docker Compose is even better than that. So, what is it? And why do we care?
Few software services are truly standalone programs. We hope that after reading this article, you will have a better idea of what Docker Compose really is.


When you think about containers, they’re a single-process solution. We’re rarely going to use just a single container to solve a problem or provide a service to our customers. Our containers will often require other containers, such as a SQL database or a key-value store, and we’ll need to run other applications in containers too, like proxies, web frontends, backend workers, and so on.
But what if we had a way to connect all those pieces of our solution together? Better yet, what if we didn’t need to remember all of our docker run options, had the ability to place our containers into discrete virtual networks with relationships between them, exposed only the public ports, and could spin them all up and tear them down with one command? We can do it all with Docker Compose!
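For instance, assuming a docker-compose.yml file sits in the current directory, the whole stack comes up and goes away with one command each (a sketch; exact flags can vary by Compose version):

```shell
# Start every service defined in docker-compose.yml, in the background
docker-compose up -d

# Stop and remove the containers and networks Compose created
docker-compose down
```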

Before we dive in, we need to distinguish the two parts of Docker Compose.

  • The first part is the YAML file, and it’s fairly simple to understand.
  • If you’ve never dealt with YAML before, don’t worry: it’s a very easy configuration language.
  • It’s almost as easy as an INI file would be. We actually think it’s easier, because it shows hierarchy.

Let’s dive in. That file is where you would specify all the containers you need to run, the networks you need, any volumes you might need, environment variables, images, and all sorts of other configuration options.

  • The second part of Docker Compose is a CLI tool, the docker-compose command, which we normally use just for local development and testing. It uses the YAML file we created to simplify our Docker commands.

A Docker Compose file follows this template:

version: '3.1'  # if no version is specified, v1 is assumed
services:       # containers, same as docker run
  servicename:  # a friendly name; this is also the DNS name inside the network
    image:        # optional if you use build:
    command:      # optional, replaces the default CMD specified by the image
    environment:  # optional, same as -e in docker run
    volumes:      # optional, same as -v in docker run
volumes:   # optional, same as docker volume create
networks:  # optional, same as docker network create

The version value comes first.

We always make it at least version 2. If you don’t add that line, version 1 is assumed, which we don’t recommend because you run the risk of losing a lot of the main features.
The other three main sections are services, volumes, and networks. The file is hierarchical: under services, indentation can be two or four spaces, but it does need to be consistent. For more specifics and guides on how to format a YAML file, we recommend this resource.
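As a quick illustration of that hierarchy, indentation alone tells YAML which keys belong to which section (web and nginx here are placeholder names, not part of the examples that follow):

```yaml
services:          # top-level section
  web:             # a service, nested under services
    image: nginx   # a key belonging to the web service
volumes: {}        # back at the top level again
networks: {}
```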
Next, on to services.
At a minimum, you have to specify your services, or in this case, containers.
The reason they’re actually called services is that each container you define here could run as multiple identical containers for redundancy, so a different word was needed. Each container, or set of identical containers, is usually providing a service to something. Underneath comes the service name, and again, you can call it whatever you want. It doesn’t have to be the name of the image or related to it at all. It’s similar to the name you give a container in a docker run command.
Basically, all the things we do with the docker run command, we can save here in order to save time. Because that’s what this is really all about: taking all of the work out of remembering the different options to docker run for those things that are constantly running.
If you have your own development environment, or your own tools, you probably don’t want to type those commands over and over again, and shell scripts can only do so much. This file can replace a shell script that automates your docker run commands, and it’s a much better way to do it.
If you have multiple services, you just need to make sure they have unique names. The other parts that can be specified are volumes and networks. And again, these are optional as well. Basically, the same rules apply as when you’re running from the command line.
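To make that concrete, here is a hypothetical docker run invocation of the kind a Compose file captures once and for all; every flag maps to a key in a service definition (the values are illustrative):

```shell
# -p maps to ports:, -e to environment:, -v to volumes: in the Compose file
docker run -d \
  -p 8080:80 \
  -e WORDPRESS_DB_USER=ducfilan \
  -v ./wordpress-data:/var/www/html \
  wordpress
```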

  • Volume and network are both optional based on your situation.

Anywhere you would otherwise need the docker volume create or docker network create commands, you can declare the equivalents here as well.
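As a sketch, declaring a named volume and a custom network at the top level of the file replaces those two CLI commands (my-data and my-net are placeholder names):

```yaml
services:
  db:
    image: mariadb
    volumes:
      - my-data:/var/lib/mysql   # mounts the named volume declared below
    networks:
      - my-net

volumes:
  my-data:   # same effect as: docker volume create my-data

networks:
  my-net:    # same effect as: docker network create my-net
```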
Now, let’s look at a real-world example:

version: '2'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: ducfilan
    volumes:
      - ./wordpress-data:/var/www/html
  mysql:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: r00tPa$w0rd
      MYSQL_DATABASE: wordpress
      MYSQL_USER: ducfilan
      MYSQL_PASSWORD: Pa$w0rd
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:

This is actually a sample WordPress setup, where the database service down at the bottom runs a database container and the WordPress web server container is up top.
You’ll notice one thing that’s different about this one: the environment variables, which would be the same as -e at the command line. But instead of a list format where each entry starts with a -, there is a key and a value statement. In this format, we don’t use the -; we just list the key and the value. If we needed to add another one, we would do that here. All of these are passed into the container when it runs, using the Compose command line.
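Side by side, the two equivalent syntaxes look like this (values taken from the example above; they are alternatives, not meant to be combined):

```yaml
# Map syntax: key and value, no leading dash
environment:
  WORDPRESS_DB_NAME: wordpress
  WORDPRESS_DB_USER: ducfilan

# List syntax, the alternative form:
# environment:
#   - WORDPRESS_DB_NAME=wordpress
#   - WORDPRESS_DB_USER=ducfilan
```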
Shall we try a more complicated example?

version: '3'
services:
  ghost:
    image: ghost
    ports:
      - "80:2368"
    environment:
      - URL=http://localhost
      - NODE_ENV=production
      - MYSQL_HOST=mysql-primary
      - MYSQL_PASSWORD=Pa$w0rd
      - MYSQL_DATABASE=ghost
    volumes:
      - ./config.js:/var/lib/ghost/config.js
    depends_on:
      - mysql-primary
      - mysql-secondary
  proxysql:
    image: percona/proxysql
    environment:
      - CLUSTER_NAME=mycluster
      - CLUSTER_JOIN=mysql-primary,mysql-secondary
      - MYSQL_PROXY_USER=proxyuser
  mysql-primary:
    image: percona/percona-xtradb-cluster:5.7
    environment:
      - CLUSTER_NAME=mycluster
      - MYSQL_DATABASE=ghost
      - MYSQL_PROXY_USER=proxyuser
  mysql-secondary:
    image: percona/percona-xtradb-cluster:5.7
    environment:
      - CLUSTER_NAME=mycluster
      - CLUSTER_JOIN=mysql-primary
      - MYSQL_PROXY_USER=proxyuser
    depends_on:
      - mysql-primary

This is used to set up a three-database-server cluster behind a Ghost web server. Ghost is a blog system similar to WordPress. At the top, we have a Ghost container. Next, we have a SQL proxy, which sits in front of the MySQL servers and acts as a load balancer and failover solution. Then we have each of our two MySQL servers. The file defines multiple environment variables for each of these containers. You’ll also notice another option down here, depends_on, which is a pretty common term in this type of program.
It basically helps Compose understand the relationship between these services: it knows that if we need to start our Ghost service, then we also need to start the others as well. You can see up top that our Ghost service actually depends on both MySQL services.
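In practice, that means asking Compose for just the web service pulls in its dependencies too (a sketch, assuming the Ghost service is named ghost in the file):

```shell
# Starting only the ghost service also starts everything it depends_on
docker-compose up -d ghost
```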
This can be a lot to digest at first, but trust that Docker has some really great documentation readily available to help you gain a better understanding of the benefits.
On the Docker website, you’ll find documentation about every single key and value option out there. You’ll see the depends_on option we just covered, as well as environment, where it shows the different ways to pass in environment variables. What’s great about this documentation is that it gives you plenty of examples to work with.
Now, we hope that you have a general idea of what the Docker Compose file format looks like. In the next article, we’ll head to the command line and learn about the Docker Compose CLI tool. See you there!

Already know this stuff and ready to join our exclusive network of top developers? Great. Let’s talk.