A question I’ve been asked is how to deploy containers across swarm clusters using docker-compose. In particular, since compose itself has no declarative syntax for scaling services (scaling is imperative only, via the command line), what does, or could, Yipee.io bring to the table to assist with scaling such services on a swarm cluster?

As background, the Docker tooling for scheduling containers across nodes in a swarm cluster really depends on which version of Docker engine you’re using, how the swarm cluster was set up, and even the OS on which you’re running your local Docker engine.

Docker 1.11 and below includes (actually, requires you to install) the standalone version of Docker Swarm, with which you set up your swarm cluster. Once set up, you can docker-compose up your file and the containers will be scheduled across the nodes in your swarm cluster. Scaling a service in your compose file will likewise schedule that service’s containers across the various hosts in the cluster (subject to any constraints you impose). It’s important to note that scaling in this older version of Swarm is a “fire and forget” model: nothing monitors the running containers or reschedules them onto other nodes if a node fails.
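With the standalone Swarm, the workflow might look roughly like this (a sketch, assuming a compose project with a service named `web` and a swarm manager listening on the usual standalone port; the host name is a placeholder):

```shell
# Point the local client at the standalone swarm manager
export DOCKER_HOST=tcp://<swarm-manager>:3376

# Bring up the stack; the swarm scheduler spreads containers across nodes
docker-compose up -d

# Imperatively scale the "web" service to 3 containers
# (fire and forget: failed containers are not rescheduled)
docker-compose scale web=3
```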

Docker 1.12.1 introduces a new “Swarm Mode” that is built into the engine. It is similar to the previous standalone Swarm product, but adds the ability not only to scale your services but also to ensure that the number of service containers you want running is maintained, even if a container or node fails. The downside is that docker-compose itself doesn’t spread containers across a 1.12-based Swarm Mode cluster (it will just run them all on the swarm manager node). Sure, you could set up a cluster using the older standalone Swarm, but then you’re back to not having the monitoring/rescheduling capabilities of the new Swarm Mode. To spread containers across nodes in Swarm Mode, you first need to use “docker-compose bundle” to create a Distributed Application Bundle (DAB) file and then deploy it.

Here’s where the local Docker engine platform comes in. Normally you would then use the command “docker deploy <stack name>” or “docker stack deploy <stack name>”. This will deploy your containers across the Swarm Mode cluster as before. To scale a service in your stack up or down, you would use “docker service scale <service>=<replicas>” or “docker service update --replicas <replicas> <service>”.
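Putting those steps together, the bundle-and-deploy sequence might look like this (a sketch, assuming a project directory named `myapp` with a `web` service; Swarm Mode prefixes service names with the stack name):

```shell
# Generate myapp.dab from the project's docker-compose.yml
docker-compose bundle

# Deploy the bundle as a stack on the Swarm Mode cluster
# (looks for myapp.dab in the current directory)
docker deploy myapp

# Scale the web service to 3 replicas -- either form works
docker service scale myapp_web=3
docker service update --replicas 3 myapp_web
```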

Unfortunately, in the Linux version of the Docker 1.12.1 release, the “deploy” subcommand (and “stack” subcommand) aren’t available, as those features are still experimental. To enable them, you need to download the source and compile them in yourself (there are some directions here). If you’re running Docker for Mac or Docker for Windows, those subcommands are enabled (at least in the beta version – I haven’t checked the stable version).

An alternative is to skip DAB files entirely and create your services manually with the “docker service create” command. The downside is that you then don’t have a single application blueprint that you can distribute and deploy.
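Creating services by hand might look like the following (a sketch; the network and service names are illustrative, and an overlay network is needed for multi-node communication):

```shell
# Create an overlay network so services can reach each other across nodes
docker network create -d overlay appnet

# Create each service individually instead of deploying a DAB
docker service create --name db --network appnet mongo:3.2
docker service create --name rest_api --network appnet --replicas 2 \
  my-org/rest-api:latest
```

This gives you full control over networks, volumes, and ports, but the topology now lives in a series of commands rather than a single file.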

But there’s a caveat with DAB files: they don’t support global volumes or global networks, nor do they support links. Here’s the output of running the bundle subcommand against a compose file that contains networks, volumes, and “depends_on” configuration:

WARNING: Unsupported top level key 'networks' - ignoring
WARNING: Unsupported top level key 'volumes' - ignoring
WARNING: Unsupported key 'depends_on' in services.ui - ignoring
WARNING: Unsupported key 'depends_on' in services.rest_api - ignoring
WARNING: Unsupported key 'volumes' in services.db - ignoring

In the case of networks, you CAN specify a network in your service config, and that will result in the network being created, but there’s no facility for using other network drivers in this case. Docker volumes, unfortunately, can only be enabled when manually creating a service with the “docker service create --mount=…” command. The rationale for this, apparently, is that DABs are for creating portable stacks, and at this point there isn’t a really good way for the services in a stack to be portable when one of them contains a host volume mount.
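For the volume case, the manual workaround might look like this (a sketch; the `dbdata` volume name is illustrative):

```shell
# Attach a named volume to the mongo data directory at service creation;
# DAB files have no equivalent of this --mount flag
docker service create --name db \
  --mount type=volume,source=dbdata,target=/data/db \
  mongo:3.2
```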

There’s another caveat with DAB files: they DON’T have a provision for mapping a host port to your container port. Here’s a mongo db service in my generated DAB file:

"db": {
    "Image": "mongo@sha256:e599c71179c2bbe0eab56a7809d4a8d42ddcb625b32a7a665dc35bf5d3b0f7c4",
    "Networks": [
    "Ports": [
            "Port": 27017,
            "Protocol": "tcp"

When you deploy the DAB, it will map some arbitrary host port to each of your container ports (which you can find via “docker service inspect <service>”). If you want the service exposed at a specific port, you need to UPDATE the service as follows:

docker service update --publish-add '<host port>:<container port>/<tcp|udp>' <service>

Of course, you could add a front-end proxy such as nginx or HAProxy, but you’d still need to map its ports if you want to reach it at a consistent port number.

I imagine (hope?) things will change in the next release or two to rectify these issues.

As far as Yipee goes, we do have options in our configuration for setting scale, but those only work out of the box for Kubernetes. The reason is that compose/DAB/swarm doesn’t provide a declarative way to set scale; it can only be done after the fact, once you’ve deployed your app. In Kubernetes, the number of “replicas” is itself part of the configuration (and Kubernetes is built around scheduling containers across clusters).
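For contrast, here’s what that declarative scale looks like in Kubernetes (a sketch of a Deployment from that era; the names and image are illustrative):

```yaml
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.3/1.4
kind: Deployment
metadata:
  name: rest-api
spec:
  replicas: 3        # the desired scale is part of the config itself
  template:
    metadata:
      labels:
        app: rest-api
    spec:
      containers:
      - name: rest-api
        image: my-org/rest-api:latest
```

Because the replica count lives in the spec, the scheduler continuously reconciles the running state back to it, which is exactly what compose/DAB lacks.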

We’ve got some ideas about how to solve this issue for compose deployments, but we haven’t yet had a chance to flesh them out into a preferred implementation. Options we’ve been considering:

  • Print out instructions to deploy and scale your services
  • Provide a simple script to deploy and then scale your services for you
  • Provide an additional container which uses the Docker APIs to scale your service(s) according to your configured scaling options

We’d be interested in hearing your take on how you are planning on managing your swarm deployments in a post-1.11 environment.

Interested in learning more about Yipee.io? Sign up for free to see how Yipee.io can help your team streamline their development process.