Synopsis

Kubernetes has, as its base deployment construct, something known as a pod. A pod allows a set of containers to share the same network stack and volume mounts, yet still run each container’s processes in their own isolated space. You can create similar constructs using native Docker commands (docker run, docker-compose up, etc.), but you’ll miss out on one very fundamental feature: managing the lifecycle of the containers as a single unit. For certain workloads, the benefits may still outweigh this drawback. Read on to learn more.

What is a Container Anyway?

A container consists of a number of namespaces and cgroups which serve to isolate and constrain applications. Namespaces generally provide the isolation, while cgroups constrain the use of resources such as memory and CPU.

There are a number of different things that can be isolated for an application using namespaces:

  1. Processes: Application processes can be isolated such that they can’t be seen by the host server or by any other containers.
  2. Networks: Similarly, networks (NICs, IP addresses, listening TCP/UDP ports, etc.) can be isolated from the underlying host and other containers.
  3. Users: Users and groups can be created independently of the underlying host’s users and groups.
  4. IPC: Isolates InterProcess Communications.
  5. Mounts: Filesystem mounts can be isolated (and sometimes shared) with the underlying host.
  6. UTS: This isolates hostnames and domain names.
  7. Cgroup: Even the cgroups, which map processes to resource limits, can be isolated from the underlying host.
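On a Linux host you can see these namespace types directly: every process exposes a handle for each namespace it belongs to under /proc/&lt;pid&gt;/ns, and two processes are in the same namespace exactly when the corresponding links point at the same object. A quick sketch (some entries, such as time, only appear on newer kernels):

```shell
# List the namespace types the current shell belongs to
# (entries like cgroup, ipc, mnt, net, pid, user, uts).
ls /proc/self/ns

# Print this process's network namespace identity; comparing this
# value across two processes tells you whether they share a netns.
readlink /proc/self/ns/net
```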

Thus, a container is an application which runs in a set of such namespaces so that it can run in isolation from host processes and other container processes. The container generally has in its mount namespaces any dependent software needed so that it can function independently of other containers. In Docker, this software gets added to the container filesystems as part of the Docker “build” process.

How is a Pod Different?

Pods are a Kubernetes construct that aggregates one or more containers. These containers share the network, IPC, and UTS namespaces as well as mounts (volumes), but have isolated users and processes. The shared network allows the included containers to communicate with each other via localhost (over the loopback interface) or via IPC, which obviates the need for a service discovery mechanism for inter-container communication.

All containers in the pod are deployed on the same node and have the same lifetime (namely, the lifetime of the pod). The restart policy is likewise specified at the pod level and not per-container.
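To make the comparison concrete, here is a sketch of a Kubernetes pod manifest that co-locates the same two applications used in the examples below (the names, image tags, and the emptyDir volume are illustrative, and the database credentials shown later in the compose file would still be needed in practice):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-pod
spec:
  restartPolicy: Always          # set once, for the whole pod
  volumes:
  - name: wpvol
    emptyDir: {}                 # shared between the containers
  containers:
  - name: wordpress
    image: wordpress:latest
    env:
    - name: WORDPRESS_DB_HOST
      value: "127.0.0.1:3306"    # mysql is reachable over localhost
    volumeMounts:
    - name: wpvol
      mountPath: /var/www/html
  - name: mysql
    image: mysql:latest
```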

How Can I Make a Pod Using Docker Commands?

The way that best approximates deploying a pod is to create a docker-compose file and deploy it with the docker-compose command (the docker service and docker stack deploy commands won’t work because many of the required features aren’t available in Docker swarm mode). Below is an example docker-compose file which emulates a pod deployment:

version: "2"

services:
  pause:
    container_name: pause
    image: gcr.io/google_containers/pause:1.0
    networks:
      - pausenet
    volumes:
      - wpvol:/var/www/html
      - mysqldata:/var/lib/mysql
    ports:
      - "8080:80"
    restart: always
  wordpress:
    depends_on:
      - mysql
      - pause
    image: wordpress:latest
    environment:
      - WORDPRESS_DB_HOST=127.0.0.1:3306
      - WORDPRESS_DB_NAME=wpdb
      - WORDPRESS_DB_USER=wpuser
      - WORDPRESS_DB_PASSWORD=mywppasswd
      - affinity:container==pause
    ipc: container:pause
    network_mode: service:pause
    volumes_from:
      - pause
    restart: always
  mysql:
    depends_on:
      - pause
    image: mysql:latest
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_USER=wpuser
      - MYSQL_PASSWORD=mywppasswd
      - MYSQL_DATABASE=wpdb
      - affinity:container==pause
    ipc: container:pause
    network_mode: service:pause
    volumes_from:
      - pause
    restart: always

networks:
  pausenet:

volumes:
  wpvol:
  mysqldata:


The configuration items that enable the pod-like behavior are:

  1. A “base” container which holds the network and volume config. Here we use the Kubernetes “pause” container, which simply runs a small static binary that pauses and responds to interrupt signals.
  2. The application containers, which inherit the base container’s network, volumes, and IPC by using the following config values:
    1. network_mode: service:pause (tells the container to share the network stack of the pause service/container)
    2. volumes_from: pause (tells the container to mount the volumes of the pause service)
    3. ipc: container:pause (tells the container to share IPC with the pause container)
    4. affinity:container==pause in the environment section (tells the legacy swarm scheduler to place the app containers on the same node as the pause container)

You can also create a pod-like construct using Docker run commands. Below is an example shell script which does the same thing as the compose file above:

#!/bin/sh

docker network create pausenet
docker volume create wpvol
docker volume create mysqldata

docker run -d --name pause --network pausenet -v wpvol:/var/www/html \
    -v mysqldata:/var/lib/mysql -p "8080:80" --restart always \
    gcr.io/google_containers/pause:1.0

docker run -d --name mysql --volumes-from pause --network container:pause \
    -e "MYSQL_RANDOM_ROOT_PASSWORD=yes" -e "MYSQL_USER=wpuser" \
    -e "MYSQL_PASSWORD=mywppasswd" -e "MYSQL_DATABASE=wpdb" \
    --ipc "container:pause" --restart always mysql:latest

docker run -d --name wordpress --volumes-from pause --network container:pause \
    -e "WORDPRESS_DB_HOST=127.0.0.1:3306" -e "WORDPRESS_DB_NAME=wpdb" \
    -e "WORDPRESS_DB_USER=wpuser" -e "WORDPRESS_DB_PASSWORD=mywppasswd" \
    --ipc "container:pause" --restart always wordpress:latest

Note that you don’t need to add the container affinity in this case because the docker run command only works in the context of a single node.

To verify that the app containers are indeed sharing the same network and volumes, you can exec into them one at a time (docker exec -it <container name> bash) and run:

ps -ef (should show only the processes from the container you’re in)

ls /var/www/html (should show the same WordPress files in each container)

ls /var/lib/mysql (should show the same MySQL files in each container)

ss -tlnp (should show ports 3306 (mysql) and 80 (apache/wordpress) listening in each container)

ip -4 addr (should show the same network interfaces and IP addresses in each container)

So What Don’t I Get That Kubernetes Pods Have?

While you can deploy a “pod” of containers using compose, they aren’t managed as a single unit. If one of the containers fails, there’s no mechanism to automatically restart or redeploy the whole pod. Kubernetes, by contrast, monitors the pod, restarts it (all containers together), and redeploys it onto another node if necessary. Note that this requires a ReplicationController or ReplicaSet configured to monitor and manage the pod.

Also, Kubernetes allows setting resource constraints hierarchically: you can set CPU/memory limits at the pod level to constrain the resources the pod as a whole can use, and at the container level to limit what each individual container can use. The pod-level constraints ensure that the constituent containers, in combination, never consume more resources than the pod definition allows.
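In a pod spec, the per-container part of this looks like the fragment below (the values are illustrative); with plain Docker you can approximate only this per-container level, e.g. via the docker run --memory and --cpus flags:

```yaml
containers:
- name: mysql
  image: mysql:latest
  resources:
    requests:            # what the scheduler reserves for the container
      memory: "256Mi"
      cpu: "250m"
    limits:              # hard caps, enforced via cgroups
      memory: "512Mi"
      cpu: "500m"
```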

Finally, note that you can only create this pod-like construct on a single host or on a legacy swarm cluster; you can’t create such a setup in Docker’s newer swarm mode.


Interested in learning more about Yipee.io? Sign up for free to see how Yipee.io can help your team streamline their development process.