Provisioning Docker Containers

  • 10 min read
  • 06 May 2015


Docker containers are spreading fast. They're sneaking into our development environments and production servers, and the sheer number of projects mentioned throughout this post shows how hot the topic currently is.

Containers encapsulate applications into portable, self-contained units that you can easily build, control, and ship. Among other benefits, they put most modern services just one command away, keep development environments clean, and make production infrastructure more agile.

While getting started is insanely easy, real-life applications can get tricky once you push the boundaries a bit. In this post we're going to study how to provision Docker containers, prototyping our very own image builder along the way. Hopefully, by the end of this post you will have a good idea of the challenges and opportunities involved.

As of today, Docker Hub features 15,000 images, and the most popular one has been downloaded 3,588,280 times. We'd better be good at crafting them!

Configuration

First things first: we need a convenient way to describe how to build the application. This is exactly what files like travis.yml aim at, so they are a good place to start.

# The official base image to use
language: python
# Container build steps
install:
  # For the sake of genericity, we introduce support for templating
  {% for pkg in dependencies %}
  - pip install {{ pkg }}
  {% endfor %}

# Validating the build
script:
  - pylint {{ project }} tests/
  - nosetests --with-coverage --cover-package {{ project }}

YAML is also a decent format, easily processed by both humans and machines (something I think Ansible and Salt get right in configuration management).

I'm also biased toward Python for exploration, so here is the code to load this information into our program.

# run (sudo) pip install pyyaml==3.11 jinja2==2.7.3
import jinja2, yaml

def load_manifest(filepath, **properties):
	# Render the Jinja2 template, then parse the resulting YAML document
	tpl = jinja2.Template(open(filepath).read())
	return yaml.load(tpl.render(**properties))
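
For a quick sanity check, here is roughly what the loader yields for the manifest above (a sketch; the values simply follow from the rendered template).

# Hypothetical usage, rendering the manifest above
data = load_manifest('travis.yml', dependencies=['requests', 'ipython'], project='factory')
assert data['language'] == 'python'
assert data['install'] == ['pip install requests', 'pip install ipython']
assert data['script'][0] == 'pylint factory tests/'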

This setup gives us the simplest configuration interface there is (plain files), version control for our builds, a centralized view of container definitions, trivial management, and easy integration with future tools, say, a container provisioner.

You can already enjoy those benefits with projects built by HashiCorp or with the App Container specification. While I plan to borrow a lot of concepts from the latter, we don't need that level of precision, nor do we want to constrain our code to its layout conventions. As for tools like Packer, they are oversized for our purposes, although we already took one cue from them: configuration as template files.

Model

So far so good. We have a nice dictionary describing a simple application. However, I propose transcribing this structure into a directed graph. It will bring hierarchical order to the steps, and whenever we want to parallelize some of them, like independent tasks or tests, we will simply branch out.

class Node(object):

	def __init__(self, tag, **properties):
		# Nodes will be processed later; the tag provided here indicates how to handle this one
		self.tag = tag
		self.props = properties
		# Children nodes connected to this one
		self.outgoings = []


class Graph(object):

	def __init__(self, startnode):
		self.nodes = [startnode]

	def connect(self, node, *child_nodes):
		for child in child_nodes:
			node.outgoings.append(child)
			self.nodes.append(child)

	def walk(self, node_ptr, callback):
		callback(node_ptr)
		for node in node_ptr.outgoings:
			# Recursively follow nodes
			self.walk(node, callback)

Starting from the data we previously loaded, we finally model our application into a suitable structure.

def build_graph(data, artifact):
	# Initialization
	node_ptr = Node("start", image=data["language"])
	graph = Graph(node_ptr)

	# Provision
	for task in data["install"]:
	    task_node = Node("task", command=task)
	    graph.connect(node_ptr, task_node)
	    node_ptr = task_node

	# Validation: every test branches off the last provisioning step
	for test in data["script"]:
	    graph.connect(node_ptr, Node("test", command=test))

	# Finalization
	graph.connect(node_ptr, Node("commit", repo=artifact))
	return graph

Build Flow

While our implementation is really naive, we now have a convenient structure to work with. Keeping up with our fictional model, the following diagram represents the build workflow as a simple finite state machine.

[Figure: the container build workflow as a finite state machine]

Some remarks:

* travis.yml steps, i.e. graph nodes, became events.

* We handle caching like docker build does. A new container is only started when a new task is received.

Pieces begin to fall into place. The Graph's walk() method is a perfect fit for emitting events, and the state machine is a robust way to safely manage a container life-cycle with a conditional start. As a bonus, it decouples the data model from the build process (loosely coupled components are cool).
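
To make the idea concrete, here is a minimal sketch of such a state machine; the states and transitions are my reading of the diagram above, and the Factory in the next section deliberately takes a simpler route.

# A hypothetical transition table for the build FSM: states describe the
# container life-cycle, events are the tags carried by the graph nodes.
TRANSITIONS = {
	('idle', 'start'): 'ready',         # remember the base image, boot lazily
	('ready', 'task'): 'running',       # the first real task starts the container
	('running', 'task'): 'running',     # further provisioning reuses it
	('running', 'test'): 'running',     # validation runs in the same container
	('running', 'commit'): 'committed'  # snapshot the result and tear down
}

def next_state(state, event):
	""" Return the next build state, or fail loudly on an illegal event. """
	try:
		return TRANSITIONS[(state, event)]
	except KeyError:
		raise ValueError('illegal event {} in state {}'.format(event, state))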

Execution

In order to focus on provisioning issues rather than on a full-blown state machine implementation, we're going to settle for the _good enough_ Factory below.

# pip install docker-py==1.1.0
import os

import docker

class Factory(object):
	""" Manage the build workflow. """

	def __init__(self, endpoint=None):
		endpoint = endpoint or os.environ.get('DOCKER_HOST', 'unix://var/run/docker.sock')
		self.conn = docker.Client(endpoint)
		self.container = None

	def start(self, image):
		self.container = self.conn.create_container(image=image, command='sleep 360')
		self.conn.start(self.container['Id'])

	def provision(self, command):
		self.conn.execute(self.container['Id'], command)

	def teardown(self, artifact):
		self.conn.commit(self.container['Id'], repository=artifact)
		self.conn.stop(self.container['Id'])
		self.conn.remove_container(self.container['Id'])

	def callback(self, node):
		#print("[factory] new step {}: {}".format(node.tag, node.props))
		if node.tag == "start":
			self.start(node.props["image"])
		elif node.tag == "task":
			self.provision(node.props["command"])
		elif node.tag == "commit":
			self.teardown(node.props["repo"])

We leverage the docker exec feature to run commands inside the container. This approach gives us an important asset: zero requirements on the target for it to work with our project. We're compatible with every container, and we have nothing to pre-install, i.e. no overhead and no extra bytes in our final image.
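
The Factory above fires commands and forgets about them. If you also need each step's output and exit status, newer docker-py releases deprecate execute() in favor of an exec_create()/exec_start() pair; a rough sketch, assuming docker-py 1.2 or later:

def provision_verbose(conn, container_id, command):
	# Create the exec instance inside the running container, run it,
	# then collect its output and exit code
	exec_ctx = conn.exec_create(container_id, cmd=command)
	output = conn.exec_start(exec_ctx['Id'])
	exit_code = conn.exec_inspect(exec_ctx['Id'])['ExitCode']
	if exit_code != 0:
		raise RuntimeError('step failed ({}): {}'.format(exit_code, output))
	return output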

At this point, you should be able to synthesize a cute, completely useless little Python container.

data = load_manifest('travis.yml', project='factory', dependencies=['requests', 'ipython'])
graph = build_graph(data, "test/factory")
graph.walk(graph.nodes[0], Factory().callback)

Getting smarter

As mentioned, the docker CLI optimizes subsequent builds by skipping previously successful steps, which speeds up the development workflow. But it also has its flaws. What if we could run commands with strong security guarantees, pinned to the exact same versions across different runs?

Basically, we want reliable, reproducible builds, and tools like Snappy and Nix come in handy for the task. Both solutions ensure the security and stability of what we're provisioning, avoiding side effects on, or from, unrelated OS components.
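
As an illustration, and assuming the Nix package manager is installed inside the base image (it is not by default), each task could be wrapped in an ephemeral, pinned nix-shell before being handed to the Factory; the package names below are placeholders.

def nix_wrap(command, packages=('python', 'libxml2')):
	# Run the task inside a throw-away nix-shell so its toolchain comes only
	# from the declared packages, isolated from the rest of the OS
	return 'nix-shell -p {} --run "{}"'.format(' '.join(packages), command)

# e.g. factory.provision(nix_wrap('pip install requests'))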

Going further

Our modest tool is taking shape, but we're still lacking an important feature: getting files from the host into the container, both code and configuration files.

The former is straightforward, as docker supports mapping volumes. The latter can be handled by what I think is an elegant solution, powered by consul-template and explained below.
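
For the former, a small tweak to the Factory's start() method is enough; a sketch, using the same /root/app destination as the template configuration further down.

	def start(self, image, code_dir=None):
		# Optionally map a host directory holding the code into the container
		volumes = ['/root/app'] if code_dir else None
		binds = {code_dir: {'bind': '/root/app', 'ro': False}} if code_dir else None
		self.container = self.conn.create_container(
			image=image, command='sleep 360', volumes=volumes)
		self.conn.start(self.container['Id'], binds=binds)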

* First, we build a container full of useful binaries that our future containers may need (at least consul-template).

FROM       scratch
MAINTAINER Xavier Bruhiere <xavier.bruhiere@gmail.com>

# This directory contains the programs
ADD ./tools /tools
# And we expose it to the world
VOLUME /tools

WORKDIR /tools
ENTRYPOINT ["/bin/sh"]

docker build -t factory/toolbox .
# It just needs to exist to be available, not even run
docker create --name toolbox  factory/toolbox

* We make those tools available by mapping the toolbox into the target container. This is in fact a common practice, known as data containers.

self.conn.start(self.container['Id'], volumes_from='toolbox')

* Files, which can optionally be Go templates, are grouped inside a directory on the host, along with a configuration specifying where to move them. The project's readme explains it all.

* Finally, we insert the following task ahead of the others to perform the copy, rendering templates along the way with values from Consul's key/value store.

cmd = '/tools/consul-template -config /root/app/templates/template.hcl -consul 192.168.0.17:8500 -once'
task_node = Node("task", command=cmd)
graph.connect(node_ptr, task_node)

We now know how to provide useful binaries, as well as any parameterized file, inside the build.

Base image

Keeping our tools outside the container lets us factor out common utilities and avoid fat images. But we can go further and take a closer look at the base image we're using.

Small images improve download and build speed, and are therefore much easier to deal with, both in development and in production. Projects like docker-alpine try to define a minimal common ground for applications, while unikernels compile and link only the necessary OS components along with the app to produce an ultra-specialized artifact (and we can go even further and strip down the final image).

Those philosophies also limit maintenance overhead (fewer moving parts mean fewer side effects and unexpected behaviors) and reduce the attack surface, and they are especially effective when keeping a single responsibility per container (not necessarily a single process, though).

Having a common base image is also a good opportunity to fix, once and for all, some issues with docker defaults, as Phusion suggests.

On the other hand, using a common layer for all future builds prevents us from exploiting community creations. Official language images allow one to quickly containerize an application on top of solid ground. As always, it really depends on the use case.

Brainstorm of improvements

Here is a totally non-exhaustive list of ideas to push our investigation further:

  • Container engine agnosticism: who knows who will be the big player tomorrow? Instead of a docker client, we could implement drivers for rkt or lxd. We could also split the Factory into an engine component and a provisioner component.
  • Since we fully control the build flow, we could change the graph walker callback into an interactive prompt to manually build, debug and inspect the container.
  • Given multiple apps and remote docker endpoints, builds could be parallel and distributed.
  • We could modify our load_manifest function to recursively load other required manifests.
  • With reusable modules, we could share the best ones (much like Ansible Galaxy).
  • Built-in integration tests could be added with the help of docker-compose and third-party containers.
  • Currently, the container is launched with a sleep command. We could instead place terminus in our toolbox and use it at runtime to gather host information and reuse it in our templates (again, very similar to Salt pillars).

Wrapping up

We have merely scratched the surface of container provisioning, and yet there are plenty of exciting opportunities to support developers' efficiency.

While the rapid progress in container technologies might seem overwhelming, I hope the directions provided here gave you a modest overview of what is happening. There are a lot of open questions and interesting experiments, so I encourage you to be part of it!

About the Author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.