Why Containers Work

Consistency. And separation of concerns between configuration-time and run-time.

Listening to the Docker community explain containers is like listening to functional programmers explain monads, or LISPers explain macros. Few can convey their magic. Once a person has grokked the magic, somehow they simultaneously lose the ability to illuminate a path to understanding for anyone who hasn't already grokked the magic.

# Consistency

*nix processes depend on an ecosystem: the kernel mediates access to RAM, CPU, devices, the file system, and sockets. The challenge of systems administration is nurturing and grooming the ecosystems that support our processes. Processes that find themselves in inhospitable environments die.

In traditional systems administration, every intervention we make leaves footprints and debris in the ecosystem. After enough time, the processes and their ecosystems diverge far enough that the processes become unreliable or just plain broken.

Docker images provide a freeze-dried ecosystem custom-tailored to the processes they will host. When a container launches from an image, the freeze-dried ecosystem is thawed (sorta: let me stay with the metaphor), and the process takes up residence for as long as necessary to perform its tasks.
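A minimal sketch of the freeze/thaw cycle (the image and container names here are hypothetical):

```sh
# Freeze-dry: bake the ecosystem into an image, once, at configuration time.
docker build -t my-app .

# Thaw: launch a container from the image; the process takes up
# residence for as long as its tasks require.
docker run --name my-app-1 my-app
```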

The magic is almost entirely down to simple consistency. The consistent ecosystem is perfectly suited to the consistent processes. If anything goes awry over time, we just stop the failing container and start anew from a pristine state, with a newly thawed ecosystem and a new process taking up residence therein.
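In command form, roughly (continuing the hypothetical names from above):

```sh
# Something went awry: discard the drifted container entirely...
docker stop my-app-1
docker rm my-app-1

# ...and start fresh from the same pristine image.
docker run --name my-app-1 my-app
```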

One especially helpful side effect of constructing specialized and controlled ecosystems is the ease with which they can be shared with others. If I construct an image carefully, the container you invoke from it will run the same on your computer as it does on mine.
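Sharing is just moving the image around; a sketch, with a hypothetical registry and tag:

```sh
# On my computer: publish the frozen ecosystem.
docker tag my-app registry.example.com/my-app:1.0
docker push registry.example.com/my-app:1.0

# On your computer: pull the very same ecosystem and run the same container.
docker pull registry.example.com/my-app:1.0
docker run registry.example.com/my-app:1.0
```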

Consistency. That's magic.

# Separation of Concerns (config from runtime)

The structure of images imposes on their authors a lot of up-front work understanding and organizing the necessary ecosystem and the processes it will host. Exactly which versions of which packages are required? Which programming language? Where exactly must the configurations live on the file system? Exactly what permissions are required? Exactly where do the inputs come from and to where exactly do we send the outputs?
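The Dockerfile is where those answers get written down. A hypothetical sketch, roughly one instruction per question (the app, package versions, and user name are illustrative assumptions, not a prescription):

```dockerfile
# Exactly which programming language, at exactly which version: a pinned base image.
FROM python:3.12-slim

# Exactly which packages are required, at exactly which versions.
RUN pip install --no-cache-dir flask==3.0.3

# Exactly where the code and configuration live on the file system.
WORKDIR /app
COPY app.py /app/app.py

# Exactly what permissions: run as an unprivileged user.
RUN useradd --create-home appuser
USER appuser

# Exactly where the outputs go: a declared port, and a declared entry process.
EXPOSE 8000
CMD ["python", "app.py"]
```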

The image author is forced to think through the configuration ahead of time. Everything they learn becomes encoded into the ecosystem, and the ecosystem itself is what gets shared.

Runtime is where our processes actually interact with the world. And the operator who types `docker run ... some-image` is empowered to enhance or alter the ecosystem, or to launch different processes within it, following or ignoring the author's expressed intent. The options provided to `docker run` allow the operator to connect different ecosystems: by shared filesystem, by internal networking between containers, or by external networking with the outside world. Each running container takes up its ecological niche, and the complete program emerges from the interactions between all these digital organisms.
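A few sketches of the operator's powers (the image, network, and path names are hypothetical):

```sh
# Graft a piece of the host filesystem into the ecosystem.
docker run -v /srv/data:/app/data my-app

# Join an internal network shared with other containers.
docker run --network backend my-app

# Open a port to the outside world.
docker run -p 8000:8000 my-app

# Ignore the author's intent and launch a different process in the same ecosystem.
docker run my-app /bin/sh
```

Same image, different niches: every invocation is the operator deciding how this organism fits into the larger ecology.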