0. Getting started with Docker Containers

elevysi · 01 November 2019

Traditionally, software is built within a development environment. Once the developers are done with their work and ready to put it to the test, they usually move the software into a test environment. Once the tests are successful, the software goes into production to be used by the end user. The packaging and shipping of the software from development to test to production is not always smooth and swift. We often encounter problems such as:

  • Packaging challenges and portability issues
  • Different working environments (different OS versions)
  • Different configurations and scripts from one environment to another and other organizational matters
  • Different policies such as data access, security constraints
  • Making the new software fit in with the existing components of each environment
  • Ensuring that the different people involved in the process, from development to deployment, all speak the same language and are able to run their part smoothly

These are all factors that can cause considerable problems and lead to a loss of scarce resources that could otherwise be avoided with the use of containers. Containers provide solutions to the shortcomings listed above through an isolated run environment built on top of a Linux system. The most popular containerization platform is Docker, and its name is revealing of the job it does. The name comes from seaports, where it described the workers who loaded and unloaded items onto and from ships as they docked. Their challenge consisted in finding ways of loading the items in the most space- and cost-effective way possible. This was by no means an easy task, as different combinations could result in very different outcomes. The problem was ultimately solved by the use of standardized containers and docking machinery such as cranes: machines ready to accommodate any predictable shape of item, no matter what its content, and load it onto the ship.

There are two factors we need to focus on:

  1. Predictable: the shape to be loaded onto ships needs to have been planned, otherwise the loading will fail. At the seaport, the most common shapes to be loaded onto the ships are standardized cubic containers.
  2. Worry-free from content: when containers are being loaded onto the ship, it does not matter what is contained inside them, as long as their shape has been predicted and can be handled, for instance, by a crane.

With such background, we can guess what the Docker platform aims to do for software. As long as the software to be deployed has a predictable shape, it does not matter what the software does. For Docker, the shape is any software system intended to run on a Linux system, such as desktop, web, database, or mail applications (Docker has since been extended to also support Windows Server native applications). What the software does is not a concern for the Docker platform as long as it has been packaged as a Docker image. A good analogy for those familiar with Java is the Java Virtual Machine (JVM); Java is praised for its portability, as Java applications are packaged as archives and deployed on any host machine running the JVM.
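To make the packaging step concrete, here is a minimal, hypothetical Dockerfile sketching how a simple Python application could be packaged as a Docker image; the base image tag and the file names (`requirements.txt`, `app.py`) are illustrative assumptions, not taken from this article:

```dockerfile
# Start from a predictable, standardized base layer
FROM python:3.8-slim

# Copy the application and its declared dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Describe how the packaged software should run
CMD ["python", "app.py"]
```

Whatever `app.py` actually does is irrelevant to the platform: once built, the resulting image has the "predictable shape" Docker knows how to ship and run.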

Docker is an open source project for building, shipping, and running programs. It is a commandline program, a background process, and a set of remote services that take a logistical approach to solving common software problems and simplifying your experience installing, running, publishing, and removing software. It accomplishes this by using an operating system technology called containers.
Nickoloff, J. and Kuenzli, S. (2019). ‘Welcome to Docker’, in Docker in Action, Second Edition. Shelter Island: Manning Publications, p. 3.

In their book Docker in Practice, Ian Miell and Aidan Hobson Sayers outline use cases the Docker platform intends to address:

  • Replacing virtual machines: or rather, reducing the number of virtual machines needed
  • Prototyping software: quick and easy builds, installs, and deployments
  • Packaging software: provides shippable software images that can be run through a single command
  • Enabling a microservices architecture: migration to microservices is much easier through containers; isolation provides a clean way of working on one service without affecting the other services of the landscape
  • Enabling full-stack productivity while offline: no need for connectivity to access virtual hosts; one's own development machine can be used to simulate the production environment
  • Reducing debug overhead: issues are more easily reproduced and debugged, and each issue can be handled by its owner
  • Documenting software dependencies: each image documents its own set of dependencies
  • Enabling continuous delivery: quick and easy building, installation, and deployment, as well as packaging, shipping, and removal of software

Can we replace virtual machines with Docker?

There is a distinction between virtual machines and containerization: virtual machines optimize hardware resources by providing virtual hardware on which one can install an OS and run applications. Docker, on the other hand, provides isolated containers which interact with the host's Linux kernel.
Docker relies on a Linux kernel with built-in containerization capabilities; as such, Docker can be installed natively on Linux, while it requires a lightweight virtual machine on top of other operating systems such as macOS or Windows. It is therefore important to note that containers are not intended to replace virtual machines but rather to work alongside them. Containers offer a more agile way of deploying software compared to installing and firing up a virtual host. One could achieve the same level of isolation offered by Docker by dedicating a single virtual host to each application. While this would provide isolation, it can be quite resource intensive in terms of the number of virtual hosts involved, the work required to set them up, and the time needed to complete the set-up and the software deployment.

Containers, on the other hand, overcome this lack of optimization through their reusability and their layered storage mechanism; several containers can be deployed on a single virtual machine running Docker, and this in a matter of minutes if not seconds. All Docker needs is images that can be shipped and run as containers.
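As a sketch of that workflow, the commands below build a single image once and then start several isolated containers from it; the image and container names (`myapp`, `myapp-1`, `myapp-2`) are illustrative, and the commands assume a machine with a running Docker daemon:

```shell
# Build an image once from a Dockerfile in the current directory
docker build -t myapp .

# Run several isolated containers from that single image,
# each starting in seconds rather than minutes
docker run -d --name myapp-1 myapp
docker run -d --name myapp-2 myapp

# List the running containers
docker ps
```

Because the containers share the host kernel and the image's read-only layers, the second `docker run` does not duplicate the work of the first; each container only adds a thin writable layer of its own.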

Containers and images… what really is the difference?

Let’s use an analogy here. Remember learning about classes and objects in those object-oriented programming classes? We learned that classes are blueprints or templates that define which attributes an object will possess and which operations can be executed upon it. The objects, on the other hand, are the actual instantiations of the defined classes, i.e. materialized forms of the template defined by the class. With such an analogy, images can be mapped to classes while containers map to objects. We build images, which are templates to be instantiated as containers. A container hence needs an image to be built, but, as with classes, a single image can be instantiated into one or several containers. As defined by Ian Miell and Aidan Hobson Sayers in Docker in Practice, containers are running systems defined by images; an image is a collection of filesystem layers and some metadata, while a layer is a collection of changes to files.
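The class/object analogy can be sketched in a few lines of Python; note that `Image` and `Container` here are toy stand-ins for illustration, not Docker APIs:

```python
# Toy illustration of the image/container relationship;
# these classes are hypothetical stand-ins, not Docker APIs.

class Image:
    """A template: a name plus an ordered list of filesystem layers."""
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers  # each layer is a collection of file changes

class Container:
    """A running instance created from an image."""
    def __init__(self, image):
        self.image = image
        self.running = True

# One image can be instantiated into several containers,
# just as one class can be instantiated into several objects.
web_image = Image("webapp", layers=["base OS", "runtime", "app code"])
c1 = Container(web_image)
c2 = Container(web_image)

print(c1.image is c2.image)  # → True: both containers share the same template
```

The design point the analogy captures: the image is immutable and shared, while each container carries only its own runtime state.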

Let’s investigate further and see how Docker achieves this by looking at the platform’s architecture.