#42: Containers, Docker, Kubernetes and Serverless - explaining Container Orchestrators and Kubernetes

Over the current run of episodes, I am introducing a number of technologies from modern Software Delivery.

These are:

  • Containers
  • Docker
  • Kubernetes
  • And Serverless

There are "hot" technologies within Software Development at the moment.

They are helping Software Teams:

  • Get Better ROI when spending on Computer Servers
  • Improve their speed to market
  • Deliver more complex and ambitious solutions

These are technologies that your Development Team may want to use or may even be using.

In episode 40, I explained Virtualisation; a technology that makes the others possible.

In episode 41, I introduced Containers and the Docker Container format - a technology that allows us to achieve greater ROI from our physical servers, improves developer productivity and simplifies access to the tools developers need.

Towards the end of that episode I talked about how containers and the Microservice architecture from episode 17 are logical bedfellows and are gaining huge industry adoption.

The downside to this however is a level of complexity brought by having so many small "parts" to manage.

While it is considerably easier to think about and develop at a small scale, linking those small parts together correctly brings additional overheads compared to the big monolith on a single server setup.

To make this practical we need a Container Orchestrator - the subject of today's episode.


Published: Wed, 27 May 2020 12:40:43 GMT


Docker containers allowed us to package and distribute our applications much more easily - but they didn't help with the running of them.

Yes, we can start one up easily ... but rarely is any system of importance a single container.

They are more likely to be a combination of many containers.

Some systems grow to hundreds, if not thousands, of active containers.

Managing those containers at scale was simply too difficult to do manually.

We needed something to manage them on our behalf. Thus the birth of the Container Orchestrator.

Much like the conductor within a musical orchestra, the Container Orchestrator was responsible for successfully combining our various parts to make a cohesive whole.

The Container Orchestrator is responsible for making sure that our containers are running, healthy, able to communicate with each other correctly and can scale appropriately to the given work load.

So while the container represents the unit of work, the Orchestrator is responsible for making sure that it can do its work in the first place.
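This desired-versus-actual comparison is often described as a "reconcile loop". As a rough illustration of the idea (the names here are my own, not any real orchestrator's API), it can be sketched as:

```python
# Minimal sketch of an orchestrator's reconcile loop: compare the
# desired number of containers with what is actually running, and
# start or stop containers to close the gap. Illustrative only.

def reconcile(desired_count, running_containers, start, stop):
    """Return the actions taken to move actual state towards desired state."""
    actions = []
    if len(running_containers) < desired_count:
        # Too few running - start more.
        for _ in range(desired_count - len(running_containers)):
            actions.append(start())
    elif len(running_containers) > desired_count:
        # Too many running - stop the surplus.
        for container in running_containers[desired_count:]:
            actions.append(stop(container))
    return actions
```

A real orchestrator runs this comparison continuously, for every container definition it knows about.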

The first thing our Orchestrator does is make sure that we have the capacity to start our container.

Rather than look at the capacity of a single physical machine, the Orchestrator will normally have multiple machines at its disposal.

It has the job of spreading the load across all of those machines - both to achieve maximum utilisation and to provide resilience.

By spreading the load across physical machines we start to be resilient to the failure of a particular physical machine.
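That placement decision can be sketched very simply - pick the machine with the most free room. This is an illustration only; real schedulers weigh many more factors, such as memory, CPU and placement rules:

```python
# Sketch of a simple scheduler: place each new container on the
# machine with the most free capacity, spreading load for both
# utilisation and resilience. Illustrative names and units only.

def schedule(container_cost, nodes):
    """nodes: dict of machine name -> free capacity.
    Returns the chosen machine, or None if nothing has room."""
    # Consider only machines with enough free capacity.
    candidates = [(free, name) for name, free in nodes.items() if free >= container_cost]
    if not candidates:
        return None  # no machine has room - the container stays pending
    free, name = max(candidates)   # most free capacity wins
    nodes[name] -= container_cost  # reserve the capacity on that machine
    return name
```

Calling `schedule` repeatedly naturally spreads containers across machines, because each placement reduces the winner's free capacity.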

Once the Orchestrator has started the container it is responsible for maintaining its health.

If for any reason the container is unable to run, then the Orchestrator will restart the container.

For example, if one of the physical servers is switched off, then the Orchestrator will attempt to restart the affected containers on another physical server - space allowing of course.

This can significantly reduce downtime due to system failure. In many cases the Orchestrator can self-heal before anyone actually notices there was a problem.
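The self-healing behaviour boils down to a simple idea: check the health of each container, and restart any that have failed. A minimal sketch of that idea (again, my own illustrative names, not a real orchestrator's API):

```python
# Sketch of self-healing: check every container's health and
# restart any that have failed - ideally before users notice.

def heal(containers, is_healthy, restart):
    """Restart every unhealthy container; return the names restarted."""
    restarted = []
    for name in containers:
        if not is_healthy(name):
            restart(name)
            restarted.append(name)
    return restarted
```

An orchestrator runs a loop like this constantly, typically probing each container every few seconds.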

The Orchestrator also handles all the work of interaction between the individual containers.

Almost every solution will be made of a collection of containers - so their ability to find and communicate with each other effectively is essential.

How well would a business run if we didn't know who our colleagues were and what they did?

Prior to Orchestrators, this task could take considerable manual effort to configure and maintain - and was commonly a source of faults.
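The mechanism behind this is usually called "service discovery": containers register under a well-known service name, and other containers look the name up rather than hard-coding addresses. A small sketch of the bookkeeping an orchestrator automates (illustrative, not any real orchestrator's API):

```python
# Sketch of service discovery: containers register their address
# under a service name; consumers look the name up at call time.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> list of addresses

    def register(self, service_name, address):
        """A container announces itself as an instance of a service."""
        self._services.setdefault(service_name, []).append(address)

    def deregister(self, service_name, address):
        """Remove an instance, e.g. when its container stops."""
        self._services.get(service_name, []).remove(address)

    def lookup(self, service_name):
        """Return all known addresses for a service (empty if unknown)."""
        return list(self._services.get(service_name, []))
```

Because registration happens automatically as containers start and stop, consumers always see a current list of instances - the manual, error-prone configuration disappears.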

And finally our Orchestrator gives us the ability to scale the container.

The Orchestrator is responsible for knowing whether a single instance of the container should be run. Or 10. Or 100. Or as many as are needed to handle the current workload - which could be none, all the way up to thousands.

Say for example you have your website in a container.

You would probably want multiple instances of that website available - spread across multiple physical servers or even possibly geographic locations.

The Orchestrator can not only do that, it can also maintain that arrangement should a failure occur (as I described in the self-healing section).

It can also increase the number of instances if demand grows beyond the norm.

Say for example you run a TV campaign - I've often seen such events produce a massive short term spike in traffic.

The Orchestrator can react to that spike in real time and spin up additional copies of your website to handle that spike.
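At its heart, that autoscaling decision is a simple calculation: how many copies do we need so that each handles no more than its share of the load, bounded by a configured minimum and maximum? A sketch (the function name and units are mine, for illustration):

```python
# Sketch of reactive autoscaling: size the number of container
# replicas to the current workload, within configured bounds.
import math

def desired_replicas(current_load, capacity_per_replica,
                     min_replicas=1, max_replicas=100):
    """How many replicas so each handles at most capacity_per_replica load."""
    needed = math.ceil(current_load / capacity_per_replica)
    # Never go below the floor (availability) or above the ceiling (cost).
    return max(min_replicas, min(max_replicas, needed))
```

During a TV-campaign spike the orchestrator recomputes this continuously: as measured load rises, the desired replica count rises with it, and the reconcile behaviour described earlier starts the extra copies.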

The Container Orchestrator brings us ROI by:

  • Ensuring consistent load across our physical machines - giving better density
  • Starting and stopping containers based on load - again giving better density and being more resilient to inconsistent workloads
  • Self healing to avoid costly and embarrassing outages
  • Providing a platform to allow multiple containers to operate as a single system
  • And automating many previously manual tasks

Today Container Orchestration is largely thought about in terms of a product called Kubernetes.

In the same way the brand Hoover became synonymous with vacuum cleaners, Kubernetes is synonymous with Container Orchestration.

While there are and continue to be other forms of Container Orchestration, Kubernetes is accepted as the de facto winner.

It was developed by Google and is based on their own internal systems for handling workloads at huge scale.

Since its release it has flourished as the de facto tool for running containers.
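To give a flavour of how the ideas in this episode look in practice, here are a few genuine Kubernetes commands, issued through its `kubectl` command-line tool. They need access to a running cluster, so treat this as illustration rather than a tutorial (`web` and `nginx` are simply example names):

```shell
# Create a managed "deployment" running a container image
kubectl create deployment web --image=nginx

# Ask Kubernetes to keep 3 copies running - if one dies, it is replaced
kubectl scale deployment web --replicas=3

# Or let Kubernetes scale automatically between 2 and 10 copies,
# based on CPU load
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```

Behind each of these one-liners sit the scheduling, self-healing, discovery and scaling behaviours described above.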

It has received considerable investment from many large tech organisations - such as Microsoft and Amazon - and will continue to grow in its capabilities for the foreseeable future.


In the next episode I will move on to Serverless technology. I will also touch on these technologies in the Cloud and the future of these technologies as I foresee it.