It’s Day 81 of my 100 Days of Cloud journey, and in today’s post I’m going to give an introduction to containers.

I’m building up to looking at Kubernetes in later posts, but as the saying goes, “we need to walk before we can run”, so it’s important that we understand the fundamentals of containers before we dive into container orchestration.
Containers v Virtualization
Let’s start with the comparison that we all make and look at the differences between containers and virtualization. Before that though, let’s reverse even further into the mists of time…
- Bare Metal or Physical Servers

Back in the “good old days” (were they really that good?), you needed a Physical Server to run each one of your applications (or if you were really brave, you ran multiple applications on a single server). These were normally large noisy beasts that took up half a rack in your datacenter. You had a single operating system per machine, and recovery generally took time, as the system needed to be rebuilt in full up to the application layer before any data recovery could be performed.
- Virtualization

As processing power and capacity increased, applications running on physical servers were unable to utilise all of the extra resources available, which left a lot of capacity sitting unused. At this point, Virtualization enabled us to install a hypervisor which ran on the physical servers. This allowed us to create Virtual Machines that ran alongside each other on the physical hardware.
Each VM can run its own unique guest operating system, and different VMs running on the same hypervisor can run different operating systems and versions. The hypervisor assigns resources to each VM from the underlying physical resource pool, based on either static values or dynamic values which scale up or down with the VM’s resource demands.
The main benefits that virtualization gives:
- The ability to consolidate applications onto a single system, which gave huge cost savings.
- Reduced datacenter footprint.
- Faster Server provisioning and improved backup and disaster recovery timelines.
- In the development lifecycle, instead of another monster server being purchased and configured, a VM could be quickly spun up which mirrored the Production environment and could be used for the different stages of the development process (Dev/QA/Testing etc.).
There are drawbacks though, and the main ones are:
- Each VM has its own OS, memory and CPU resources assigned, which adds to resource overhead and storage footprint. So all of that spare capacity we talked about above gets used very quickly.
- Although we talked about the advantage of having separate environments for the development lifecycle, the portability of these applications between the different stages of the lifecycle is limited in most cases to the backup and restore method.
- Containers

Finally, we get to the latest evolution of compute which is Containers. A container is a lightweight environment that can be used to build and securely run applications and their dependencies.
A container works like a virtual machine in that it utilizes the underlying resources offered by the Container Host, but instead of packaging your code with a full Operating System, each container only contains the code and dependencies needed to run the application and runs as a process on the host operating system, sharing the host’s kernel. This means that containers are smaller and more portable, and much faster to deploy and run.
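A quick way to see this in practice (a rough sketch, assuming Docker is already installed and the alpine image is available on Docker Hub):

```
# Start a throwaway Alpine Linux container and print the kernel version it sees.
# Because containers share the host's kernel, this shows the host kernel version
# rather than a separate guest OS kernel like a VM would have.
docker run --rm alpine uname -r

# Containers start in seconds because there is no guest OS to boot.
docker run --rm alpine echo "Hello from a container"
```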
So how do I run Containers?
In on-premises and test environments, Windows Containers ship as a built-in feature on the majority of Windows Client and Server Operating Systems. However, for the majority of people who use containers, Docker is the platform of choice.

Docker is a containerization platform used to develop, ship, and run containers. It doesn’t use a hypervisor, and you can run Docker on your desktop or laptop if you’re developing and testing applications.
The desktop version of Docker supports Linux, Windows, and macOS. For production systems, Docker is available for server environments, including many variants of Linux and Microsoft Windows Server 2016 and above.
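If you want to try it out on a Linux test machine, Docker publishes a convenience script for non-production installs (a sketch only; for production systems and Windows Server, follow the official install documentation):

```
# Download and run Docker's convenience install script (test/dev use only)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Confirm the Docker CLI is installed
docker --version
```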
When you install Docker on either your Linux or Windows environment, this installs the Docker Engine, which contains the components below (there’s a short example of the client and server interacting after the list):
- Docker client – command-line application named docker that provides us with a CLI to interact with a Docker server. The docker command uses the Docker REST API to send instructions to either a local or remote server and functions as the primary interface we use to manage our containers.
- Docker server – The dockerd daemon responds to requests from the client via the Docker REST API and can interact with other daemons. The Docker server is also responsible for tracking the lifecycle of our containers.
- Docker objects – there are several objects that you’ll create and configure to support your container deployments. These include networks, storage volumes, plugins, and other service objects. We’ll take a look at these in the next post when we demo the setup of Docker.
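To see the client and server halves talking to each other, a couple of commands are enough (assuming Docker is installed and the daemon is running):

```
# "docker version" reports on both halves of the Engine: a "Client" section
# for the docker CLI and a "Server" section for the dockerd daemon it reached
# over the Docker REST API.
docker version

# "docker run" is the client asking the daemon to pull the hello-world image
# (if it isn't already cached locally), create a container from it and start it.
docker run hello-world
```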
So where do I get the Containers from?
Docker provides the world’s largest repository of container images, called Docker Hub. This is a public repository and contains ready-made containers from official vendors (such as WordPress, MongoDB, MariaDB, InfluxDB, Grafana, Jenkins, Tomcat, Apache Server) as well as bespoke containers that have been contributed by developers all over the world.
So there is effectively a container image for almost every scenario. And if you need to create one for your own scenario, you can pull an existing image from Docker Hub, make your changes, push it back up to Docker Hub, and mark it as public and available for use.
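As a rough sketch of that pull/change/push workflow (the Docker ID and image names below are placeholders, not real repositories):

```
# Pull an official image from Docker Hub (the default registry)
docker pull nginx

# Make your changes - typically by building a new image from a Dockerfile that
# starts with "FROM nginx". Here we simply re-tag the pulled image under a
# Docker Hub account ("mydockerid" stands in for your own Docker ID).
docker tag nginx mydockerid/my-custom-nginx:1.0

# Log in and push the image back up to Docker Hub
docker login
docker push mydockerid/my-custom-nginx:1.0
```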
But what if I don’t want to store my container images in a public registry?
That’s where the Private Container Registry option comes in. Your organization or team can have access to a private registry where you can store the images that are in use in your environment. This is particularly useful when you want version control and governance over the images used in your environment.
For example, if you want to run InfluxDB and run the command to pull the InfluxDB container from Docker Hub, by default you will get the latest stable version (2.2 at the time of writing). However, your application may only support version 1.8, so you need to specify that tag when pulling from the registry.
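In Docker CLI terms, that just means adding the tag to the image name (the tags shown here are illustrative; check the repository on Docker Hub for the tags that actually exist):

```
# No tag specified - Docker pulls the image tagged "latest"
docker pull influxdb

# Pin to a specific version by specifying the tag explicitly
docker pull influxdb:1.8
```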
Because images are pulled from Docker Hub by default, you need to specify the address of your Private Container Registry as part of the image name when pulling images.
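For example (the registry address below is a made-up placeholder):

```
# Image names with no registry prefix are pulled from Docker Hub by default.
# To pull from a private registry instead, prefix the image name with the
# registry's address ("myregistry.azurecr.io" is a placeholder here).
docker pull myregistry.azurecr.io/influxdb:1.8
```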
There are a number of different options for where to store your Private Container Registry:
- Docker Hub, which also allows companies to host private repositories directly
- Azure Container Registry
- Amazon Elastic Container Registry
- Google Container Registry
- IBM Container Registry
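Whichever you choose, getting an image into the registry looks much the same. As a sketch using an Azure Container Registry (the registry name is a placeholder, and the az command assumes the Azure CLI is installed):

```
# Authenticate to the registry - with Azure Container Registry you can use
# "docker login myregistry.azurecr.io" or the Azure CLI helper below
az acr login --name myregistry

# Tag a local image with the registry's address, then push it
docker tag influxdb:1.8 myregistry.azurecr.io/influxdb:1.8
docker push myregistry.azurecr.io/influxdb:1.8
```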
Conclusion
So that’s a brief overview of containers and how Docker has become the most widely used platform for managing them. In the next post, we’ll look at setting up a Docker Host machine and creating an Azure Container Registry to house our private Docker Images.
Hope you enjoyed this post, until next time!