100 Days of Cloud – Day 88: Azure Kubernetes Service

It’s Day 88 of my 100 Days of Cloud journey, and as promised, in today’s post I’ve finally gotten to Azure Kubernetes Service.

On Day 86, we introduced the components that make up Kubernetes, the tools used to manage the environment, and some considerations to be aware of when using Kubernetes. In the last post, we installed a local Kubernetes Cluster using Minikube.

Today we move on to Azure Kubernetes Service and we’ll look first at how this differs in architecture from an on-premises installation of Kubernetes.

Azure Kubernetes Service

As always, let’s start with the definition – Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. The operational overhead is offloaded to Azure, which handles critical tasks such as health monitoring and maintenance.

When you create an AKS cluster, a control plane or master node is automatically created and configured, and provided at no cost as a managed Azure resource. You only pay for the nodes attached to the AKS cluster. The control plane and its resources reside only in the region where you created the cluster.

Image Credit: Microsoft

AKS Cluster Nodes are run on Azure Virtual Machines (which can be either Linux or Windows Server 2019), so you can size your nodes based on the storage, CPU, memory and type that you require for your workloads. These are billed as standard VMs so any discounts (including reservations) are automatically applied.

It’s important to note, though, that VM sizes with fewer than 2 CPUs may not be used with AKS – this is to ensure that the required system pods and applications can run reliably.

When you scale out the number of nodes, Azure automatically creates and configures the requested number of VMs. Nodes of the same configuration are grouped into Node Pools, and you define the number of nodes required in a pool during initial setup (which we’ll see below).
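As an aside, scaling doesn’t have to happen in the portal – assuming the resource names used later in this post, a manual scale operation from the Azure CLI looks something like this:

az aks scale --resource-group MD-AKS-Test --name MD-AKS-Test-Cluster --node-count 3

and an additional pool can be added with az aks nodepool add --resource-group MD-AKS-Test --cluster-name MD-AKS-Test-Cluster --name pool2 --node-count 2.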

Azure has the following limits:

  • Maximum of 5000 Clusters per subscription
  • Maximum of 100 Nodes per cluster with Virtual Machine Availability Sets and Basic Load Balancer SKU
  • Maximum of 1000 Nodes per cluster with Virtual Machine Scale Sets and Standard Load Balancer SKU
  • Maximum of 100 Node Pools per cluster
  • Maximum of 250 Pods per node

When you create a cluster using the Azure portal, you can choose a preset configuration to quickly customize based on your scenario. You can modify any of the preset values at any time.

  • Standard – Works well with most applications.
  • Dev/Test – Use this if experimenting with AKS or deploying a test application.
  • Cost-optimized – Reduces costs on production workloads that can tolerate interruptions.
  • Batch processing – Best for machine learning, compute-intensive, and graphics-intensive workloads. Suited for applications requiring fast scale-up and scale-out of the cluster.
  • Hardened access – Best for large enterprises that need full control of security and stability.

If we go into the Portal and “Create a Resource”, select “Containers” from the categories and click on “Create” under Kubernetes Service:

As we can see, this brings us to the screen for creating our Cluster. As always, we need to select a Subscription and Resource Group. Below this is where it gets interesting, as we can see the preset configurations that we described above:

We can see that “Standard ($$)” is selected by default, and if we click on “Learn more and compare presets”, we get a screen showing us details of each option:

I’m going to select “Dev/Test ($)” and click apply to come back to the Basics screen. I now give the Cluster a name and select a region. We can also see that I can select different Kubernetes versions from the dropdown:

Finally on this screen, we select the Node Pool options: the Node size (you can pick whatever VM size meets your needs), manual or automatic scaling, and the Node Count:
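As a side note, autoscaling isn’t a one-time decision made on this screen – it can also be enabled on an existing cluster from the Azure CLI with something like this (the min and max counts are just example values):

az aks update --resource-group MD-AKS-Test --name MD-AKS-Test-Cluster --enable-cluster-autoscaler --min-count 1 --max-count 3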

We click next and move on to the “Node Pools” screen, where we can add other Node Pools and select encryption options:

The next screen is “Access” where we can specify RBAC access and also AKS-managed Azure AD which controls access using Azure AD Group membership. Note that this option cannot be disabled after it is enabled:

The next screen is Networking, and this is where things get more involved – we can use kubenet to create a VNet using default values, or Azure CNI (Container Networking Interface), which allows you to specify a subnet from your own managed VNets. We can also specify Network Policies to define rules for ingress and egress traffic in and out of the cluster.
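For comparison, here’s a sketch of what choosing Azure CNI looks like when creating a cluster from the Azure CLI instead – the subnet ID below is a placeholder that you’d swap for one of your own:

az aks create --resource-group MD-AKS-Test --name MD-AKS-Test-Cluster --network-plugin azure --network-policy azure --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/MD-AKS-Test/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"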

The next screen is Integrations, where we can integrate with Azure Container Registry and also enable Azure Monitor and Azure Policy.

At this point, we can click Review and Create and go make a cup of tea while that’s being created.

And once that’s done (the deployment, not the tea…), we can see the Cluster has been created:

One interesting thing to note – the cluster has been created in my “MD-AKS-Test” Resource Group, however a second RG has been created that contains the NSG, Route Table, VNet, Load Balancer, Managed Identity and Scale Set, so it’s separating the underlying management components from the main cluster resource.

So at this point, we need to jump into Cloud Shell and manage the cluster from there. When we launch Cloud Shell and the prompt appears, run:

az aks get-credentials --resource-group MD-AKS-Test --name MD-AKS-Test-Cluster

This sets our cluster as the current context in the Cloud Shell and allows us to run kubectl commands against it. We can now run kubectl get nodes to show us the status of the nodes in our node pool:

At this point, you are ready to deploy an application into your Cluster! You can use the process as described here to create your YAML file and deploy and test the sample Azure Voting App.
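If you don’t want to follow the full walkthrough, the manifest below is a trimmed-down sketch of that sample from memory – the images and the REDIS environment variable follow the Azure docs sample, but treat the linked walkthrough as the canonical version. Save it as azure-vote.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        env:
        - name: REDIS
          value: "azure-vote-back"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

Then deploy it with kubectl apply -f azure-vote.yaml. Once this is deployed, you can check the “Workloads” menu from your cluster in the Portal to see that it’s running: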

If we click into either of the “azure-vote” deployments, we can see the underlying Pod in place with its internal IP and the node it’s assigned to:

To delete the cluster, run az aks delete --resource-group MD-AKS-Test --name MD-AKS-Test-Cluster --yes --no-wait.

Azure Kubernetes Service or run your own Kubernetes Cluster?

So this is the million dollar question, and there really is no correct answer – it depends on your particular use case.

Let’s try to break it down this way – deploying and operating your own Kubernetes cluster is complex, and will require more work to get the underlying technology set up, such as networking, monitoring, identity management and storage.

The flip side is that AKS is a much faster way to get up and running with Kubernetes, and you have full access to technologies such as Azure AD and Azure Key Vault, but you don’t have access to your control plane or master nodes. There is also the cost element to think of, as Kubernetes can get expensive running in the cloud depending on how much you decide to scale.

Conclusion

So that’s a look at Azure Kubernetes Service, and also the benefits of running Kubernetes in Azure versus on-premises.

The last few posts have only really scratched the surface of Kubernetes – there is a lot to learn about the technology, and a steep learning curve. One thing is for sure: Kubernetes is a really hot technology right now, and there is huge demand for people who have it as a skill.

If you want to follow some folks who know their Kubernetes inside out, the people I would recommend are:

  • Chad Crowell who you can follow on Twitter or his blog. Chad also has an excellent Kubernetes from Scratch course over at CloudSkills.io containing over 30 real world projects to help you ramp up on Kubernetes.
  • Michael Levan who you can follow from all his socials on Linktree and who has published multiple content pieces on his social channels.
  • Richard Hooper (aka Pixel Robots and Microsoft Azure MVP) who you can follow on Twitter or his blog which contains in-depth blog posts and scenarios for AKS. Richard also co-hosts the Azure Cloud Native user group which you can find on Meetup.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 87: Installing and Configuring Kubernetes

It’s Day 87 of my 100 Days of Cloud journey, and as promised, in today’s post I’m going to install and configure Kubernetes locally using Minikube.

In the last post, we listed all of the components that make up Kubernetes, the tools used to manage the environment, and some considerations you need to be aware of when using Kubernetes.

Local Kubernetes – Minikube

I’m going to install Minikube on my Windows laptop, however you can also install it on Linux and macOS if that’s your preference. These are the requirements to install Minikube:

  • 2 CPUs or more
  • 2GB of free memory
  • 20GB of free disk space
  • Internet connection
  • Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation

So we download the latest stable release of Minikube from here. The installation is straightforward, and will display this screen once completed:

Now, we open an administrative PowerShell session and run minikube start in order to start our cluster. Note that because I’m running this on Windows 10, minikube automatically tried to create the cluster in Hyper-V. Therefore, I needed to run minikube start --driver=docker in order to force minikube to use Docker to create the cluster.
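As a small quality-of-life tweak, you can set Docker as the default driver so that the flag isn’t needed on every start:

minikube config set driver docker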

So we can see from the output above that the cluster has been created successfully. The eagle-eyed will also notice that we are using Kubernetes version 1.23.3, which is not the latest version. This is because Kubernetes no longer supports Docker as a container runtime as of version 1.24. Full support will be provided up to April 2023 for all versions up to 1.23 that run Docker. I’ve decided to base this build around Docker as I know it, but you can read more about the changes here and how they affect existing deployments here.

So we move on, and the first thing we need to do is install kubectl. You can do this directly by running minikube kubectl -- get po -A, which will install the appropriate version for your OS and then run the get po -A command to list the pods in all namespaces.

We can see that this has listed all of the pods running in the cluster’s namespaces. We can also run minikube dashboard to launch a graphical view of all aspects of our Cluster:

Now that we’re up and running, let’s do a sample webserver deployment. We run the following commands (as we can see, the image is coming from k8s.gcr.io, which is the Google Container Registry):

kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080

Now let’s run kubectl get services hello-minikube to check if the deployment is running:

And if we now look in the Dashboard, we can see that the deployment is running:

Now we can use kubectl port-forward service/hello-minikube 7080:8080 to expose the service on http://localhost:7080/, and when we browse to that we can see the metadata values returned:

And that’s effectively it – your local cluster is running. You can also try running another image from the Google Container Registry; the full list of images can be found at the link here.
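For example, to stand up a plain nginx webserver (pulled from Docker Hub this time rather than GCR), the same pattern applies – the deployment name and the local port 7081 below are arbitrary choices:

kubectl create deployment nginx-demo --image=nginx
kubectl expose deployment nginx-demo --type=NodePort --port=80
kubectl port-forward service/nginx-demo 7081:80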

Finally, here are a number of commands that are useful to know when running minikube:

minikube pause – Pause Kubernetes without impacting deployed applications
minikube unpause – Unpause a paused instance
minikube stop – Halt the cluster
minikube config set memory 16384 – Increase the default memory limit (requires a restart)
minikube addons list – Browse the catalog of easily installed Kubernetes services
minikube start -p aged --kubernetes-version=v1.16.1 – Create a second cluster running an older Kubernetes release (this is potentially useful given Docker is no longer supported)
minikube delete --all – Delete all of the minikube clusters

You can find all of the information you need on Minikube including documentation and tutorials here at the official site.

Conclusion

So that’s how we can run Kubernetes locally using Minikube. Slight change of plan: I’m going to do the Azure Kubernetes Service install in the next post, as we’ll go in-depth with that and look at the differences in architecture between running Kubernetes locally and in a cloud service.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 86: Introduction to Kubernetes

It’s Day 86 of my 100 Days of Cloud journey, and in today’s post I’m going to give an introduction to Kubernetes.

We introduced Containers on Day 81 and gave an overview of how they work and how they differ in architecture when compared to traditional Bare Metal Physical or Virtual Infrastructure. A container is a lightweight environment that can be used to build and securely run applications and their dependencies. We need container management tools such as Docker to run commands and manage our containers.

Image Credit – Jenny Fong/Docker

Containers Recap

We saw how easy it is to deploy and manage containers during the series where I built a monitoring system using a telegraf agent to pull data into an InfluxDB docker container, and then used a Grafana Container to display metrics from the time series database.

So let’s get back to that for a minute and understand a few points about that system:

  • The Docker Host was an Ubuntu Server VM, so we can assume that it ran in a highly available environment – either an on-premises Virtual Cluster such as Hyper-V or VMware or on a Public Cloud VM such as an Azure Virtual Machine or an Amazon EC2 Instance.
  • It took data in from a single datasource, which was brought into a single time series database, which then was presented on a single dashboard.
  • So altogether we had 1 host VM and 2 containers. Because the containers and datasource were static, there was no need for scaling or complex management tasks. The containers were run with persistent storage configured, the underlying apps were configured and after that the system just happily ran.

So in effect, that was a static system that required little or no management after creation. But we also had no means of scaling it if required.

What if we wanted to build something more complex, like an application with multiple layers, where there is a requirement to scale out apps, respond to increased demand by deploying more container instances, and scale back as demand decreases?

This is where container orchestration technologies are useful because they can handle this for you. A container orchestrator is a system that automatically deploys and manages containerized apps. It can dynamically respond to changes in the environment to increase or decrease the deployed instances of the managed app. Or, it can ensure all deployed container instances get updated if a new version of a service is released.

And this is where Kubernetes comes in!

Kubernetes Overview

Kubernetes is an open-source platform created by Google for managing and orchestrating containerized workloads. Kubernetes is also known as “K8s”, and can run any Linux container across private, public, and hybrid cloud environments. Kubernetes allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.

The benefits of using Kubernetes include:

  • Self-healing of failed containers
  • Automatic scaling of workloads in response to demand
  • Automated rollouts and rollbacks of application versions
  • Service discovery and load balancing
  • Storage orchestration

It’s important to note, though, that all of these tasks require configuration and a good understanding of the underlying technologies. You need to understand concepts such as virtual networks, load balancers, and reverse proxies to configure Kubernetes networking.

Kubernetes Components

Image Credit – Microsoft

A Kubernetes cluster consists of:

  • A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
  • A master node, or control plane, which manages the worker nodes and the Pods in the cluster.

Let’s take a look at the components contained in each of these.

Control Plane or Master Node

Image Credit – Microsoft

The following services make up the control plane for a Kubernetes cluster (as we’ll see just after the list, you can view most of these running on a live cluster):

  • API server – the front end to the control plane in your Kubernetes cluster. All the communication between the components in Kubernetes is done through this API.
  • Backing store – used by Kubernetes to save the complete configuration of a Kubernetes cluster. A key-value store called etcd stores the current state and the desired state of all objects within your cluster.
  • Scheduler – responsible for the assignment of workloads across all nodes. The scheduler monitors the cluster for newly created containers, and assigns them to nodes.
  • Controller manager – tracks the state of objects in the cluster. There are controllers to monitor nodes, containers, and endpoints.
  • Cloud controller manager – integrates with the underlying cloud technologies in your cluster when the cluster is running in a cloud environment. These services can be load balancers, queues, and storage.
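On a live cluster, most of these components are themselves visible as pods in the kube-system namespace, so a quick way to put names to faces is:

kubectl get pods -n kube-system

Note that this is most instructive on a self-managed cluster such as the Minikube one we’ll build in the next post – on a managed service like AKS, the control plane is hidden from you, so you’ll only see node-level components here.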

Worker Machines or Nodes

Image Credit – Microsoft

The following services run on the Kubernetes node:

  • Kubelet – The kubelet is the agent that runs on each node in the cluster, and monitors work requests from the API server. It monitors the nodes and makes sure that the containers scheduled on each node run, as expected.
  • Kube-proxy – The kube-proxy component is responsible for local cluster networking, and runs on each node. It maintains the network rules that route service traffic to the right pods.
  • Container runtime – the underlying software that runs containers on a Kubernetes cluster. The runtime is responsible for fetching, starting, and stopping container images.

Pods

Image Credit – Microsoft

Unlike in a Docker environment, you can’t run containers directly on Kubernetes. You package the container into a Kubernetes object called a pod, which is effectively a container with all of the management overhead stripped away and passed back to the Kubernetes Cluster.

A pod can contain multiple containers that make up part of or all of your application; however, in general a pod will never contain multiple instances of the same application. For example, a web server container might share a pod with a sidecar container that collects its logs, whereas a website and its database back-end would normally run in separate pods.

A pod also includes information about the shared storage and network configuration, and YAML templates that define how to run the containers in the pod.
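To make that concrete, here’s a minimal sketch of a pod definition – the names and the nginx image are purely illustrative, and in practice you’d usually let a higher-level object like a Deployment create pods for you:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80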

Managing your Kubernetes environment

You have a number of options for managing your Kubernetes environment:

  • kubectl – You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. kubectl can be installed on Linux, macOS and Windows platforms.
  • kind – this is used for running Kubernetes on your local device.
  • minikube – similar to kind in that it allows you to run Kubernetes locally.
  • kubeadm – this is used to create and manage Kubernetes clusters in a user-friendly way.

kubectl is by far the most used in enterprise Kubernetes environments, and you can find more details in the documentation here.
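To give a flavour of day-to-day usage, here are a few common kubectl commands (the pod name and file name are placeholders):

kubectl get nodes – List the nodes in the cluster and their status
kubectl get pods -A – List the pods running in all namespaces
kubectl describe pod <pod-name> – Show detailed state and recent events for a pod
kubectl logs <pod-name> – View the logs from a pod’s container
kubectl apply -f app.yaml – Create or update resources defined in a YAML file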

Important Considerations

While Kubernetes provides an orchestration platform that lets you run your clusters and scale as required, there are certain things you need to be aware of that it cannot do, such as:

  • Deployment, scaling, load balancing, logging, and monitoring are all optional. You need to configure these to fit your specific solution requirements.
  • There is no limit to the types of apps that can run – if it can run in a container, it can run on Kubernetes.
  • Kubernetes doesn’t provide middleware, data-processing frameworks, databases, caches, or cluster storage systems.
  • A container runtime such as Docker is required for managing containers.
  • You need to manage the underlying environment that Kubernetes runs on (memory, networking, storage etc), and also manage upgrades to the Kubernetes platform itself.

Azure Kubernetes Service

All of the above considerations, and indeed all of the sections we’ve covered in this post, require detailed knowledge of both Kubernetes and the underlying dependencies. This overhead is removed in some part by cloud services such as Azure Kubernetes Service (AKS), which reduces these challenges by providing a hosted Kubernetes environment.

As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes.

You can create an AKS cluster using:

  • The Azure CLI
  • The Azure portal
  • Azure PowerShell
  • Template-driven deployment options, like Azure Resource Manager templates, Bicep and Terraform.

When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
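To give a feel for the CLI route, a basic deployment might look something like this – all of the names here are example values, and the monitoring add-on is optional:

az group create --name myResourceGroup --location westeurope
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --enable-addons monitoring --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster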

Conclusion

And that’s a description of Kubernetes: how it works, why it’s useful and the components contained within it. In the next post, we’re going to put all that theory into practice and set up both a local Kubernetes Cluster using minikube, and also look at deploying a cluster onto Azure Kubernetes Service.

Hope you enjoyed this post, until next time!