Azure Container Hosting – which service should you use?

It's Christmas time, and that means it's time for another month of the always fantastic Festive Tech Calendar. This was one of the first events that I participated in when I was trying to break into blogging and public speaking, and I'm delighted to be involved again this year.

This year, the team is raising funds for Beatson Cancer Charity, which funds specialists, research and education to transform the way cancer care is delivered and to invest in a better future for cancer patients. You can make donations via the Just Giving page.

In this post, I'll walk through the extensive list of container hosting options that are available on Azure. I'll take a look at the Azure-native offerings, cover some third-party platforms that run on Azure, and then compare them on ease of use, hosting model, cost model, and typical use cases.

What counts as “Container Hosting” on Azure?

For this post I’m treating a “container hosting option” as:

A service where you can run your own Docker images as workloads, with Azure (or a partner) running the infrastructure.

There is an extensive list of options (and I will exclude a few below), but the main "go-to" options that I've seen in architecture discussions are:

  • Azure Container Apps
  • Azure Kubernetes Service (AKS)
  • Azure Container Instances (ACI)
  • Azure App Service (Web Apps for Containers)
  • Azure Service Fabric (with containers)
  • Azure Red Hat OpenShift (ARO) – OpenShift on Azure
  • Kubernetes platforms on Azure VMs or Azure VMware Solution (VMware Tanzu, Rancher, etc.)

But what about the humble, reliable Virtual Machine?

OK yes, it's still out there as an option – the Virtual Machine with Docker installed to run containers. And it's the place where most of us started on this journey (you can check out a blog series I wrote a few years ago here on the subject of getting started with running Docker on VMs).

There are still some situations where you will see a need for Virtual Machines to run containers, but as we'll see in the options below, this approach has largely been superseded by the range of offerings available on Azure that can run containers at every scale, from single instances right up to enterprise-level platforms.

Azure Container Instances (ACI)

Let's start with the smallest available form of hosting, which is Azure Container Instances. ACI is the "run a container right now" service – there are no virtual machines or orchestrators to manage, and containers start within seconds on Azure's infrastructure. ACI provides a single container or a small group of containers (called a container group) on demand. This simplicity makes it essentially "containers-as-a-service".

You can run a container by issuing a single Azure CLI command. It’s completely managed by Azure: patching, underlying host OS, and other maintenance are invisible to the user. ACI also supports both Linux and Windows containers.
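As a minimal sketch of what that single command looks like (the resource group, container name and region below are just placeholders), you could run a public sample image like this:

```bash
# Create a resource group (name and location are illustrative)
az group create --name rg-aci-demo --location westeurope

# Run a single container on demand - no VMs or orchestrator to manage
az container create \
  --resource-group rg-aci-demo \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 --memory 1.5 \
  --ports 80 \
  --ip-address Public \
  --restart-policy Never   # suits the "run once and disappear" use case

# Check the container state and pull its logs
az container show --resource-group rg-aci-demo --name hello-aci --query instanceView.state
az container logs --resource-group rg-aci-demo --name hello-aci
```

Once the task has finished, deleting the resource group removes everything, which is why ACI works so well for throwaway workloads.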

It's great for short-lived tasks and simple container groups. Good examples would be cron-style jobs, build workers, data processing pipelines, and dev/test experiments where you just want a container to run for a bit and then disappear.

Azure App Service (Web Apps for Containers)

Azure App Service (Web App for Containers) is a Platform-as-a-Service offering that lets you deploy web applications or APIs packaged as Docker containers, without managing the underlying servers.

This uses all of the features that you would normally see with App Service – you get deployment slots, auto-scaling, traffic routing, and integrated monitoring with Azure Monitor. The benefit of this is that it abstracts away the container management and focuses on developer productivity for web applications.

The main draw of App Service is familiarity with the product. It gives you predictable, reserved capacity and can be used to host HTTP APIs or websites where you don't want the overhead of Kubernetes, but do want features like deployment slots, built-in auth, easy custom domains, and built-in backup and integration.
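As a rough sketch (the plan, app and resource group names and the SKU are illustrative, and the resource group is assumed to already exist), creating a Web App for Containers from a public image looks something like this:

```bash
# Create a Linux App Service plan (name and SKU are illustrative)
az appservice plan create \
  --resource-group rg-webapp-demo \
  --name plan-containers \
  --is-linux \
  --sku P1V3

# Create a Web App for Containers from a public image
# (newer CLI versions use --container-image-name instead)
az webapp create \
  --resource-group rg-webapp-demo \
  --plan plan-containers \
  --name my-container-webapp \
  --deployment-container-image-name mcr.microsoft.com/azuredocs/aci-helloworld
```

From there, deployment slots, custom domains and auto-scale rules are configured exactly as they would be for any other App Service app.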

Azure Container Apps

Azure Container Apps is a fully managed container execution environment, designed specifically for microservices, APIs, and event-driven processing.

It abstracts away the Kubernetes infrastructure and provides a serverless experience for running containers – meaning you can run many containers that automatically scale in response to demand and even scale down to zero when idle.

Container Apps sits on top of Kubernetes (it runs on Azure’s internal K8s with open technologies like KEDA, Dapr, and Envoy) but as a developer you do not directly interact with Kubernetes objects. Instead, you define Container Apps and Azure handles placement, scaling, and routing.

Container Apps is an ideal place for running microservices, APIs and event-driven jobs where you don't want to manage Kubernetes, and want scale-to-zero so you only pay when there's traffic. It's a nice "middle ground" between App Service and full AKS.
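As an illustrative sketch (the names and location are placeholders, and the resource group is assumed to already exist), deploying a container app that can scale to zero could look like this:

```bash
# Create a Container Apps environment to host the app
az containerapp env create \
  --resource-group rg-aca-demo \
  --name aca-env-demo \
  --location westeurope

# Deploy a container app with external ingress that scales to zero when idle
az containerapp create \
  --resource-group rg-aca-demo \
  --name hello-containerapp \
  --environment aca-env-demo \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress external \
  --min-replicas 0 \
  --max-replicas 5
```

The `--min-replicas 0` setting is what gives you the serverless, pay-for-what-you-use behaviour described above.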

Azure Kubernetes Service (AKS)

We’re finally getting to the good stuff!!

Image Source – Microsoft

Azure Kubernetes Service (AKS) is Azure’s flagship container orchestration service, offering a fully managed Kubernetes cluster.

With AKS, you get the standard open-source Kubernetes experience (API, kubectl, and all) without having to run your own Kubernetes control plane – Azure manages the K8s master nodes (API servers, etc.) as a service.

You do manage the worker nodes (agent nodes) in terms of deciding their VM sizes, how many, and when to scale (though Azure can automate scaling).

In terms of ease-of-use, AKS has a steep learning curve if you’re new to containers, because Kubernetes itself is a complex system. Provisioning a cluster is quite easy (via Azure CLI or portal), but operating an AKS cluster effectively requires knowledge of Kubernetes concepts (pods, services, deployments, ingress controllers, config maps, etc.).
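For example, a minimal cluster with the autoscaler enabled might be provisioned like this (the resource group, cluster name, VM size and node counts are all illustrative):

```bash
# Create a small AKS cluster with the cluster autoscaler enabled
az aks create \
  --resource-group rg-aks-demo \
  --name aks-demo \
  --node-count 2 \
  --node-vm-size Standard_D2s_v5 \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 5 \
  --generate-ssh-keys

# Pull the cluster credentials and talk to it with standard kubectl
az aks get-credentials --resource-group rg-aks-demo --name aks-demo
kubectl get nodes
```

Provisioning is the easy part; the ongoing work is in the Kubernetes objects (deployments, services, ingress and so on) that you run on top of it.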

It’s less turn-key than the earlier services – you are stepping into the world of container orchestration with maximum flexibility. One of the main benefits of AKS is that it’s not an opinionated PaaS – it’s Kubernetes, so you can run any containerized workload with any configuration that Kubernetes allows.

Another reason for choosing AKS is that you can run it locally in your environment on an Azure Local cluster managed by Azure Arc.

The main reason for choosing AKS is running enterprise or large-scale workloads that need:

  • Full Kubernetes API control
  • Custom controllers, CRDs, service meshes, operators
  • Multi-tenant clusters or complex networking

If you’re already familiar with Kubernetes, this is usually the default choice.

Azure Red Hat OpenShift (ARO)

Azure Red Hat OpenShift (ARO) is a jointly managed offering by Microsoft and Red Hat that provides a fully managed OpenShift cluster on Azure.

OpenShift is Red Hat’s enterprise Kubernetes distribution that comes with additional tools and an opinionated setup (built on Kubernetes but including components for developers and operations). With ARO, Azure handles provisioning the OpenShift cluster (masters and workers) and critical management tasks, while Red Hat’s tooling is layered on top.

It’s a first-class Azure service, but under the covers, it’s Red Hat OpenShift Container Platform. In terms of ease-of-use: for teams already familiar with OpenShift, this is much easier than running OpenShift manually on Azure VMs. The service is managed, so tasks like patching the underlying OS, upgrading OpenShift versions, etc., are handled in coordination with Red Hat.

The use case for ARO comes down to whether you’re an OpenShift customer already, or need OpenShift’s enterprise features (built-in pipelines, operators, advanced multi-tenancy).
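As a hedged sketch of what provisioning looks like (all names and address ranges below are illustrative, and a Red Hat pull secret can optionally be supplied), an ARO cluster needs a virtual network with dedicated master and worker subnets before it can be created:

```bash
# ARO requires a virtual network with separate master and worker subnets
az network vnet create \
  --resource-group rg-aro-demo \
  --name vnet-aro \
  --address-prefixes 10.0.0.0/22

az network vnet subnet create \
  --resource-group rg-aro-demo \
  --vnet-name vnet-aro \
  --name master-subnet \
  --address-prefixes 10.0.0.0/23

az network vnet subnet create \
  --resource-group rg-aro-demo \
  --vnet-name vnet-aro \
  --name worker-subnet \
  --address-prefixes 10.0.2.0/23

# Create the managed OpenShift cluster (this can take ~35 minutes)
az aro create \
  --resource-group rg-aro-demo \
  --name aro-demo \
  --vnet vnet-aro \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet
```

After that, Microsoft and Red Hat jointly manage the cluster lifecycle, and you work with it through the standard OpenShift console and `oc` tooling.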

Azure Service Fabric

Service Fabric predates AKS and was Azure's first container orchestrator. I've never seen it out in the wild myself, but it deserves a mention here as it's still available as a container hosting platform on Azure.

It's a mature distributed systems platform from Microsoft, used internally for many Azure services (e.g., SQL DB, Event Hubs). It can orchestrate containers as well as traditional processes (called "guest executables"), and also supports a unique microservices programming model with stateful services and actors for high-throughput scenarios.

I’m not going to dive too deep into this topic, but the use case for this really is if you already have significant investment in Service Fabric APIs.

Third-party Kubernetes & container platforms on Azure

Beyond the native services above, you can also run a variety of third-party platforms on Azure:

  • Kubernetes distributions on Azure VMs: VMware Tanzu Kubernetes Grid, Rancher, Canonical Kubernetes, etc., deployed directly onto Azure VMs.
  • Azure VMware Solution + Tanzu: run vSphere with Tanzu or Tanzu Kubernetes Grid on Azure VMware Solution (AVS) and integrate with Azure native services.

There are a number of reasons for ignoring the native Azure services and going for a “self-managed” model:

  • If you need a feature that AKS/ARO doesn’t provide (e.g., custom Kubernetes version or different orchestrator, or multi-cloud control plane).
  • If you want to avoid cloud vendor lock-in at the orchestration layer (some companies choose BYO Kubernetes to not depend on AKS specifics).
  • If your organization already invested in those tools (e.g., they use Rancher to manage clusters across AWS, on-prem and also want to include Azure).
  • If you have an on-prem extension scenario: e.g., using VMware Tanzu in private cloud and replicating environment in Azure via AVS to have consistency and easy migration of workloads.
  • Or if you require extreme custom control: e.g., specialized network plugins or kernel settings that AKS might not allow.

Comparison Summary

Let's take a quick comparison summary, where you can see at a glance the ease of use, hosting model, cost model and use cases of each service:

| Option | Ease of Use | Hosting Model | Cost Model | Best For |
| --- | --- | --- | --- | --- |
| Azure Container Instances | Very high | Serverless | Pay per second of CPU/memory; no idle cost | Quick tasks, burst workloads, dev/test, simple APIs |
| Azure App Service | High | PaaS | Fixed cost per instance (scaled out); always-on cost for one or more instances | Web apps and APIs needing zero cluster management, CI/CD integration and auto-scaling |
| Azure Container Apps | Moderate | Serverless | Pay for resources consumed (consumption model), plus optional reserved capacity; idle = zero cost | Microservice architectures, event-driven processing and variable workloads where automatic scale and cost-efficiency are key |
| Azure Kubernetes Service (AKS) | Low for beginners; moderate for Kubernetes-proficient teams | Managed Kubernetes (IaaS + PaaS mix) | Pay for the VMs (nodes); control plane is free on the Free tier, with a small per-cluster charge on the Standard tier | Complex, large or custom container deployments |
| Azure Red Hat OpenShift (ARO) | Moderate/low – easy for OpenShift experts, but more complex than AKS for pure Kubernetes users | Managed OpenShift (enterprise Kubernetes) | Pay for VMs plus the Red Hat surcharge; higher baseline cost than AKS | Organisations requiring OpenShift's features (built-in CI, catalog, stricter multi-tenancy) or who run OpenShift on-premises and want cloud parity |
| Azure Service Fabric | Low – steep learning curve | IaaS (user-managed VMs) with a PaaS runtime | Pay for VMs; no automatic scaling – you manage the cluster size | Stateful, low-latency microservices or mixed workloads (containers + processes); teams already invested in Service Fabric |

Conclusion

As we can see above, Azure offers a rich spectrum of container hosting options.
Serverless and PaaS options cover most workloads with minimal ops overhead, while managed Kubernetes and third-party platforms unlock maximum flexibility at higher complexity.

In my opinion, the best approach is to base the decision on business needs and the core knowledge that exists within your team. Use managed and/or serverless options by default; move to Kubernetes only when needed.

You can use the decision tree shown below as an easy reference to make the decision based on the workload you wish to run.

Image Source – Microsoft

I hope this blog post was useful! For a deeper dive, you can find the official Microsoft guide for choosing a Container hosting service at this link.

100 Days of Cloud – Day 99: Microsoft Build 2022

It's Day 99 of my 100 Days of Cloud journey, and in today's post we'll take a quick look at some of the announcements coming out of Microsoft Build.

Microsoft Build is an annual event that is primarily focused on the development side of the Microsoft ecosystem; however, like all Microsoft events, there are normally some really cool announcements around new technologies and updates to existing ones.

I’m going to focus particularly on updates to the technologies that I’ve blogged about over the last 99 days! In effect, I’m providing some updates to the blog posts so that if you’ve followed me on the journey this far, you’ll get to here and have the latest news and features!

Azure Container Apps

Azure Container Apps is now Generally Available. This enables you to run microservices and containerized apps on a serverless platform.

Common uses of Azure Container Apps include:

  • Deploying API endpoints
  • Hosting background processing applications
  • Handling event-driven processing
  • Running microservices

Applications built on Azure Container Apps can dynamically scale based on the following characteristics:

  • HTTP traffic
  • Event-driven processing
  • CPU or memory load
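As a rough sketch of the HTTP case (the app and resource group names and the concurrency threshold are illustrative, and the app is assumed to already exist), an HTTP-based scale rule can be applied to a container app like this:

```bash
# Scale an existing container app between 0 and 10 replicas
# based on concurrent HTTP requests (values are illustrative)
az containerapp update \
  --resource-group rg-aca-demo \
  --name hello-containerapp \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name http-scaling \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
```

Event-driven and CPU/memory-based scaling work the same way, using KEDA scale rules under the hood.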

We looked at Azure Container Instances on Day 82. The key differences between the two are:

  • If you need to spin up multiple containers (e.g. front end / back end / database), Azure Container Apps is a better choice, as it comes with Dapr (Distributed Application Runtime), which will automatically retry requests and add telemetry data.
  • If you just need long-running jobs, or you don't need multiple containers to communicate with each other, you can go with Azure Container Instances.

You can check out the blog post announcement here, and the official Microsoft Docs page here for more information.

Azure Cosmos DB

We looked at Azure Cosmos DB back on Day 64 and learned that it is a fully managed NoSQL database that provides highly available, globally distributed access to data with very low latency. There are a number of APIs to choose from to best meet the needs of your database requirements.

Some of the new features announced for Cosmos DB are:

  • Increased serverless capacity to 1 TB.
  • Shared throughput across database partitions.
  • Support for hierarchical partition keys.
  • An improved 30-day free trial experience, now generally available, and support for MongoDB data in the Azure Cosmos DB Linux desktop emulator.
  • A new, free, continuous backup and point-in-time restore capability enables seven-day data recovery and restoration from accidental deletes.
  • Role-based access control support for Azure Cosmos DB API for MongoDB offers enhanced security.

You can find out more about the Cosmos DB enhancements here.

Azure Stack HCI

It's timely that we only looked at Azure Stack HCI on Day 95 and noted that your Azure Stack HCI cluster can contain between 2 and 16 physical servers.

The new single node Azure Stack HCI, now generally available, fulfills the growing needs of customers in remote locations while maintaining the innovation of native integration with Azure Arc. It offers customers the flexibility to deploy the stack in smaller spaces and with less processing needs, optimizing resources while still delivering quality and consistency.

Additional benefits include:

  • Smaller Azure Stack HCI solutions for environments with physical space constraints or that do not require built-in resiliency, like retail stores and branch offices.
  • A smaller footprint to reduce hardware and operational costs.
  • The same scale applies, so you can start at 1 and scale up to 16 nodes if required.

You can find out more about the Azure Stack HCI announcement here.

Azure Migrate

On Day 18 we looked at Azure Migrate, which is an Azure service that automates the planning and migration of your on-premises servers from Hyper-V, VMware or physical server environments.

Enhancements to the service now streamline and simplify cloud migration and modernization:

  • Agentless discovery and grouping of dependent Hyper-V virtual machines (VMs) and physical servers to ensure all required components are identified and included during a move to Azure. This feature is generally available.
  • Azure SQL assessment improvements for better customer experience. Assessments now include recommendations for SQL Server on Azure VMs and support for Hyper-V VMs and physical stacks, along with already existing assessments for Azure SQL Managed Instance and Azure SQL Database. This feature is in preview.
  • Pause and resume of migration function has been included to provide control over the migration window. This mechanism can be used to schedule migrations during off-peak periods. This feature is in preview.
  • Discovery, assessment and modernization of ASP.NET web apps to native Azure App Service. Customers can discover and modernize an ASP.NET web app to run in containers on Azure Kubernetes Service (AKS) or App Service, and discover Java apps running on Apache Tomcat.

Conclusion

So that's a quick rundown of the main updates from Microsoft Build. You can find information on all of the updates that were released in the Microsoft Build Book of News here, and it's also not too late to register and watch some of the recorded and on-demand sessions from Microsoft Build by signing up here.

As with all Microsoft conferences, there's a Cloud Skills Challenge, and you have until June 21st to sign up and complete the modules from one of the 8 challenges available. As always, you can earn a free certification exam pass if you complete the challenge! You can sign up here, and the list of rules and eligible exams is here!

Hope you enjoyed this post, until next time!