AKS Networking – Which model should you choose?

In the previous post, we broke down AKS Architecture Fundamentals — control plane vs data plane, node pools, availability zones, and early production guardrails.

Now we move into one of the most consequential design areas in any AKS deployment:

Networking.

If node pools define where workloads run, networking defines how they communicate — internally, externally, and across environments.

Unlike VM sizes or replica counts, networking decisions are difficult to change later. They shape IP planning, security boundaries, hybrid connectivity, and how your platform evolves over time.

This post takes a look at AKS networking by exploring:

  • The modern networking options available in AKS
  • Trade-offs between Azure CNI Overlay and Azure CNI Node Subnet
  • How networking decisions influence node pool sizing and scaling
  • How the control plane communicates with the data plane

Why Networking in AKS Is Different

With traditional IaaS and PaaS services in Azure, networking is straightforward: a VM or resource gets an IP address in a subnet.

With Kubernetes, things become layered:

  • Nodes have IP addresses
  • Pods have IP addresses
  • Services abstract pod endpoints
  • Ingress controls external access

AKS integrates all of this into an Azure Virtual Network. That means Kubernetes networking decisions directly impact:

  • IP address planning
  • Subnet sizing
  • Security boundaries
  • Peering and hybrid connectivity

In production, networking is not just connectivity — it’s architecture.


The Modern AKS Networking Choices

Although some legacy models are still available, if you deploy an AKS cluster through the Portal you will see that AKS now offers two main networking approaches:

  • Azure CNI Node Subnet (flat network model)
  • Azure CNI Overlay (pod overlay networking)

As their names suggest, both use Azure CNI. The difference lies in how pod IP addresses are assigned and routed. Understanding this distinction is essential before you size node pools or define scaling limits.


Azure CNI Node Subnet

This is the traditional Azure CNI model.

Pods receive IP addresses directly from the Azure subnet. From the network’s perspective, pods appear as first-class citizens inside your VNet.

How It Works

Each node consumes IP addresses from the subnet. Each pod scheduled onto that node also consumes an IP from the same subnet. Pods are directly routable across VNets, peered networks, and hybrid connections.

This creates a flat, highly transparent network model.
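As a minimal Azure CLI sketch of what a Node Subnet deployment looks like (the resource names, address ranges and node counts below are illustrative assumptions, not recommendations):

```bash
# A VNet and a dedicated subnet for the cluster. With Node Subnet networking
# this range has to hold node IPs AND pod IPs, so it is sized generously.
az network vnet create \
  --resource-group rg-aks-demo \
  --name vnet-aks \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name snet-aks-nodes \
  --subnet-prefixes 10.1.0.0/20

SUBNET_ID=$(az network vnet subnet show \
  --resource-group rg-aks-demo \
  --vnet-name vnet-aks \
  --name snet-aks-nodes \
  --query id -o tsv)

# Azure CNI Node Subnet: --network-plugin azure with no overlay mode,
# so every pod takes an IP directly from snet-aks-nodes.
az aks create \
  --resource-group rg-aks-demo \
  --name aks-nodesubnet \
  --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID" \
  --max-pods 30 \
  --node-count 3 \
  --generate-ssh-keys
```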

Why teams choose it

This model aligns naturally with enterprise networking expectations. Security appliances, firewalls, and monitoring tools can see pod IPs directly. Routing is predictable, and hybrid connectivity is straightforward.

If your environment already relies on network inspection, segmentation, or private connectivity, this model integrates cleanly.

Pros

  • Native VNet integration
  • Simple routing and peering
  • Easier integration with existing network appliances
  • Straightforward hybrid connectivity scenarios
  • Cleaner alignment with enterprise security tooling

Cons

  • High IP consumption
  • Requires careful subnet sizing
  • Can exhaust address space quickly in large clusters

Trade-offs to consider

The trade-off is IP consumption. Every pod consumes a VNet IP. In large clusters, address space can be exhausted faster than expected. Subnet sizing must account for:

  • node count
  • maximum pods per node
  • autoscaling limits
  • upgrade surge capacity

This model rewards careful planning and penalises underestimation.

Impact on node pool sizing

With Node Subnet networking, node pool scaling directly consumes IP space.

If a user node pool scales out aggressively and each node supports 30 pods, IP usage grows rapidly. A cluster designed for 100 nodes may require thousands of available IP addresses.
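To put rough numbers on that, here is a back-of-the-envelope sketch. It reflects how traditional Azure CNI reserves one IP per node plus one per potential pod; the node count, pod density and surge allowance are illustrative assumptions.

```bash
# Rough Node Subnet IP estimate - all numbers are illustrative assumptions.
MAX_NODES=100          # autoscaler ceiling for the user node pool
MAX_PODS_PER_NODE=30   # the --max-pods setting
SURGE_NODES=10         # extra nodes that can appear during upgrades

# Each node consumes one IP for itself plus MAX_PODS_PER_NODE reserved pod IPs.
REQUIRED_IPS=$(( (MAX_NODES + SURGE_NODES) * (MAX_PODS_PER_NODE + 1) ))
echo "Plan for at least ${REQUIRED_IPS} IPs"   # 3410, so a /20 (4096 addresses) is a sensible floor
```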

System node pools remain smaller, but they still require headroom for upgrades and system pod scheduling.


Azure CNI Overlay

Azure CNI Overlay is designed to address IP exhaustion challenges while retaining Azure CNI integration.

Pods receive IP addresses from an internal Kubernetes-managed range, not directly from the Azure subnet. Only nodes consume Azure VNet IP addresses.

How It Works

Nodes are addressable within the VNet. Pods use an internal overlay CIDR range. Traffic is routed between nodes, with encapsulation handling pod communication.

From the VNet’s perspective, only nodes consume IP addresses.
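For comparison, here is a hedged sketch of the Overlay equivalent, reusing the illustrative names from the earlier example (the pod CIDR is also an assumption). The key differences are --network-plugin-mode overlay and a pod range that lives outside the VNet:

```bash
# Azure CNI Overlay: nodes still take IPs from the VNet subnet, but pods draw
# from a private overlay range that never consumes VNet address space.
az aks create \
  --resource-group rg-aks-demo \
  --name aks-overlay \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --vnet-subnet-id "$SUBNET_ID" \
  --max-pods 30 \
  --node-count 3 \
  --generate-ssh-keys
```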

Why teams choose it

Overlay networking dramatically reduces pressure on Azure subnet address space. This makes it especially attractive in environments where:

  • IP ranges are constrained
  • multiple clusters share network space
  • growth projections are uncertain

It allows clusters to scale without re-architecting network address ranges.

Pros

  • Significantly lower Azure IP consumption
  • Simpler subnet sizing
  • Useful in environments with constrained IP ranges

Cons

  • More complex routing
  • Less transparent network visibility
  • Additional configuration required for advanced scenarios
  • Not ideal for large-scale enterprise integration

Trade-offs to consider

Overlay networking introduces an additional routing layer. While largely transparent, it can add complexity when integrating with deep packet inspection, advanced network appliances, or highly customised routing scenarios.

For most modern workloads, however, this complexity is manageable and increasingly common.

Impact on node pool sizing

Because pods no longer consume VNet IP addresses, node pool scaling pressure shifts away from subnet size. This provides greater flexibility when designing large user node pools or burst scaling scenarios.

However, node count, autoscaler limits, and upgrade surge requirements still influence subnet sizing.
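Those node-level factors are configured per node pool. A hedged example of adding a user pool with autoscaling and upgrade surge settings (the pool name and limits are assumptions):

```bash
# User node pool with autoscaling and surge settings. In Overlay mode only the
# node IPs here count against the subnet; the pods do not.
az aks nodepool add \
  --resource-group rg-aks-demo \
  --cluster-name aks-overlay \
  --name userpool1 \
  --mode User \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 20 \
  --max-surge 33% \
  --max-pods 30
```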


Choosing Between Overlay and Node Subnet

Here are the TL;DR considerations when choosing between the two networking models:

  • If deep network visibility, firewall inspection, and hybrid routing transparency are primary drivers, Node Subnet networking remains compelling.
  • If address space constraints, growth flexibility, and cluster density are primary concerns, Overlay networking provides significant advantages.
  • Most organisations adopting AKS at scale are moving toward overlay networking unless specific networking requirements dictate otherwise.

How Networking Impacts Node Pool Design

Let’s connect this back to the last post, where we said that node pools are not just compute boundaries — they are networking consumption boundaries.

System Node Pools

System node pools:

  • Host core Kubernetes components
  • Require stability more than scale

From a networking perspective:

  • They should be small
  • They should be predictable in IP consumption
  • They must allow for upgrade surge capacity

If using Azure CNI Node Subnet networking, ensure sufficient IP headroom for control plane-driven scaling operations.

User Node Pools

User node pools are where networking pressure increases. Consider:

  • Maximum pods per node
  • Horizontal Pod Autoscaler behaviour
  • Node autoscaling limits

In Azure CNI Node Subnet environments, every one of those pods consumes an IP. If you design for 100 nodes with 30 pods each, that is 3,000 pod IPs — plus node IPs. Subnet planning must reflect worst-case scale, not average load.

In Azure CNI Overlay environments, the pressure shifts away from Azure subnets — but routing complexity increases.

Either way, node pool design and networking are a single architectural decision, not two separate ones.


Control Plane Networking and Security

One area that is often misunderstood is how the control plane communicates with the data plane, and how administrators securely interact with the cluster.

The Kubernetes API server is the central control surface. Every action — whether from kubectl, CI/CD pipelines, GitOps tooling, or the Azure Portal — ultimately flows through this endpoint.

In AKS, the control plane is managed by Azure and exposed through a secure endpoint. How that endpoint is exposed defines the cluster’s security posture.

Public Cluster Architecture

By default, AKS clusters expose a public API endpoint secured with authentication, TLS, and RBAC.

This does not mean the cluster is open to the internet. Access can be restricted using authorized IP ranges and Azure AD authentication.

Image: Microsoft/Houssem Dellai

Key characteristics:

  • API endpoint is internet-accessible but secured
  • Access can be restricted via authorized IP ranges
  • Nodes communicate outbound to the control plane
  • No inbound connectivity to nodes is required

This model is common in smaller environments or where operational simplicity is preferred.
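As a hedged example, authorized IP ranges can be applied to an existing cluster with a single command (the ranges below are documentation placeholders):

```bash
# Restrict the public API server endpoint to known source ranges,
# e.g. an office egress range and a CI/CD runner address (placeholders).
az aks update \
  --resource-group rg-aks-demo \
  --name aks-overlay \
  --api-server-authorized-ip-ranges 203.0.113.0/24,198.51.100.10/32
```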

Private Cluster Architecture

In a private AKS cluster, the API server is exposed via a private endpoint inside your VNet.

Image: Microsoft/Houssem Dellai

Administrative access requires private connectivity such as:

  • VPN
  • ExpressRoute
  • Azure Bastion or jump hosts

Key characteristics:

  • API server is not exposed to the public internet
  • Access is restricted to private networks
  • Reduced attack surface
  • Preferred for regulated or enterprise environments
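Making the API server private is a deployment-time decision. A minimal sketch, assuming the same illustrative names as earlier (private DNS zone handling is left to defaults here):

```bash
# Private cluster: the API server is published through a private endpoint in
# the VNet, so kubectl access needs VPN, ExpressRoute, Bastion or a jump host.
az aks create \
  --resource-group rg-aks-demo \
  --name aks-private \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --vnet-subnet-id "$SUBNET_ID" \
  --enable-private-cluster \
  --generate-ssh-keys
```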

Control Plane to Data Plane Communication

Regardless of public or private mode, communication between the control plane and the nodes follows the same secure pattern.

The kubelet running on each node establishes an outbound, mutually authenticated connection to the API server.

This design has important security implications:

  • Nodes do not require inbound internet exposure
  • Firewall rules can enforce outbound-only communication
  • Control plane connectivity remains encrypted and authenticated

This outbound-only model is a key reason AKS clusters can operate securely inside tightly controlled network environments.
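If cluster egress is forced through Azure Firewall, Microsoft publishes an AzureKubernetesService FQDN tag that covers the endpoints the nodes need to reach outbound. A hedged sketch (the firewall and rule names are assumptions, and the command relies on the azure-firewall CLI extension):

```bash
# Allow outbound-only traffic from the node subnet to the FQDNs AKS requires,
# using the AzureKubernetesService FQDN tag on Azure Firewall.
az network firewall application-rule create \
  --resource-group rg-aks-demo \
  --firewall-name fw-hub \
  --collection-name aks-egress \
  --name allow-aks-required \
  --action Allow \
  --priority 100 \
  --protocols Http=80 Https=443 \
  --source-addresses 10.1.0.0/20 \
  --fqdn-tags AzureKubernetesService
```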

Common Networking Pitfalls in AKS

Networking issues rarely appear during initial deployment. They surface later when scaling, integrating, or securing the platform. Typical pitfalls include:

  • subnets sized for today rather than future growth
  • no IP headroom for node surge during upgrades
  • lack of outbound traffic control
  • exposing the API server publicly without restrictions

These issues tend to surface months down the line, just as scaling becomes necessary, and by then the fix usually means re-planning address space on a live platform.


Aligning Networking with the Azure Well-Architected Framework

  • Operational Excellence improves when networking is designed for observability, integration, and predictable growth.
  • Reliability depends on zone-aware node pools, resilient ingress, and stable outbound connectivity.
  • Security is strengthened through private clusters, controlled egress, and network policy enforcement.
  • Cost Optimisation emerges from correct IP planning, right-sized ingress capacity, and avoiding rework caused by subnet exhaustion.

Making the right (or wrong) networking decisions in the design phase has an effect across each of these pillars.


What Comes Next

At this point in the series, we now understand:

  • Why Kubernetes exists
  • How AKS architecture is structured
  • How networking choices shape production readiness

In the next post, we’ll stay on the networking theme and take a look at Ingress and Egress traffic flows. See you then!

Top Highlights from Microsoft Ignite 2024: Key Azure Announcements

This year, Microsoft Ignite was held in Chicago for in-person attendees, as well as virtually with key sessions live streamed. As usual, the Book of News was released to show the key announcements, and you can find that at this link.

From a personal standpoint, the Book of News was disappointing, as at first glance there seemed to be very few key announcements and enhancements for core Azure Infrastructure and Networking.

However, there were some really great reveals announced at various sessions throughout Ignite, and I’ve picked out some of the ones that impressed me.

Azure Local

Azure Stack HCI is no more ….. it is being renamed to Azure Local. The new name makes a lot more sense: Azure-managed appliances deployed locally, but still managed from Azure via Arc.

So, it’s just a rename, right? Wrong! The previous iteration was tied to specific hardware with high costs. Azure Local now brings low-spec and low-cost options to the table. You can also use Azure Local in disconnected mode.

More info can be found in this blog post and in this YouTube video.

Azure Migrate Enhancements

Azure Migrate is a product that has badly needed some improvements and enhancements, given the capabilities that some of its competitors in the market offer.

The arrival of a Business case option enables customers to create a detailed comparison of the Total Cost of Ownership (TCO) for their on-premises estate versus the TCO on Azure, along with a year-on-year cash flow analysis as they transition their workloads to Azure. More details on that here.

There was also an announcement during the Ignite session about a tool called “Azure Migrate Explore”, which looks like it provides a ready-made Business case PPT template generator that can be used to present cases to C-level. I haven’t seen this released yet, but it’s one to look out for.

Finally, one that may have been missed a few months ago – given the current need for customers to migrate from VMware on-premises deployments to Azure VMware Solution (which is already built into Azure Migrate via either the Appliance or an RVTools import), it’s good to see a preview feature for a direct path from VMware to Azure Stack HCI (or Azure Local – see above). This is a step forward for customers who need to keep their workloads on-premises for things like data residency requirements, while also getting the power of Azure management. More details on that one here.

Azure Network Security Perimeter

I must admit, this one confused me a little bit at first glance but makes sense now.

Network Security Perimeter allows organizations to define a logical network isolation boundary for PaaS resources (for example, an Azure Storage account or SQL Database server) that are deployed outside your organization’s virtual networks.

So, we’re talking about services that are either deployed outside of a VNet (for whatever reason) or are using SKUs that do not support VNet integration.

More info can be found here.

Azure Bastion Premium

This has been in preview for a while but is now GA – Azure Bastion Premium offers enhanced security features such as private connectivity and graphical recordings of virtual machines connected through Bastion.

These features ensure customer virtual machines are connected securely and that VMs are monitored for any anomalies that may arise.

More info can be found here.

Security Copilot integration with Azure Firewall

The intelligence of Security Copilot is being integrated with Azure Firewall, which will help analysts perform detailed investigations of the malicious traffic intercepted by the IDPS feature of their firewalls across their entire fleet using natural language questions. These capabilities were launched on the Security Copilot portal and now are being integrated even more closely with Azure Firewall.

The following capabilities can now be queried via the Copilot in Azure experience directly on the Azure portal where customers regularly interact with their Azure Firewalls: 

  • Generate recommendations to secure your environment using Azure Firewall’s IDPS feature
  • Retrieve the top IDPS signature hits for an Azure Firewall 
  • Enrich the threat profile of an IDPS signature beyond log information 
  • Look for a given IDPS signature across your tenant, subscription, or resource group 

More details on these features can be found here.

DNSSEC for Azure DNS

I was surprised by this announcement – maybe I had assumed it was already there, as it has been available as an AD DNS feature for quite some time. Good to see that it’s made its way up to Azure.

Key benefits are:

  • Enhanced Security: DNSSEC helps prevent attackers from manipulating or poisoning DNS responses, ensuring that users are directed to the correct websites. 
  • Data Integrity: By signing DNS data, DNSSEC ensures that the information received from a DNS query has not been altered in transit. 
  • Trust and Authenticity: DNSSEC provides a chain of trust from the root DNS servers down to your domain, verifying the authenticity of DNS data. 

More info on DNSSEC for Azure DNS can be found here.

Azure Confidential Clean Rooms

Some fella called Mark Russinovich was talking about this. And when that man talks, you listen.

Designed for secure multi-party data collaboration, with Confidential Clean Rooms, you can share privacy sensitive data such as personally identifiable information (PII), protected health information (PHI) and cryptographic secrets confidently, thanks to robust trust guarantees that safeguard your data throughout its lifecycle from other collaborators and from Azure operators.

This secure data sharing is powered by confidential computing, which protects data in-use by performing computations in hardware-based, attested Trusted Execution Environments (TEEs). These TEEs help prevent unauthorized access or modification of application code and data during use. 

More info can be found here.

Azure Extended Zones

It’s good to see this feature going into GA; hopefully it will provide a pathway for future AEZs in other locations.

Azure Extended Zones are small-footprint extensions of Azure placed in metros, industry centers, or a specific jurisdiction to serve low latency and data residency workloads. They support virtual machines (VMs), containers, storage, and a selected set of Azure services and can run latency-sensitive and throughput-intensive applications close to end users and within approved data residency boundaries. More details here.

.NET 9

Final one, and slightly cheating here as this was announced at KubeCon the week before – .NET 9 has been announced. Note that this is an STS release with an expiry of May 2026. .NET 8 is the current LTS version with an end-of-support date of November 2026 (details on lifecycles for .NET versions here).

Link to the full release announcement for .NET 9 (including a link to the KubeCon keynote) can be found here.

Conclusion

It’s good to see that in the firehose of announcements around AI and Copilot, there are still some really good enhancements and improvements coming out for Azure services.

Azure Networking Zero to Hero – Intro and Azure Virtual Networks

Welcome to another blog series!

This time out, I’m going to focus on Azure Networking, which covers a wide range of topics and services that make up the various networking capabilities available within both Azure cloud and hybrid environments. Yes I could have done something about AI, but for those of you who know me, I’m a fan of the classics!

The intention is for this blog series to serve both as a starting point for anyone new to Azure Networking who is looking to start a learning journey towards the AZ-700 certification, and as an easy reference point for anyone looking for a list of blogs specific to the wide scope of services available in the Azure Networking family.

There isn’t going to be a set number of blog posts or “days” – I’m just going to run with this one and see what happens! So with that, let’s kick off with our first topic, which is Virtual Networks.

Azure Virtual Networks

So let’s start with the elephant in the room. Yes, I have written a blog post about Azure Virtual Networks before – two of them actually, as part of my “100 Days of Cloud” blog series; you’ll find Part 1 and Part 2 at these links.

Great, so that’s today’s blog post sorted!!! Until next ti …… OK, I’m joking – it’s always good to revise and revisit.

After a Resource Group, a virtual network is likely to be the first actual resource that you create. Create a VM, Database or Web App, and the first piece of information it asks you for is what Virtual Network to place your resource in.

But of course if you’ve done it that way, you’ve done it backwards because you really should have planned your virtual network and what was going to be in it first! A virtual network acts as a private address space for a specific set of resource groups or resources in Azure. As a reminder, a virtual network contains:

  • Subnets, which allow you to break the virtual network into one or more dedicated address spaces or segments, which can be different sizes based on the requirements of the resource type you’ll be placing in that subnet.
  • Routing, which routes traffic and creates a routing table. This means data is delivered using the most suitable and shortest available path from source to destination.
  • Network Security Groups, which can be used to filter traffic to and from resources in an Azure Virtual Network. It’s not a Firewall, but it works like one in a more targeted sense, in that you can manage traffic flow for individual virtual networks, subnets, and network interfaces to refine traffic.

A lot of wordy goodness there, but the easiest way to illustrate this is using a good old diagram!

Let’s do a quick overview:

  • We have 2 Resource Groups using a typical Hub and Spoke model, where the Hub contains our Application Gateway and Firewall, and our Spoke contains our Application components. The red lines indicate peering between the virtual networks so that they can communicate with each other.
  • Let’s focus on the Spoke resource group – the virtual network has an address space of 10.1.0.0/16 defined.
  • This is then split into different subnets where each of the components of the Application resides. Each subnet has an NSG attached which can control traffic flow to and from different subnets. So in this example, the ingress traffic coming into the Application Gateway would then be allowed to pass into the API Management subnet by setting allow rules on the NSG.
  • The other thing we see attached to the virtual network is a Route Table – we can use this to define where traffic from specific sources is sent. We can use System Routes, which are automatically built into Azure, or Custom Routes, which can be user defined or learned via BGP across VPN or ExpressRoute services. The idea in our diagram is that all traffic will be routed back to Azure Firewall for inspection before being forwarded to the next destination, which can be another peered virtual network, across a VPN to an on-premises/hybrid location, or straight out to an internet destination. (A minimal CLI sketch of these spoke components follows below.)
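Here is that sketch, assuming illustrative names, address ranges, and a hub firewall private IP of 10.0.2.4 (all placeholders, not a recommended layout):

```bash
# Spoke virtual network with a subnet for the API Management tier.
az network vnet create \
  --resource-group rg-spoke \
  --name vnet-spoke \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name snet-apim \
  --subnet-prefixes 10.1.1.0/24

# NSG on the subnet, allowing inbound traffic from the hub's
# Application Gateway subnet (placeholder range).
az network nsg create --resource-group rg-spoke --name nsg-apim
az network nsg rule create \
  --resource-group rg-spoke \
  --nsg-name nsg-apim \
  --name Allow-AppGw-Inbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 443
az network vnet subnet update \
  --resource-group rg-spoke \
  --vnet-name vnet-spoke \
  --name snet-apim \
  --network-security-group nsg-apim

# Route table sending all traffic back through the hub firewall for inspection.
az network route-table create --resource-group rg-spoke --name rt-spoke
az network route-table route create \
  --resource-group rg-spoke \
  --route-table-name rt-spoke \
  --name default-via-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4
az network vnet subnet update \
  --resource-group rg-spoke \
  --vnet-name vnet-spoke \
  --name snet-apim \
  --route-table rt-spoke
```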

Final thoughts

Some important things to note on Virtual Networks:

  • Planning is everything – before you even deploy your first resource group, make sure you have your virtual networks defined, sized and mapped out for what you’re going to use them for. Always include scaling, expansion and future planning in those decisions.
  • Virtual Networks reside in a single resource group, but you technically can assign addresses from subnets in your virtual network to resources that reside in different resource groups. Not really a good idea though – try to keep your networking and resources confined within resource group and location boundaries.
  • NSGs take a Zero-Trust approach to traffic from outside the virtual network, so nothing gets in unless you define the rules. Rules are processed in order of priority (the lowest-numbered rule is processed first), so you build your own rules on top of the default ones (for example, allowing RDP and SSH access if not already in place).

Hope you enjoyed this post, until next time!!

Every new beginning comes from some other beginning’s end – a quick review of 2023

Today is a bit of a “dud day” – post Xmas, post birthdays (mine and my son’s), but before the start of a New Year and the inevitable return to work.

So, it’s a day for planning for 2024. And naturally, any planning requires some reflection and a look back on what I achieved over the last year.

Highlights from 2023

If I’m being honest my head was in a bit of a spin at the start of 2023. I was coming off the high of submitting my first pre-recorded content session to Festive Tech Calendar, but also in the back of my mind I knew a change was coming as I’d made the decision to change jobs.

I posted the list of goals above on LinkedIn and Twitter (when it was still called that…) on January 2nd, so let’s see how I did:

  • Present at both a Conference and User Group – check!
  • Mentor others, work towards MCT – Mentoring was one of the most fulfilling activities I undertook over the last year: the ability to connect with people in the community who need help, advice or just an outsider’s view. It’s something I would recommend to anyone. I also learned that mentoring and training are not connected (I may look at the MCT in 2024) – mentoring is more about asking the right questions, being on the same wavelength as your mentees, and understanding their goals to ensure you are aligning and advising them on the correct path.
  • Go deep on Azure Security, DevOps and DevOps Practices – starting a new job this year with a company that is DevSecOps and IAC focused was definitely a massive learning curve and one that I thoroughly enjoyed!
  • AZ-400 and SC-100 Certs – nope! The one certification I passed this year was AZ-500, but to follow on from the previous point, it’s not all about exams and certifications. I’d feel more confident having a go at the AZ-400 exam now that I have nearly a year’s experience in DevOps, and it’s something I’ve been saying for a while now – hiring teams aren’t (well, they shouldn’t be!) interested in tons of certifications; they want to see actual experience in the subject which backs the certification.
  • Create Tech Content – check! I was fortunate to be able to submit sessions to both online events and also present live at Global Azure Dublin and South Coast Summit this year. It was also the year when my first LinkedIn Learning course was published (shameless plug, check it out at this link).
  • Run Half Marathon – Sadly no to this one. I made a few attempts and was a week away from my first half-marathon back in March when my knee decided to give up the ghost. Due to work and family commitments, I never returned to this, but it’s back on the list for 2024.
  • Get back to reading books to relax – This is something we all need to do, turn off that screen at night and find time to relax. I’ve done a mix of Tech and Fiction books and hope to continue this trend for 2024.

By far though, the biggest thing to happen for me this year was when this email landed in my inbox on April Fools Day …..

I thought it was an April Fools joke. And if my head was spinning, you can imagine how fast it was spinning now!

For anyone involved in Microsoft technologies or solutions, being awarded the MVP title is a dream that we all aspire to. It’s recognition from Microsoft that you are not only a subject matter expert in your field, but someone who is looked up to by other community members for content. If we look at the official definition from Microsoft:

The Microsoft Most Valuable Professionals (MVP) program recognizes exceptional community leaders for their technical expertise, leadership, speaking experience, online influence, and commitment to solving real world problems.

I’m honoured to be part of this group, getting to know people that I looked up to and still look up to, who push me to be a better person each and every day.

Onwards to 2024!

So what are my goals for 2024? Well unlike last year where I explicitly said what I was going to do and declared it, this year is different as I’m not entirely sure. But ultimately, it boils down to 3 main questions:

  • What are my community goals?

The first goal is to do enough to maintain and renew my MVP status for another year. I hope I’ve done enough and will keep working up to the deadline, but you never really know! I have another blog post in the works where I’ll talk about the MVP award, what it’s meant to me and some general advice from my experiences of my first year of the award.

I’ve gotten the bug for Public Speaking and want to submit some more sessions to conferences and user groups over the next year. So I plan to submit to some CFSs, but if anyone wants to have me at a user group, please get in touch!

I’ve enjoyed mentoring others on their journey, and the fact that they keep coming back means that the mentees have found me useful as well!

Blogging – this is my 3rd blog post of the year, and my last one was in March! I want to get some consistency back into blogging as it’s something I enjoy doing.

  • What are my learning goals?

I think like everyone, the last 12 months have been a whirlwind of Copilots and AI. I plan to immerse myself in that over the coming year, while also growing my knowledge of Azure. Another goal is to learn some Power Platform – its a topic I know very little about, but want to know more! After that, the exams and the certs will come!

  • What are my personal goals?

So unlike last year, I’m not going to declare that I’ll do a half marathon – at least not in public! The plan is to keep reading both tech and fiction books, keep making some time for myself, and to make the most of my time with my family. Because despite how much the job and the community pulls you back in, there is nothing more important and you’ll never have enough family time.

So that’s all from me for 2023 – you’ll be hearing from me again in 2024! Hope you’ve all had a good holiday, and Happy New Year to all!

MFA and Conditional Access alone won’t save us from Threat Actors

At the end of a week where we have had 2 very different incidents at high-profile organisations across the globe, it’s interesting to look at these and compare them from the perspective of incident response and the “What could we have done to prevent this from happening” question.

Image Credit – PinClipart

Let’s analyze that very question – in the aftermath of the majority of cases, the “What could we have done to prevent this from happening” question invariably leads into the next question of “What measures can we put in place to prevent this from happening in the future”.

The problem with the 2 questions is that they are reactive and come about only because the incident has happened. And it seems that in both incidents, the required security systems were in place.

Or were they?

A brief analysis of the attacks

  • Holiday Inn

If we take the Holiday Inn attack, the hackers (TeaPea) have said in a statement that:

"Our attack was originally planned to be a ransomware but the company's IT team kept isolating servers before we had a chance to deploy it, so we thought to have some funny [sic]. We did a wiper attack instead," one of the hackers said.

This is interesting because it suggests that the Holiday Inn IT team had a mechanism to isolate the servers in an attempt to contain the attack. The problem was that once the attackers were inside the systems and realised that the initial scope their attack was based on wasn’t going to work, their focus changed from cybercriminals trying to make a profit to terrorism, where they decided to just destroy as much data as they could.

Image Credit – Northern Ireland Cyber Security Centre

Essentially, the problem here is two-fold – firstly, you can have a Data Loss Prevention system in place, but it’s not going to report on or block “Delete” actions until it’s too late, or in some cases not at all.

Second, they managed to access the systems using a weak password. So (and I’m making assumptions here), while the necessary defences and intrusion-detection technologies may have been in place, that single crack in the foundations was all it took.

So how did they get in? The second part of their statement shown below explains it all:

TeaPea say they gained access to IHG's internal IT network by tricking an employee into downloading a malicious piece of software through a booby-trapped email attachment.

The criminals then say they accessed the most sensitive parts of IHG's computer system after finding login details for the company's internal password vault. The password was Qwerty1234.

Ouch ….. so the attack originated as a Social Engineering attack.

  • Uber

We know a lot more about the Uber hack and again this is a case of an attack that originated with Social Engineering. Here’s what we know at this point:

  1. The attack started with a social engineering campaign on Uber employees, which yielded access to a VPN, in turn granting access to Uber’s internal network *.corp.uber.com.
  2. Once on the network, the attacker found some PowerShell scripts, one of which contained hardcoded credentials for a domain admin account for Uber’s Privileged Access Management (PAM) solution.
  3. Using admin access, the attacker was able to log in and take over multiple services and internal tools used at Uber: AWS, GCP, Google Drive, Slack workspace, SentinelOne, HackerOne admin console, Uber’s internal employee dashboards, and a few code repositories.

Again, we’re going to work off the assumption (and we need to make this assumption as Uber had been targeted in both 2014 and 2016) that the necessary defences and intrusion detection was in place.

Once the attackers gained access, the big problem here is the one that’s highlighted above – hardcoded domain admin credentials. Once they had those, they could then move across the network doing whatever they pleased. And undetected as well, as it’s not unusual for a domain admin account to have broad access across the network. And it looks like Uber haven’t learned from their previous mistakes, because as Mackenzie Jackson of GitGuardian reported:

“There have been three reported breaches involving Uber in 2014, 2016, and now 2022. It appears that all three incidents critically involve hardcoded credentials (secrets) inside code and scripts”

So what can we learn?

What these attacks teach us is that we can put as much technology, intrusion and anomaly detection into our ecosystem as we like, but the human element is always going to be the one that fails us. Because as humans, we are fallible. It’s not a stick to beat us with (and like most, I do have a lot of sympathy for those users in Uber, Holiday Inn and all of the other companies who have been victims of attacks that began with Social Engineering).

Do we need constant training and CyberSecurity programmes in our organisations to ensure that our users are aware of these sorts of attacks? Well, they do now at Uber and Holiday Inn but as I said at the start of the article, this will be a reactive measure for these companies.

The thing is though, most of these programmes are put in as “one-offs” in response to an audit where a checkbox is required to say that such user training has been put in place. And once the box has been checked, they’re forgotten about until the next audit is needed.

We can also say that the privileged account management processes failed in both companies (weak passwords in one, hardcoded credentials in another).

Conclusion

Multi-Factor Authentication. Conditional Access. Microsoft Defender. Anomaly Detection. EDR and XDR. Information Protection. SOC. SIEM. Privileged Identity Management. Strong Password Policies.

We can tech the absolute sh*t out of our systems and processes, but don’t forget to train and protect the humans in the chain. Because ultimately when they break, the whole system breaks down.

And the Threat Actors out there know this all too well. They know the systems are there, but they need a human to get them past those walls. MFA and Conditional Access can only save us for so long.