100 Days of Cloud – Day 86: Introduction to Kubernetes

It's Day 86 of my 100 Days of Cloud journey, and in today's post I'm going to give an introduction to Kubernetes.

We introduced Containers on Day 81 and gave an overview of how they work and how they differ in architecture when compared to traditional Bare Metal Physical or Virtual Infrastructure. A container is a lightweight environment that can be used to build and securely run applications and their dependencies. We need container management tools such as Docker to run commands and manage our containers.

Image Credit – Jenny Fong/Docker

Containers Recap

We saw how easy it is to deploy and manage containers during the series where I built a monitoring system using a telegraf agent to pull data into an InfluxDB docker container, and then used a Grafana Container to display metrics from the time series database.

So let's get back to that for a minute and understand a few points about that system:

  • The Docker Host was an Ubuntu Server VM, so we can assume that it ran in a highly available environment – either an on-premises Virtual Cluster such as Hyper-V or VMware or on a Public Cloud VM such as an Azure Virtual Machine or an Amazon EC2 Instance.
  • It took data in from a single datasource, which was brought into a single time series database, which then was presented on a single dashboard.
  • So altogether we had 1 host VM and 2 containers. Because the containers and datasource were static, there was no need for scaling or complex management tasks. The containers were run with persistent storage configured, the underlying apps were configured and after that the system just happily ran.

So in effect, that was a static system that required very little or no management after creation. But we also had no means of scaling it if required.

What if we wanted to build something more complex, like an application with multiple layers, where there is a requirement to scale out apps, responding to increased demand by deploying more container instances and scaling back when demand decreases?

This is where container orchestration technologies are useful because they can handle this for you. A container orchestrator is a system that automatically deploys and manages containerized apps. It can dynamically respond to changes in the environment to increase or decrease the deployed instances of the managed app. Or, it can ensure all deployed container instances get updated if a new version of a service is released.

And this is where Kubernetes comes in!

Kubernetes Overview

Kubernetes is an open-source platform created by Google for managing and orchestrating containerized workloads. Kubernetes is also known as “K8s”, and can run any Linux container across private, public, and hybrid cloud environments. Kubernetes allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.

The main benefits of using Kubernetes include self-healing of containers, automatic scaling, automated rollouts and rollbacks, service discovery, and load balancing.

It's important to note though that all of these tasks require configuration and a good understanding of the underlying technologies. You need to understand concepts such as virtual networks, load balancers, and reverse proxies to configure Kubernetes networking.

Kubernetes Components

Image Credit – Microsoft

A Kubernetes cluster consists of:

  • A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
  • A master node, or control plane, which manages the worker nodes and the Pods in the cluster.

Let's take a look at the components contained in each of these.

Control Plane or Master Node

Image Credit – Microsoft

The following services make up the control plane for a Kubernetes cluster (a quick way to see them on a running cluster is shown after the list):

  • API server – the front end to the control plane in your Kubernetes cluster. All the communication between the components in Kubernetes is done through this API.
  • Backing store – used by Kubernetes to save the complete configuration of a Kubernetes cluster. A key-value store called etcd stores the current state and the desired state of all objects within your cluster.
  • Scheduler – responsible for the assignment of workloads across all nodes. The scheduler monitors the cluster for newly created containers, and assigns them to nodes.
  • Controller manager – tracks the state of objects in the cluster. There are controllers to monitor nodes, containers, and endpoints.
  • Cloud controller manager – integrates with the underlying cloud technologies in your cluster when the cluster is running in a cloud environment. These services can be load balancers, queues, and storage.
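
If you have kubectl access to a running cluster, a quick way to see these control plane services in action is to list the system pods. This is just a hedged sketch: on a managed service such as AKS the control plane is hosted for you, so you will mostly see node-level components rather than the API server or etcd.

# List the system pods that host the control plane and cluster add-ons
kubectl get pods --namespace kube-system

# Show the API server endpoint and core services for the current cluster
kubectl cluster-info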

Worker Machines or Nodes

Image Credit – Microsoft

The following services run on each Kubernetes node (a quick way to inspect them is shown after the list):

  • Kubelet – The kubelet is the agent that runs on each node in the cluster, and monitors work requests from the API server. It makes sure that the containers scheduled on its node are running as expected.
  • Kube-proxy – The kube-proxy component is responsible for local cluster networking, and runs on each node. It maintains the network rules on the node and handles routing of traffic to Services and Pods.
  • Container runtime – the underlying software that runs containers on a Kubernetes cluster. The runtime is responsible for fetching, starting, and stopping container images.
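
Again as a quick sketch, kubectl can show you the node-level picture: the kubelet version and container runtime for each node, plus the detail for a single node (the node name is a placeholder you would replace with one of your own).

# List the nodes along with kubelet version, OS image and container runtime
kubectl get nodes -o wide

# Drill into a single node to see its capacity, conditions and the pods running on it
kubectl describe node <node-name>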

Pods

Image Credit – Microsoft

Unlike in a Docker environment, you can’t run containers directly on Kubernetes. You package the container into a Kubernetes object called a pod, which is effectively a container with all of the management overhead stripped away and passed back to the Kubernetes Cluster.

A pod can contain multiple containers that make up part of, or all of, your application; however, in general a pod will never contain multiple instances of the same application. So, for example, if you are running a website that requires a database back-end, both of those containers would be packaged into a pod.

A pod also includes information about the shared storage and network configuration, and a YAML-coded template which defines how to run the containers in the pod.
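
As a minimal, hypothetical illustration (this isn't from the monitoring system above, and the pod and image names are just placeholders), a single-container pod template looks like this and can be applied straight from the command line:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # placeholder pod name
spec:
  containers:
  - name: web
    image: nginx:1.21        # any container image you want to run
    ports:
    - containerPort: 80
EOF

# Confirm the pod has been scheduled and is running
kubectl get pods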

Managing your Kubernetes environment

You have a number of options for managing your Kubernetes environment:

  • kubectl – You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. kubectl can be installed on Linux, macOS and Windows platforms.
  • kind – runs local Kubernetes clusters using Docker containers as the cluster nodes; it was primarily designed for testing Kubernetes itself, but works well for local development too.
  • minikube – similar to kind in that it allows you to run a Kubernetes cluster locally on your device.
  • kubeadm – used to bootstrap a minimum viable, production-grade Kubernetes cluster on infrastructure that you manage yourself.

kubectl is by far the most used in enterprise Kubernetes environments, and you can find more details in the documentation here.
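
As a rough sketch of how these tools fit together (assuming minikube or kind is already installed on your machine), the usual flow is to create a small local cluster and then point kubectl at it:

# Option 1: start a single-node local cluster with minikube
minikube start

# Option 2: create a local cluster with kind, which runs the nodes as Docker containers
kind create cluster --name demo

# Either way, kubectl picks up the new context and can talk to the cluster
kubectl get nodes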

Important Considerations

While Kubernetes provides an orchestration platform that means you can run your clusters and scale as required, there are certain things you need to be aware of that it does not do for you, such as:

  • Deployment, scaling, load balancing, logging, and monitoring are all optional. You need to configure these and fit them into your specific solution requirements.
  • There is no limit to the types of apps that can run – if it can run in a container, it can run on Kubernetes.
  • Kubernetes doesn’t provide middleware, data-processing frameworks, databases, caches, or cluster storage systems.
  • A container runtime such as Docker is required for managing containers.
  • You need to manage the underlying environment that Kubernetes runs on (memory, networking, storage etc), and also manage upgrades to the Kubernetes platform itself.

Azure Kubernetes Service

All of the above considerations, and indeed all of the sections we've covered in this post, require detailed knowledge of both Kubernetes and the underlying dependencies. This overhead is removed in part by cloud services such as Azure Kubernetes Service (AKS), which reduces these challenges by providing a hosted Kubernetes environment.

As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes.

You can create an AKS cluster using:

  • The Azure CLI
  • The Azure portal
  • Azure PowerShell
  • Template-driven deployment options, like Azure Resource Manager templates, Bicep and Terraform.

When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
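
As a hedged Azure CLI example (the resource group and cluster names below are placeholders, and the exact sizing and networking flags will depend on your requirements), a basic AKS cluster can be created and connected to like this:

# Create a resource group and a small two-node AKS cluster
az group create --name myResourceGroup --location westeurope
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig so kubectl can manage it
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes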

Conclusion

And that's a description of Kubernetes, how it works, why it's useful and the components that are contained within it. In the next post, we're going to put all that theory into practice and set up a local Kubernetes cluster using minikube, and also look at deploying a cluster onto Azure Kubernetes Service.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 85: Security for Azure Containers

It's Day 85 of my 100 Days of Cloud journey, and in today's post I'm looking at the options for Container Security in Azure.

Image Credit: Docker Saigon Github

We looked at an overview of Containers on Day 81, where we saw how they work like virtual machines in that they utilize the underlying resources offered by the Container Host, but instead of packaging your code with an Operating System, each container only contains the code and dependencies needed to run the application and runs as a process inside the OS Kernel. This means that containers are smaller and more portable, and much faster to deploy and run.

We need to secure Containers in the same way as we would any other services running in the Public Cloud. Let's take a look at the different options that are available to us for securing Containers.

Use a Private registry

Containers are built from images that are stored in either public repositories such as Docker Hub, a private registry such as Docker Trusted Registry, which can be installed on-premises or in a virtual private cloud, or a cloud-based private registry such as Azure Container Registry.

Like all software that is publicly available on the internet, a publicly available container image does not guarantee security. Container images consist of multiple software layers, and each software layer might have vulnerabilities.

To help reduce the threat of attacks, you should store and retrieve images from a private registry, such as Azure Container Registry or Docker Trusted Registry. In addition to providing a managed private registry, Azure Container Registry supports service principal-based authentication through Azure Active Directory for basic authentication flows. This authentication includes role-based access for read-only (pull), write (push), and other permissions.
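
As a quick sketch of standing up a private registry in Azure (the registry name is a placeholder and must be globally unique), the Azure CLI commands look like this:

# Create an Azure Container Registry in an existing resource group
az acr create --resource-group myResourceGroup --name myprivateregistry --sku Basic

# Log in to the registry so Docker can push and pull images from it
az acr login --name myprivateregistry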

Ensure that only approved images are used in your environment

Allow only approved container images. Have tools and processes in place to monitor for and prevent the use of unapproved container images. One option is to control the flow of container images into your development environment. For example, you might only allow a single approved Linux distribution as a base image in order to minimize the potential attack surface.

Another option is to utilize Azure Container Registry support for Docker’s content trust model, which allows image publishers to sign images that are pushed to a registry, and image consumers to pull only signed images.
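
Content trust is switched on per shell session with an environment variable. A minimal sketch of signing an image as you push it (the registry and image names are placeholders, and the target registry must support content trust) looks like this:

# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Pushing a tagged image now signs it; pulls with content trust enabled only succeed for signed tags
docker push myprivateregistry.azurecr.io/myapp:1.0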

Monitoring and Scanning Images

Use solutions that have the ability to scan container images in a private registry and identify potential vulnerabilities. Azure Container Registry optionally integrates with Microsoft Defender for Cloud to automatically scan all Linux images pushed to a registry to detect image vulnerabilities, classify them, and provide remediation guidance.

Credentials

Credential management is one of the most basic types of security. Because containers can spread across several clusters and Azure regions, you need to ensure that the credentials required for logins or API access, such as passwords or tokens, are stored and managed securely.

Use tools such as TLS encryption for secrets data in transit, least-privilege Azure role-based access control (Azure RBAC), and Azure Key Vault to securely store encryption keys and secrets (such as certificates, connection strings, and passwords) for containerized applications.
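
For example, rather than baking a connection string into a container image, you might keep it in Key Vault and retrieve it at deployment time. A hedged Azure CLI sketch with placeholder names:

# Create a Key Vault and store a connection string as a secret
az keyvault create --resource-group myResourceGroup --name mycontainers-kv --location westeurope
az keyvault secret set --vault-name mycontainers-kv --name DbConnectionString --value "<connection-string>"

# Retrieve the secret at deployment time instead of storing it in the image
az keyvault secret show --vault-name mycontainers-kv --name DbConnectionString --query value -o tsv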

Removing unneeded privileges from Containers

You can also minimize the potential attack surface by removing any unused or unnecessary processes or privileges from the container runtime. Privileged containers run as root. If a malicious user or workload escapes from a privileged container, the container will then run as root on that system.

Enable Audit Logging for all Container administrative user access

Use native Azure Solutions to maintain an accurate audit trail of administrative access to your container ecosystem. These logs might be necessary for auditing purposes and will be useful as forensic evidence after any security incident. Azure solutions include:

  • Integration of Azure Kubernetes Service with Microsoft Defender for Cloud to monitor the security configuration of the cluster environment and generate security recommendations
  • Azure Container Monitoring solution
  • Resource logs for Azure Container Instances and Azure Container Registry

Conclusion

So that's a brief overview of how we can secure containers running in Azure and ensure that we are only using approved images that have been scanned for vulnerabilities.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 82: Options for Managing Containers in Azure

It's Day 82 of my 100 Days of Cloud journey, and in today's post I'm going to look at options for managing Containers in Azure.

In the last post, we looked at the comparison between Bare Metal or Physical Servers, Virtual Servers and Containers and the pros and cons of each.

We also introduced Docker, which is the best known method of managing containers, using the Docker Engine and built-in Docker CLI for command management.

The one thing we didn’t show was how to install Docker or use any of the commands to manage our containers. This is because I’ve previously blogged about this and you can find all of the details as part of my series about Monitoring with Grafana and InfluxDB using Docker Containers. Part 1 shows how you can create your Docker Host running on an Ubuntu Server VM (this could also run on a Bare Metal Physical Server), and Part 2 shows the setup and configuration of Docker Containers that have been pulled from Docker Hub. So head over there and check that out, but don’t forget to come back here!

Docker Context

By default when running any Docker commands from the CLI, Docker automatically assumes that you wish to use the local Docker Host for storing and running your containers. However, you can manage multiple Docker or Kubernetes hosts or nodes by specifying contexts. A single Docker CLI can have multiple contexts. Each context contains all of the endpoint and security information required to manage a different cluster or node. The docker context command makes it easy to configure these contexts and switch between them.

In short, this means that you can manage container instances that are installed on multiple hosts and/or multiple cloud providers from a single Docker CLI.
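
A few context commands give a feel for how this works in practice (the context names here are just examples; myacicontext is the one we create later in this post):

# List the contexts the CLI knows about; the asterisk marks the active one
docker context ls

# Switch between the local Docker host and a cloud context
docker context use default
docker context use myacicontext

# Or run a one-off command against a specific context without switching
docker --context myacicontext ps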

Let's take a look at the different options for managing containers in Azure.

The Docker CLI Method

In order to use containers in Azure using Docker, we first need to log on to Azure using the docker login azure command, which will prompt us for Azure credentials. Once entered, this will return “login succeeded”:

We then need to create a context by running the docker context create aci command. This will associate Docker with an Azure subscription and resource group that you can use to create and manage container instances. So we would run docker context create aci myacicontext to create a context called myacicontext.

This will select your Azure subscription ID, then prompt you to select an existing resource group or create a new resource group. If you choose a new resource group, it's created with a system-generated name. Like all Azure resources, Azure container instances must be deployed into a resource group.
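
Putting those first steps together, the commands look like this (myacicontext is just the example name used in this post):

# Authenticate Docker against Azure (opens a browser sign-in)
docker login azure

# Create an ACI context tied to a subscription and resource group
docker context create aci myacicontext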

Once that's completed, we then run docker context use myacicontext – this ensures that any subsequent commands will run in this context. We can now use docker run to deploy containers into our Azure resource group and manage these using the Azure CLI. So let's run the following command to deploy a quickstart container running Node.js that will give us a static website:

docker run -p 80:80 mcr.microsoft.com/azuredocs/aci-helloworld

We can now run docker ps to see the running container and get the Public IP that we can use to browse to it:

And if we log onto the Portal, we can see our running container:

So as we’ve always done, lets remember to remove the container by running docker stop sweet-chatterjee, and then docker rm sweet-chatterjee. These commands stops and deletes the Azure Container Instance:

Finally, run docker ps to ensure the container has stopped and is no longer running.

The Azure Portal Method

There are multiple ways to create and manage containers natively in Azure. We’ll look at the portal method in this post, and reference the remaining options at the end of the page.

To create the container, log on to the Portal and select Container Instances from the Marketplace:

Once we select create, we are brought into the now familiar screen for creating resources in Azure:

One important thing to note on this screen is the “Image Source” option – we can select container images from either:

  • The quickstarts that are available in Azure.
  • Images stored in your Azure Container Registry.
  • Other registry – this can be Docker or other public or private container registry.

On the “Networking” screen, we need to specify a public DNS name for our container, and also the ports we wish to expose across the Public Internet

And once that's done, we click "Review and Create" to deploy our container:

Once that's done, we can see the FQDN or Public IP that we can use to browse to the container:

As always, make sure to stop and delete the container instance once finished if you are running these in a test environment.

There are also a number of other options in Azure for creating and managing containers, including the Azure CLI, Azure PowerShell, and template-based deployments such as ARM templates.
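
For example, the Azure CLI equivalent of the portal deployment above looks roughly like this (the resource group, container name and DNS label are placeholders):

# Deploy the quickstart image as an Azure Container Instance with a public DNS name
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label myaci-demo --ports 80

# Check the FQDN and provisioning state, then clean up when finished
az container show --resource-group myResourceGroup --name mycontainer --query "{fqdn:ipAddress.fqdn,state:provisioningState}" -o table
az container delete --resource-group myResourceGroup --name mycontainer --yes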

Conclusion

So that's a look at how we can create and manage Azure Container Instances using both the Docker CLI and the wide range of options available in Azure.

Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. You can find all of the documentation on Azure Container Instances here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 81: Introduction to Containers

It's Day 81 of my 100 Days of Cloud journey, and in today's post I'm going to attempt to give an introduction to containers.

Containers. But not the sort we’re going to talk about…..

I'm building up to look at Kubernetes in later posts, but as the saying goes, "we need to walk before we can run", so it's important before we dive into container orchestration that we understand the fundamentals of containers.

Containers v Virtualization

Let's start with the comparison that we all make and compare the differences between containers and virtualization. Before that though, let's reverse even further into the mists of time……

  • Bare Metal or Physical Servers
Image Credit – Rick Vanover/Veeam Software

Back in the "good old days" (were they really that good?), you needed a Physical Server to run each one of your applications (or if you were really brave, you ran multiple applications on a single server). These were normally large noisy beasts that took up half a rack in your datacenter. You had a single operating system per machine, and any recovery in general took time as the system needed to be rebuilt in full up to the application layer before any data recovery was performed.

  • Virtualization
Image Credit – Rick Vanover/Veeam Software

As processing power and capacity increased, applications running on physical servers were unable to utilise the increased resources available, which left a lot of resources unused. At this point, Virtualization enabled us to install a hypervisor which ran on the physical servers. This allowed us to create Virtual Machines that ran alongside each other on the physical hardware.

Each VM can run its own unique guest operating system, and different VMs running on the same hypervisor can run different operating systems and versions. The hypervisor assigns resources to each VM from the underlying physical resource pool, based on either static values or dynamic values which scale up or down based on the resource demands.

The main benefits that virtualization gives are:

  1. The ability to consolidate applications onto a single system, which gave huge cost savings.
  2. Reduced datacenter footprint.
  3. Faster Server provisioning and improved backup and disaster recovery timelines.
  4. In the development lifecycle, where as opposed to another monster server being purchased and configured, a VM could be quickly spun up which mirrored the Production environment and could be used for the different stages of the development process (Dev/QA/Testing etc).

There are drawbacks though, and the main ones are:

  1. Each VM has separate OS, Memory and CPU resources assigned which adds to resource overhead and storage footprint. So all of that spare capacity we talked about above gets used very quickly.
  2. Although we talked about the advantage of having separate environments for the development lifecycle, the portability of these applications between the different stages of the lifecycle is limited in most cases to the backup and restore method.
  • Containers
Image Credit – Jenny Fong/Docker

Finally, we get to the latest evolution of compute which is Containers. A container is a lightweight environment that can be used to build and securely run applications and their dependencies.

A container works like a virtual machine in that it utilizes the underlying resources offered by the Container Host, but instead of packaging your code with an Operating System, each container only contains the code and dependencies needed to run the application and runs as a process inside the OS Kernel. This means that containers are smaller and more portable, and much faster to deploy and run.

So how do I run Containers?

In On-Premise and test environments, Windows Containers ships on the majority of Windows Client and Server Operating Systems as a built-in feature that is available to use. However, for the majority of people who use containers, Docker is the platform of choice.

Docker is a containerization platform used to develop, ship, and run containers. It doesn’t use a hypervisor, and you can run Docker on your desktop or laptop if you’re developing and testing applications.

The desktop version of Docker supports Linux, Windows, and macOS. For production systems, Docker is available for server environments, including many variants of Linux and Microsoft Windows Server 2016 and above.

When you install Docker on either your Linux or Windows environment, this installs the Docker Engine which contains:

  • Docker client – command-line application named docker that provides us with a CLI to interact with a Docker server. The docker command uses the Docker REST API to send instructions to either a local or remote server and functions as the primary interface we use to manage our containers.
  • Docker server – The dockerd daemon responds to requests from the client via the Docker REST API and can interact with other daemons. The Docker server is also responsible for tracking the lifecycle of our containers.
  • Docker objects – there are several objects that you’ll create and configure to support your container deployments. These include networks, storage volumes, plugins, and other service objects. We’ll take a look at these in the next post when we demo the setup of Docker.
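
If you already have Docker installed, you can see the client/server split for yourself: docker version reports the CLI (client) and the dockerd daemon (server) separately, and docker info reports the daemon's wider state.

# Show the versions of the Docker client and the Docker daemon it is talking to
docker version

# Show broader daemon information: storage driver, running containers, plugins
docker info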

So where do I get the Containers from?

Docker provides the world's largest repository of container images, called Docker Hub. This is a public repository and contains ready-made containers from both official vendors (such as WordPress, MongoDB, MariaDB, InfluxDB, Grafana, Jenkins, Tomcat, Apache Server) and also bespoke containers that have been contributed by developers all over the world.

So there is effectively a Docker container image available for almost every scenario. And if you need to create one for your own scenario, you just pull a base image from Docker Hub, make your changes, and push it back up to Docker Hub, marking it as public and available for use.
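
At the command line that pull/customise/push loop looks roughly like this (the Docker Hub account and image names are placeholders, and in practice you would normally bake your changes in with a Dockerfile rather than editing a running container):

# Pull a base image from Docker Hub
docker pull influxdb

# Build your customised image from a Dockerfile in the current directory
docker build -t mydockerid/influxdb-custom:1.0 .

# Log in and push the customised image back up to Docker Hub
docker login
docker push mydockerid/influxdb-custom:1.0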

But what if I don’t want to store my container images in a public registry?

That's where the Private Container Registry option comes in. Your organization or team can have access to a private registry where you can store the images that are in use in your environment. This is particularly useful when you want to have version control and governance over the images you use in your environment.

For example, if you want to run InfluxDB and run the command to pull the InfluxDB container from the Docker Hub, by default you will get the latest stable version (which is 2.2). However, your application may need to use or only support version 1.8, so you need to specify that when pulling from the registry.

Because images are pulled from the Docker Hub by default, you need to specify the location of your Private Container Registry (in https notation) when pulling images.
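
The difference shows up in the image reference you pass to docker pull: a bare name defaults to Docker Hub and the latest tag, a tag pins the version, and a registry hostname redirects the pull to your private registry (the registry name below is a placeholder):

# Pulls influxdb:latest from Docker Hub (2.2 at the time of writing)
docker pull influxdb

# Pin the version your application actually supports
docker pull influxdb:1.8

# Pull the same image from a private registry instead of Docker Hub
docker pull myprivateregistry.azurecr.io/influxdb:1.8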

There are a number of different options for where to store your Private Container Registry:

  • Docker Hub allows companies to host their own private repositories directly
  • Azure Container Registry
  • Amazon Elastic Container Registry
  • Google Container Registry
  • IBM Container Registry

Conclusion

So that's a brief overview of containers and how Docker is the most widely used platform for managing them. In the next post, we'll look at setting up a Docker Host machine and creating an Azure Container Registry to house our private Docker Images.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 75: Create your Microsoft 365 tenant and configure Azure AD Connect

It's Day 75 of my 100 Days of Cloud journey, and today I'm looking at how Azure AD Connect is configured and how it synchronizes your on-premise identities to the Azure AD Tenant for use in Microsoft 365.

But first up, let's take a look at how we can create our Microsoft 365 tenancy and get it configured for use with our domains so it's ready for use.

Create your Microsoft 365 Tenant

To create your tenant, you need to browse to the Office 365 E3 product page and click on the “Free Trial” option. E3 is the default trial option as it gives you the best experience of all the tools available for 30 days:

Clicking on "Free Trial" brings you into the registration screen. You need to enter the first email address you want to use with the tenant. This won't configure anything, it's just checking that the domain isn't already configured for Microsoft 365.

We can also see on the right of the screen what's included with the E3 license, plus the benefits. We are allowed up to 25 users for the trial, which is a good number for testing.

Click on “Create New Account”:

This brings you into the “Tell us about yourself” screen where you need to answer some questions about your organisation:

When you click next, you are asked to verify your identity via SMS or Call:

Once you get verified, you are then prompted to add your domain name. Add the domain name. Try to use the same domain name as the primary email domain that you want to use with the tenant.

Click “Next” and this will create your account. Once that completes, the screen below will appear and you can click on “Manage your subscription” to log in

And now you are logged into the Microsoft 365 admin center and can manage your subscription! This is always available to log on to at https://admin.microsoft.com/ using the credentials you created above.

NOTE: The tenant I have set up above is a trial and I'm only going to use it for testing and for the purposes of the blog. So at some point in the next 30 days, I'm going to delete it, as I've done with Azure resources in previous blog posts, and I would advise you to do so as well (unless you really want to pay for your own Microsoft 365 tenant). The article here shows how to do this from within the Microsoft 365 Services and subscriptions page.

Add your Domain to your tenant

So let's fast forward 30 days – the trial has ended, your users are happy and you've decided as a business to migrate your existing workloads to Microsoft 365. The next step is to add your production domain and verify it.

So in the Microsoft 365 admin center, go to the "Settings" menu and select "Domains". You have the option here to buy a domain, which will redirect you to a 3rd party provider, and you can only use this option once your trial period has ended. This is useful if you don't already own a domain that you want to use with the tenant.

However, we’re going to add our existing domain, so click on “Add Domain”

This brings us into the “Add Domain” screen. Enter the domain name you want to use and click on the “Use this Domain” button at the bottom of the screen:

The next screen provides a list of options for verifying the domain. Now, because the blog is on WordPress, it's giving me the option to sign in to WordPress to verify. Unless your website is hosted on WordPress, you're not going to see this option, but you will see the 3 options below that.

The most common is the option to “Add a TXT record to the domain’s DNS records”, so we’ll select that and click “Continue”:

This detects who the hosting provider is, and provides you with the TXT record you need to add to your public DNS Records, so I’ll do that in the background and click “Verify” (this may take up to 30 minutes after you add the TXT to work):

Once that's verified, we get a screen asking us to connect our domain and set up DNS records. Again, I'm seeing the option to let Microsoft add the records for me automatically to WordPress (and this may also work depending on who your hosting provider is), however I'm going to choose the second option to add my own DNS records so we can take a look at what's provided:

The next screen gives me the MX Records I need to get set up with email initially, and there are also options for Skype for Business and Intune MDM at the bottom of the screen if required.

I wanted to show you this page to ensure you understand the process and how it works. However at this stage, I’m going to go back to the previous screen and click “Skip and do this later”. The reason is that this will impact mailflow, and our configuration doesn’t have a Hybrid configuration in place yet to support the mailflow.

Once we finish, we get a screen to say the setup is complete, and we can see our domain listed in the admin center.

Azure AD Connect Installation

Once your domain is registered in the portal, you should now be in a position to synchronise your user accounts, so it's time to install and configure Azure AD Connect.

To do this, we go to the “Users” menu and select “Active Users”. Once that screen appears, we click on the “ellipses” and select “Directory synchronization”:

This brings us to a screen with an external link to download the Azure AD Connect tool:

At the time of writing this post, the current Azure AD Connect version is 2.1.1.0 and is only supported on Windows Server 2016 and Windows Server 2019. There are a number of other prerequisites that need to be satisfied before installing Azure AD Connect:

  • Azure AD Tenant: this is created for you when you sign up for the Microsoft 365 Trial.
  • Domain needs to be verified: we’ve done this above.
  • The on-premise Active Directory forest and domain functional levels must be Windows Server 2003 or later. The domain controllers can run any level as long as this condition is met. This also means that you don’t need to install Azure AD Connect on a Domain Controller.
  • The Domain Controller used by Azure AD during the setup must be writable and not a read-only domain controller (RODC). Even though you may have other writable domain controllers in your environment, Azure AD doesn’t support write redirects.
  • Enabling the Active Directory recycle bin is recommended.
  • The PowerShell execution policy needs to be set to "RemoteSigned" on the server that Azure AD Connect is installed on.
  • Installing on Windows Server Core is not supported.
  • Finally as discussed in the last post, this is a good time to ensure the UPN and proxyAddress attributes are set correctly on your on-premise environment.

So now you can go ahead and install Azure AD Connect. As per the previous post, there are different authentication methods to choose from and these are available as install options in the Azure AD Connect installation wizard:

  • Password Hash Synchronization (PHS) – this can be run as express installation and assumes the following:
    • You have a single Active Directory forest on-premises.
    • You have an enterprise administrator account you can use for the installation.
    • You have less than 100,000 objects in your on-premises Active Directory.

With an Express installation, you get:

  • Password hash synchronization from on-premises to Azure AD for single sign-on.
  • A configuration that synchronizes users, groups, contacts, and Windows 10 computers.
  • Synchronization of all eligible objects in all domains and all OUs. At the end of the installation, you can run the installation wizard again and choose to filter domains or OU’s.
  • Automatic upgrade is enabled to make sure you always use the latest available version.

The other option is Pass-through authentication (PTA). If you have already run an express installation, all you need to do is select the “Change user sign-in” task from the Azure AD Connect application, select next and pick PTA as the sign-in method. Once successful, this will install the PTA agent on the same server as Azure AD Connect is installed on.

What you then need to do is ensure that Pass-through authentication is enabled on your tenant in the Azure AD Connect blade in your Azure AD tenant.

NOTE: if you turn this feature on, it will affect all users in your managed domain, and not just for signing on to Microsoft 365, but also other services such as Azure or Dynamics that you may be using the tenant for. So you need to be very aware of the effects of making this change.

Your on-premise users and computers will now synchronize to your Microsoft 365 tenant.

Conclusion

So that's the quick tour of setting up your tenant, adding domains and confirming DNS settings, and installing and configuring Azure AD Connect.

In the next post, we’ll look at setting up the Hybrid Configuration to enable your on-premise Exchange environment to co-exist with your Microsoft 365 tenant during the migration process. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 74: Preparing your Active Directory to Sync with Azure AD Connect

It's Day 74 of my 100 Days of Cloud journey, and today I'm jumping back into the Microsoft 365 ecosystem and taking a look at the steps needed to prepare your on-premise Active Directory environment.

We touched on Microsoft 365 briefly on Day 72 when we looked at whether migrating your on-premise File Server to Azure Files or SharePoint was the best option for your business. We also commented that a migration to Microsoft 365 hosted email was traditionally the first step that the majority of companies have taken or will take in their journey to Public Cloud environments.

I've decided to step back and look at Microsoft 365 as a whole and the services offered in the next few posts, where we can see how each service can provide a benefit to your business. But before we do that, let's take a look at the preparation needed to decide on which identity model to use, and the preparation needed on your on-premise Active Directory environment if using the Hybrid identity model.

Authentication Methods

But before that happens or you decide on any migration strategy, you need to decide how your users will authenticate to those Cloud Services. For this we have 2 options:

  1. Cloud-only identity: A cloud-only identity uses user accounts that exist only in Azure AD. Cloud-only identity is typically used by small organizations that do not have on-premises servers or do not use AD DS to manage local identities. All management of these identities is performed using the Microsoft 365 admin center and Windows PowerShell with the Microsoft Azure Active Directory Module.
  2. Hybrid identity: Hybrid identity uses accounts that originate in an on-premises AD DS and have a copy in the Azure AD tenant of a Microsoft 365 subscription. Most changes, with the exception of specific account attributes, only flow one way. Changes that you make to AD DS user accounts are synchronized to their copy in Azure AD. Azure AD Connect runs on an on-premises server, provides ongoing account synchronization, checks for changes in the AD DS, and forwards those changes to Azure AD

So that all makes sense! Now lets introduce another layer of complexity and choice. If you choose a Cloud-only identity model, things are straightforward. However, if you choose a Hybrid model, you have 2 authentication options:

  1. Managed Authentication: this is where Azure AD handles the authentication process. And nested within this, you have 2 options:
    • Password hash synchronization (PHS): this is where Azure AD performs the authentication using a hash of the password that has been synchronized from your on-premise Active Directory.
Image Credit: Microsoft
  • Pass-through authentication (PTA): this is where Azure AD redirects the authentication request back to your on-premise Active Directory.
Image Credit: Microsoft

2. Federated authentication: this is primarily for large enterprise organizations with more complex authentication requirements. AD DS identities are synchronized with Microsoft 365 and user accounts are managed on-premises. With federated authentication, users have the same password on-premises and in the cloud, and they do not have to sign in again to use Microsoft 365.

Managing and Protecting Privileged Accounts and Administrator Roles

The general rule of thumb is that we should never assign administrator roles to everyday user accounts, especially accounts that have been synchronized from on-premise.

You should assign dedicated Cloud-only identities for administrator roles, and protect these accounts with Multi-Factor Authentication and/or Azure AD Privileged Identity Management for on-demand, just-in-time assignment of administrator roles. You should also consider using a privileged access workstation (PAW). A PAW is a dedicated computer that is only used for sensitive configuration tasks, such as Microsoft 365 configuration that requires a privileged account.

Managing and Protecting User Accounts

While administrator accounts are the first ones to get protected, sometimes we forget about protecting our user accounts. While we have recommended MFA for administrator accounts, we need to be enabling and enforcing MFA for all users.

We can also enable other advanced features (depending on our license levels):

  1. Security Defaults: this feature requires all of your users to use MFA with the Microsoft Authenticator app. Users have 14 days to register for MFA with the Microsoft Authenticator app from their smart phones, which begins from the first time they sign in after security defaults has been enabled. After 14 days have passed, the user won’t be able to sign in until MFA registration is completed.
  2. Azure AD Password Protection: detects and blocks known weak passwords and their variants and can also block additional weak terms that are specific to your organization. Default global banned password lists are automatically applied to all users in an Azure AD tenant. You can define additional entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.
  3. Conditional Access policies: a set of rules that specify the conditions under which sign-ins are evaluated and access is granted. We looked at this in detail on Day 57.

Keep the following in mind:

  • You cannot enable security defaults if you have any Conditional Access policies enabled.
  • You cannot enable any Conditional Access policies if you have security defaults enabled.

Active Directory Domain Services Preparation

Before you synchronize your AD DS to your Azure AD tenant, you need to clean up your AD DS. This is an important step, as if it's not performed correctly, it can lead to a significant negative impact on the deployment process. It might take days, or even weeks, to go through the cycle of directory synchronization, identifying errors, and re-synchronization.

While there are a number of attributes you need to prepare for synchronization, the most important ones are:

  1. userPrincipalName (UPN): this needs to be a valid and unique value for each user object, as the AD DS UPN matches the Azure AD UPN. This is what users will use to authenticate, and is required to be in the Internet-style sign-in format, for example “firstname.lastname@yourdomain.com”.
    • A note on this – Active Directory uses the sAMAccountName attribute to authenticate. It is recommended that, prior to syncing your identities, this should match the userPrincipalName to avoid confusion for both users and administrators, however this is not necessary.
    • Another note – if you are using multiple mail domains, you can add multiple UPN suffixes. The article here shows how to add these and also how to change the UPN for each or multiple users.
  2. mail: This is the user's primary email address, and needs to be unique for each user. This can only contain a single value.
  3. proxyAddress: This contains the user's email addresses; again, each entry needs to be unique for each user object. It cannot contain any spaces or invalid characters. This can have multiple entries if you have multiple mail domains in use.
    • A note here – The primary address will be the same as the mail attribute, and will be in the format "SMTP:name@domain1.com". Additional addresses will be in the format "smtp:name@domain2.com" (note the uppercase and lowercase).
  4. displayName: This is how your name will be displayed in the Global Address List, and is a combination of the givenName and surname attributes. It's not necessary for it to be unique, however it is recommended to avoid confusion.

For optimal use of the Global Address List, it's recommended that these attributes are populated and correct for each account:

  • givenName
  • surname
  • displayName
  • Job Title
  • Department
  • Office
  • Office Phone
  • Mobile Phone
  • Fax Number
  • Street Address
  • City
  • State or Province
  • Zip or Postal Code
  • Country or Region

Conclusion

In this post, we looked at the steps required to prepare for synchronization for choosing an identity model, administrator and user security, and finally user account and attribute preparation.

In the next post, we'll look at the steps to install Azure AD Connect to synchronize your identities. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 73: The Value of User Groups

It's Day 73 of my 100 Days of Cloud journey, and today it's a quick post about the importance of attending and being a member of Azure and Cloud User Groups.

User Groups are a great way to meet new people and network in the community, but also to learn new skills from guest speakers who are experts in their fields.

Over the last few weeks, I’ve attended some excellent User Group sessions with some awesome people in the Cloud Community, such as:

All of these User Groups and many more can be found on meetup.com, and you can also follow all of the speakers above on both Twitter (links above) or search for them on LinkedIn. Also, most of the sessions from these User Groups are available on their YouTube Channels a few days after the events.

So log on to meetup and search for a User Group or Community near you, or you can attend these awesome ones above while they are still hosted as online events!

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 72: Migrate On-Premise File Server to Azure Files or SharePoint?

It's Day 72 of my 100 Days of Cloud journey, and today's post attempts to answer a question that is now at the forefront of the majority of IT Departments across the world – we know how to migrate the rest of our Infrastructure and Applications to the Cloud, but what's the best solution for the File Server?

Traditional Cloud Migration Steps

The first step that most companies make into the Cloud is the Migration to Microsoft 365 from On-Premise Exchange, because the offer of hosted email is appealing due to how critical email communication is to businesses. However, although there are numerous services available in the Microsoft 365 stack (which I'll get into more detail on in a future post), most companies will only use Email and Teams following the migration.

Once Exchange is migrated, that leaves the rest of the infrastructure. We looked at Azure Migrate back on Day 18 and how it can assist with discovery, assessment and migration of on-premise workloads to Azure. Companies will make the decision to migrate their workloads from on-premise infrastructure to Azure IAAS or PAAS services based on the following factors:

  • Legacy or Unsupported Hardware that is expensive to replace.
  • Legacy or Unsupported Virtualization systems (older versions of VMware or Hyper-V).
  • Savings on Data Centre or Comms Room overheads such as power and cooling.
  • The ability to re-architect business applications at speed and scale without the need for additional hardware/software and complicated backup/recovery procedures in the event of failure.
  • Backup and Disaster Recovery costs to meet Compliance and Regulatory requirements.

Once that's done, the celebrations can begin. It's done! We've migrated to the Cloud! Party time! But wait, what's that sitting over in the rack in the corner? Covered in dust, humming and drawing power as if to mock you. You approach and see the lights flicker as the disks spin in protest at the read/write operations, as they struggle to perform the IOPS required by that bloody Accounts spreadsheet ….

Yes, the File Server. Except it's a long time since it was a simple file server. These days, File Servers encompass managing storage at an enterprise level with storage arrays, disk tiers and caching, redundancy and backup, not to mention the cost of the file server operating system upkeep and maintenance.

So we need to migrate the File Server as well, but what are our options?

SharePoint

SharePoint empowers your Departments and Project Teams with dynamic and productive team sites from which you can access and share files, data, news, and resources. Collaborate effortlessly and securely with team members inside and outside your organization, across PCs, Macs, and mobile devices.

All organizations with an Office 365 subscription will have 1TB of storage available for use in SharePoint. Any additional storage is based on the number of licensed users you have, and each user adds an additional 10GB of storage to that SharePoint storage pool. So for example, if you have 50 users, you would then have a total of 1.5TB of storage.

You also have the option to add on additional storage using Office 365 Extra File Storage, however this is limited to 25TB. This is only available as an option with the following plans:

  • Office 365 Enterprise E1
  • Office 365 Enterprise E2
  • Office 365 Enterprise E3
  • Office 365 Enterprise E4
  • Office 365 Enterprise E5
  • Office 365 A3 (faculty)
  • Office 365 A5 (faculty)
  • Office for the web with SharePoint Plan 1
  • Office for the web with SharePoint Plan 2
  • SharePoint Online Plan 1
  • SharePoint Online Plan 2
  • Microsoft 365 Business Basic
  • Microsoft 365 Business Standard
  • Microsoft 365 Business Premium
  • Microsoft 365 E3
  • Microsoft 365 E5
  • Microsoft 365 F1

If you move your files into SharePoint libraries, you can then use the OneDrive Sync Client to sync both the users’ individual files in OneDrive and also be used with SharePoint Online to sync libraries that the user requires frequent access to offline.

One important thing to remember – all licensed Office 365 users have 1TB of personal OneDrive storage available for use, but this storage does not contribute to the overall SharePoint storage pool. You can set sharing and storage limits on both OneDrive and SharePoint using the SharePoint Admin Center.

With Microsoft 365, you have a number of options to protect the data that you place into SharePoint Online and OneDrive for Business:

  • Restrict the ability to save, download, or print files on non-corporate owned devices.
  • Restrict the ability to offline sync files on non-corporate owned devices.
  • Control what users can do based on their geographic location or device class or platform.

We can also use additional features available in Azure AD Premium, Microsoft Intune, Office 365 ATP or Azure Information Protection to provide additional protections to the data stored in SharePoint.

You can find out more about SharePoint in the Microsoft documentation here.

Azure Files

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files file shares can be mounted concurrently by cloud or on-premises deployments.

  • SMB Azure file shares are accessible from Windows, Linux, and macOS clients.
  • NFS Azure Files shares are accessible from Linux or macOS clients.

Additionally, SMB Azure file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used. Azure Files is closer to traditional on-premise file shares in that you can use both Active Directory and Azure AD-based authentication to control access, and you can use Group Policy to map drives as you would have done with on-premise file shares.
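
As a hedged sketch of what this looks like in practice (the storage account, share name and key values are placeholders; on Windows you would typically map a drive letter instead), an SMB share can be created with the Azure CLI and mounted from a Linux client:

# Create a 100 GiB standard file share in an existing storage account
az storage share-rm create --resource-group myResourceGroup --storage-account mystorageacct --name myshare --quota 100

# Mount the share over SMB 3.0 from a Linux client using the storage account key
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare -o vers=3.0,username=mystorageacct,password=<storage-account-key>,serverino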

Azure Files is housed on Azure Storage and has 2 distinct billing options:

  • The provisioned model is only available for premium file shares, which are file shares deployed in the FileStorage storage account kind.
  • The pay-as-you-go model is only available for standard file shares, which are file shares deployed in the general purpose version 2 (GPv2) storage account kind.

Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-committing to storage utilization. When you purchase reserved capacity, your reservation must specify the following dimensions:

  • Capacity – can be for either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity reservation.
  • Term: Reservations can be purchased for either a one year or three year term.
  • Tier: The tier of Azure Files for the capacity reservation, which can be the premium, hot, or cool tier.
  • Location: The Azure region for the capacity reservation.
  • Redundancy: The storage redundancy for the capacity reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.

Finally, you have the option of Azure File Sync which is a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM.

You can find out more about Azure Files here, and Azure File Sync here.

Conclusion and Final Thoughts

We’ve seen both options that are available in migrating File Servers to the Microsoft Cloud ecosystem.

From the options we’ve seen and in my opinion, SharePoint is more suited to smaller businesses who are planning to or have already migrated to Microsoft 365, while Azure Files is more suited to larger enterprises with multiple sites or regions that have higher levels of storage requirements.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 71: Microsoft Sentinel

It's Day 71 of my 100 Days of Cloud journey, and today's post is all about Microsoft Sentinel. This is the new name for Azure Sentinel, following on from the rebranding of a number of Microsoft Azure services at Ignite 2021.

Image Credit: Microsoft

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution. It provides intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.

SIEM and SOAR

We briefly touched on SIEM and SOAR in the previous post on Microsoft Defender for Cloud. Before we go further, let's note the definitions of SIEM and SOAR according to Gartner:

  • Security information and event management (SIEM) technology supports threat detection, compliance and security incident management through the collection and analysis (both near real time and historical) of security events, as well as a wide variety of other event and contextual data sources. The core capabilities are a broad scope of log event collection and management, the ability to analyze log events and other data across disparate sources, and operational capabilities (such as incident management, dashboards and reporting).
  • SOAR refers to technologies that enable organizations to collect inputs monitored by the security operations team. For example, alerts from the SIEM system and other security technologies — where incident analysis and triage can be performed by leveraging a combination of human and machine power — help define, prioritize and drive standardized incident response activities. SOAR tools allow an organization to define incident analysis and response procedures in a digital workflow format.

Overview of Sentinel Functionality

Microsoft Sentinel gives a single view of your entire estate across multiple devices, users, applications and infrastructure across both on-premise and multiple cloud environments. The key features are:

  • Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
  • Detect previously undetected threats, and minimize false positives using Microsoft’s analytics and unparalleled threat intelligence.
  • Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cyber security work at Microsoft.
  • Respond to incidents rapidly with built-in orchestration and automation of common tasks.

Sentinel can ingest alerts from not just Microsoft solutions such as Defender, Office 365 and Azure AD, but from a multitude of 3rd-party and multi-cloud providers such as Akamai, Amazon, Barracuda, Cisco, Fortinet, Google, Qualys and Sophos (and that's just to name a few – you can find a full list here). These are what's known as Data Sources, and the data is ingested using the wide range of built-in connectors that are available:

Image Credit: Microsoft

Once your data sources are connected, the data is monitored using Sentinel integration with Azure Monitor Workbooks, which allows you to visualize your data:

Image Credit: Microsoft

Once the data and workbooks are in place, Sentinel uses analytics and machine learning rules to map your network behaviour and to combine multiple related alerts into incidents which you can view as a group to investigate and resolve possible threats. The benefit here is that Sentinel lowers the noise that is created by multiple alerts and reduces the number of alerts that you need to react to:

Image Credit: Microsoft

Sentinel's automation and orchestration playbooks are built on Azure Logic Apps, and there is a growing gallery of built-in playbooks to choose from. These are based on standard and repeatable events and, in the same way as standard Logic Apps, are triggered by a particular action or event:

Image Credit: Microsoft

Last but not least, Sentinel has investigation tools that go deep to find the root cause and scope of a potential security threat, and hunting tools based on the MITRE ATT&CK framework which enable you to hunt for threats across your organization's data sources before an alert is even triggered.

Do I need both Defender for Cloud and Sentinel?

My advice on this is yes – because they are 2 different products that integrate with and complement each other.

Sentinel has the ability to detect, investigate and remediate threats. In order for Sentinel to do this, it needs a stream of data from Defender for Cloud or other 3rd party solutions.

Conclusion

We’ve seen how powerful Microsoft Sentinel can be as a tool to protect your entire infrastructure across multiple providers and platforms. You can find more in-depth details on Microsoft Sentinel here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 70: Microsoft Defender for Cloud

It's Day 70 of my 100 Days of Cloud journey, and today's post is all about Azure Security Center! There's one problem though, it's not called that anymore ….

At Ignite 2021 Fall edition, Microsoft announced that the Azure Security Center and Azure Defender products were being rebranded and merged into Microsoft Defender for Cloud.

Overview

Defender for Cloud is a cloud-based tool for managing the security of your multi-vendor cloud and on-premises infrastructure. With Defender for Cloud, you can:

  • Assess: Understand your current security posture using Secure score which tells you your current security situation: the higher the score, the lower the identified risk level.
  • Secure: Harden all connected resources and services using either detailed remediation steps or an automated “Fix” button.
  • Defend: Detect and resolve threats to those resources and services, which can be sent as email alerts or streamed to SIEM (Security Information and Event Management), SOAR (Security Orchestration, Automation, and Response) or IT Service Management solutions as required.
Image Credit: Microsoft

Pillars

Microsoft Defender for Cloud’s features cover the two broad pillars of cloud security:

  • Cloud security posture management

CSPM provides visibility to help you understand your current security situation, and hardening guidance to help improve your security.

Central to this is Secure Score, which continuously assesses your subscriptions and resources for security issues. It then presents the findings into a single score and provides recommended actions for improvement.

The guidance in Secure Score is provided by the Azure Security Benchmark, and you can also add other standards such as CIS, NIST or custom organization-specific requirements.

  • Cloud workload protection

Defender for Cloud offers security alerts that are powered by Microsoft Threat Intelligence. It also includes a range of advanced, intelligent, protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions.

The Defender plans page of Microsoft Defender for Cloud offers the following plans for comprehensive defenses for the compute, data, and service layers of your environment:

  • Microsoft Defender for servers
  • Microsoft Defender for Storage
  • Microsoft Defender for SQL
  • Microsoft Defender for Containers
  • Microsoft Defender for App Service
  • Microsoft Defender for Key Vault
  • Microsoft Defender for Resource Manager
  • Microsoft Defender for DNS
  • Microsoft Defender for open-source relational databases
  • Microsoft Defender for Azure Cosmos DB (Preview)

Azure, Hybrid and Multi-Cloud Protection

Defender for Cloud is an Azure-native service, so many Azure services are monitored and protected without the need for agent deployment. If agent deployment is needed, Defender for Cloud can deploy Log Analytics agent to gather data. Azure-native protections include:

  • Azure PAAS: Detect threats targeting Azure services including Azure App Service, Azure SQL, Azure Storage Account, and more data services.
  • Azure Data Services: automatically classify your data in Azure SQL, and get assessments for potential vulnerabilities across Azure SQL and Storage services.
  • Networks: reduce access to virtual machine ports by using just-in-time VM access, hardening your network by preventing unnecessary access.

For hybrid environments and to protect your on-premise machines, these devices are registered with Azure Arc (which we touched on back on Day 44) and use Defender for Cloud’s advanced security features.

For other cloud providers such as AWS and GCP:

  • Defender for Cloud's CSPM features assess resources according to AWS or GCP's specific security requirements, and these are reflected in your secure score recommendations.
  • Microsoft Defender for servers brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint amongst other features.
  • Microsoft Defender for Containers brings threat detection and advanced defenses to your Amazon EKS and Google’s Kubernetes Engine (GKE) clusters.

We can see in the screenshot below how the Defender for Cloud overview page in the Azure Portal gives a full view of resources across Azure and multi-cloud subscriptions, including combined Secure score, Workload protections, Regulatory compliance, Firewall manager and Inventory.

Image Credit: Microsoft

Conclusion

You can find more in-depth details on how Microsoft Defender for Cloud can protect your Azure, Hybrid and Multi-Cloud Workloads here.

Hope you enjoyed this post, until next time!