100 Days of Cloud – Day 34: Infrastructure as Code with Terraform

It's Day 34 of 100 Days of Cloud, and in today's post I'm starting my learning journey into Infrastructure as Code (IaC).

Infrastructure as Code is one of the phrases we've heard a lot about in the last few years as the Public Cloud has exploded. In one of my previous posts on AWS, I gave a brief description of AWS CloudFormation, the built-in AWS tool, which I described as:

  • An Infrastructure as Code tool that uses JSON- or YAML-based documents called CloudFormation templates. CloudFormation supports many different AWS resources, from storage and databases to analytics, machine learning, and more.

I’ll go back to cover AWS CloudFormation at a later date when I get more in-depth into AWS. For today and the next few days, I’m heading back across into Azure to see how we can use HashiCorp Terraform to deploy and manage infrastructure in Azure.

In previous posts, we looked at the three different ways to deploy infrastructure in Azure.

Over the coming days, we'll look at deploying, changing and destroying infrastructure in Azure using Terraform.

Before we move on….

Now before we go any further and get into the weeds of Terraform and how it works, I want to allay some fears.

When people see the word "Code" in a service description, the automatic assumption is that you need to be a developer to understand and be competent in using this method of deploying infrastructure. As anyone who knows me (or who has read my bio) will know, I'm not a developer and don't have a development background.

And I don't need to be in order to use tools like Terraform and CloudFormation. There are loads of useful articles and training courses out there that walk you through using and understanding these tools. The best place to start is the official HashiCorp Learn site, which provides learning paths for all the major Cloud providers (AWS/Azure/GCP) and also for Docker, Oracle and Terraform Cloud. If you search for HashiCorp Ambassadors such as Michael Levan and Luke Orrellana, they have video content on YouTube, CloudAcademy and Cloudskills.io that walks you through the basics of Terraform.

Fundamentals of Terraform

Terraform configurations can be written in JSON, but the preferred format is HCL, which stands for HashiCorp Configuration Language. It's very similar to JSON, but has additional capabilities built in. While JSON and YAML are better suited to describing data structures, HCL uses a syntax that is specifically designed for building structured configuration.

One of the main things we need to understand before moving forward with Terraform is what the above means.

HCL is a declarative language – this means that we define the result we expect to see, instead of telling the program how to achieve it step by step (which is imperative programming). So if we look at the example HCL config of an Azure Resource Group below, we see that we only need to provide specific values:

Image Credit: HashiCorp

When Terraform is used to deploy infrastructure, it creates a "state" file that records what has been deployed. So if you deploy with Terraform, you need to manage with Terraform also. Making changes to any of that infrastructure directly (for example, in the Portal) causes it to drift out of sync with the Terraform state, and can ultimately lead to losing your infrastructure when Terraform reconciles the difference.
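
To make that concrete, here's a minimal sketch of the core Terraform workflow from a shell (assuming the Terraform CLI is already installed and a configuration file exists in the current folder); it's these commands that create and update the state file:

  terraform init      # downloads the provider plugins (e.g. azurerm) defined in the config
  terraform plan      # shows what Terraform will create, change or destroy
  terraform apply     # deploys the resources and writes/updates the terraform.tfstate file
  terraform destroy   # tears the resources down and updates the state accordingly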

For Azure users, the latest version of Terraform is already built into the Azure Cloud Shell. To get Terraform working on your own machine, we need to follow these steps (a quick sanity check is shown after the list):

  • Go to Terraform.io and download the CLI.
  • Extract the file to a folder, and then create a System Environment Variable that points to it.
  • Open PowerShell and run terraform version to make sure it is installed.
  • Install the HashiCorp Terraform extension in VS Code.
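
Once that's done, a quick sanity check from any shell might look like this (just a sketch, nothing Azure-specific yet):

  terraform version   # confirms the binary is on the PATH and prints the installed version
  terraform -help     # lists the available Terraform commands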

Conclusion

So that's the basics of Terraform. In the next post, we'll run through the four steps above to install Terraform on our machine, show how to connect to Azure from VS Code, and then start looking at Terraform configuration files and providers. Until next time!

100 Days of Cloud – Day 26: Linux Cloud Engineer Bootcamp, Day 1

It's Day 26 of my 100 Days of Cloud journey, and today I'm taking Day 1 of the Cloudskills.io Linux Cloud Engineer Bootcamp.

This is being run over 4 Fridays by Mike Pfeiffer and the folks over at Cloudskills.io, and is focused on the following topics:

  • Scripting
  • Administration
  • Networking
  • Web Hosting
  • Containers

Now I must admit, I'm late getting to this one (sorry Mike….). The bootcamp livestream started on November 12th and continued last Friday (November 19th). Quick break for Thanksgiving, then back on December 3rd and 10th. However, you can sign up for this at any time, watch the lectures at your own pace, and get access to the Lab Exercises on demand at this link:

https://cloudskills.io/courses/linux

Week One focused on the steps to create an Ubuntu VM in Azure, installing a WebServer, and then scripting that installation into a file that can be stored on Blob Storage to make it reusable when deploying additional Linux VMs.

I’m not going to divulge too many details on the content, but there were some key takeaways for me.

SSH Key Pairs

When we created Windows VMs in previous posts, the only option available was to create the VM using a username and password for authentication.

With Linux VMs, we have a few options we can use for authentication:

  • Username/Password – we will not be allowed to use “root” as the username
  • SSH Public Key – this is the more secure method. This generates an SSH Public/Private Key Pair that can be used for authentication

Once the Key Pair is generated, you are prompted to download the Private Key as a .pem file.

The Public Key is stored in Azure, and the private key is downloaded and stored on your own machine. In order to connect to the machine, run the following command:

ssh -i <path to the .pem file> username@<ipaddress of the VM>

Obviously from a security perspective, this takes the username/password out of the authentication process and makes the machine less vulnerable to a brute force password attack.

You can also use existing keys or upload keys to Azure for use in the Key Pair process.
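
If you'd prefer to generate your own key pair locally and upload the public key to Azure, a minimal sketch looks like this (the key path and username below are just examples):

  # generate a 4096-bit RSA key pair; the .pub file is what gets uploaded to Azure
  ssh-keygen -t rsa -b 4096 -f ~/.ssh/azure_vm_key
  # once the VM is deployed with that public key, connect using the private key
  ssh -i ~/.ssh/azure_vm_key azureuser@<ipaddress of the VM>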

Reusable Scripts

So our VM is up and running, and let's say we want to install an application on it. On the Ubuntu command line, we would run:

sudo apt-get install <application-name>

That's fine if we need to do this for a single VM, but let's say we need to do it for multiple VMs. To handle this, we can create a script and place it in a Blob Storage container in the same Resource Group as our VM.

Then, the next time we deploy a VM that requires that application, we can call the script from the "Advanced" tab of the VM creation wizard and have the app installed automatically during the VM creation process.
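
As a rough sketch, the script itself could be as simple as the following (nginx is just a stand-in application here), and you could then stage it in a Blob container with the Azure CLI so the VM creation wizard can reference its URL; the storage account and container names are placeholders:

  #!/bin/bash
  # install-app.sh: the reusable install script stored in Blob Storage
  sudo apt-get update
  sudo apt-get install -y nginx

  # run this from your workstation to upload the script to a Blob container
  az storage blob upload --account-name <storage-account> --container-name scripts \
    --name install-app.sh --file ./install-app.sh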

Conclusion

That’s all for this post – I’ll update as I go through the remaining weeks of the Bootcamp, but to learn more and go through the full content of lectures and labs, sign up at the link above.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 25: Azure Storage

It's Day 25 of my 100 Days of Cloud journey, and today's post is about Azure Storage.

Azure Storage comes in a variety of Storage Types and Redundancy models. This post will provide a brief overview of these services, and I’ll attempt to give an understanding of what Storage Types you should be using for the different technologies in your Cloud Platforms.

Core Storage Services

Azure Storage offers five core services: Blobs, Files, Queues, Tables, and Disks. Let’s explore each and establish some common use cases.

Azure Blobs

This is Azure Storage for Binary Large Objects, used for unstructured data such as logs, images, audio, and video. Typical scenarios for using Azure Blob Storage are:

  • Web Hosting Data
  • Streaming Audio and Video
  • Log Files
  • Backup Data
  • Data Analytics

You can access Azure Blob Storage using tools like Azure Import/Export, PowerShell modules, application connections, and the Azure Storage REST API.

Blob Storage can also be directly accessed over HTTPS using https://<storage-account>.blob.core.windows.net
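
As a quick sketch with the Azure CLI (account and container names are placeholders), you could create a container for log files and list its contents like this:

  # add --auth-mode login or an --account-key to authenticate against the data plane
  az storage container create --account-name <storage-account> --name logs
  az storage blob list --account-name <storage-account> --container-name logs --output table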

Azure Files

Azure Files is effectively a traditional SMB File Server in the Cloud. Think of it like a Network Attached Storage (NAS) Device hosted in the Cloud that is highly available from anywhere in the world.

Shares from Azure Files can be mounted directly on any Windows, Linux or macOS device using the SMB protocol. You can also cache an Azure File Share locally on a Windows Server in your environment – only locally accessed files are cached, and changes are then written back to the parent Azure Files Share in the Cloud.

Azure Files can also be directly accessed over HTTPS using https://<storage-account>.file.core.windows.net
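
For example, on a Linux machine with the cifs-utils package installed, mounting a share over SMB 3.0 might look something like this (storage account, share name and account key are placeholders):

  sudo mkdir -p /mnt/myshare
  sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /mnt/myshare \
    -o vers=3.0,username=<storage-account>,password=<storage-account-key>,serverino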

Azure Queues

Azure Queues are used for asynchronous messaging between application components, which is especially useful when decoupling those components (ex. microservices) while retaining communication between them. Another benefit is that these messages are easily accessible via HTTP and HTTPS.
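
A small sketch with the Azure CLI (names and the message content are placeholders): one component drops a message onto a queue and another picks it up later:

  az storage queue create --account-name <storage-account> --name orders
  az storage message put --account-name <storage-account> --queue-name orders --content "process-order-42"
  az storage message get --account-name <storage-account> --queue-name orders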

Azure Tables

Azure Table Storage stores non-relational structured data (NoSQL) in a schema-less way. Because there is no schema, it's easy to adapt your data structure as your application's requirements change.

Azure Disks

Alongside Blobs, Azure Disks are the most common Azure Storage type. If you are using Virtual Machines, then you are most likely using Azure Disks.

Storage Tiers

Azure offers three Storage Tiers that you can apply to your Storage Accounts, which allows you to store data in Azure in the most cost-effective manner (a short CLI example follows the list):

  • Hot tier – An online tier optimized for storing data that is accessed or modified frequently. The Hot tier has the highest storage costs, but the lowest access costs.
  • Cool tier – An online tier optimized for storing data that is infrequently accessed or modified. Data in the Cool tier should be stored for a minimum of 30 days. The Cool tier has lower storage costs and higher access costs compared to the Hot tier.
  • Archive tier – An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the Archive tier should be stored for a minimum of 180 days.
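
For example, moving an existing blob down to the Cool tier with the Azure CLI looks something like this (all names are placeholders):

  az storage blob set-tier --account-name <storage-account> --container-name logs \
    --name app.log --tier Cool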

Storage Account Types

For all of the core storage services listed above, you will more than likely use a Standard General-Purpose v2 Storage Account, as this is the standard storage account type for those scenarios. There are, however, two others you need to be aware of:

  • Premium block blobs – used in scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency
  • Premium file shares – Used in Azure Files where you require both SMB and NFS capabilities. The default Storage Account only provides SMB.

Storage Redundancy

Finally, let's look at Storage Redundancy. There are four options (an example of choosing one at account creation time follows the list):

  • LRS (Locally-redundant Storage) – this creates 3 copies of your data in a single Data Centre location. This is the least expensive option but is not recommended for high availability. LRS is supported for all Storage Account Types.

Credit – Microsoft

  • ZRS (Zone-redundant Storage) – this creates 3 copies of your data across 3 different Availability Zones (or physically distanced locations) in your region. ZRS is supported for all Storage Account Types.

Credit – Microsoft

  • GRS (Geo-redundant Storage) – this creates 3 copies of your data in a single Data Centre in your region (using LRS). It then replicates the data to a secondary location in a different region and performs another LRS copy in that location. GRS is supported for Standard General-Purpose v2 Storage Accounts only.

Credit – Microsoft

  • GZRS (Geo-zone-redundant Storage) – this creates 3 copies of your data across 3 different Availability Zones (or physically distanced locations) in your region. It then replicates the data to a secondary location in a different region and performs another LRS copy in that location. GZRS is supported for Standard General-Purpose v2 Storage Accounts only.

Credit – Microsoft
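
The redundancy option is chosen via the SKU when you create the storage account. A quick sketch with the Azure CLI (names and region are placeholders):

  az storage account create --name <storage-account> --resource-group <resource-group> \
    --location northeurope --sku Standard_GZRS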

Conclusion

As you can see, Storage Accounts are not just about storing files – there are several considerations you need to make around cost, tiers and redundancy when planning Storage for your Azure Application or Service.

Hope you enjoyed this post, until next time!

100 Days of Cloud — Day 24: Azure Import/Export and Azure Data Box

It's Day 24 of my 100 Days of Cloud journey, and today's post is about the Azure Import/Export and Azure Data Box solutions.

In the previous post on Azure Backup, I briefly talked about offline seeding of Azure Data where network, cost and time constraints were a factor, and how Azure Import/Export and Azure Data Box could be used. Today, I’ll take a closer look at these solutions and what the use cases and benefits are.

Azure Import/Export

Azure Import/Export service is used to import large amounts of data to Azure Blob storage or Azure Files by shipping your own disk drives to an Azure datacenter. You can also use the service to export Azure Blob storage data to disk drives and ship these to your On-Premises location.

You should use Azure Import/Export when the network bandwidth available to you is not sufficient to upload/download the data directly. You should use the service in the following scenarios:

  • Migration of data to Azure
  • Distributing content to multiple sites
  • Backup of On-Premise Data to Azure
  • Data Recovery from Azure Storage to On-Premise

The Import Workflow is as follows:

  • Determine data to be imported, the number of drives you need, and the destination blob location for your data in Azure storage.
  • Use the WAImportExport tool to copy data to disk drives. Encrypt the disk drives with BitLocker.
  • Create an import job in your target storage account in Azure portal. Upload the drive journal files.
  • Provide the return address and carrier account number for shipping the drives back to you.
  • Ship the disk drives to the shipping address provided during job creation.
  • Update the delivery tracking number in the import job details and submit the import job.
  • The drives are received at the Azure data center and the data is copied to your destination blob location.
  • The drives are shipped using your carrier account to the return address provided in the import job.
Credit: Microsoft

The Export workflow works in a similar way:

  • Determine the data to be exported, number of drives you need, source blobs or container paths of your data in Blob storage.
  • Create an export job in your source storage account in Azure portal.
  • Specify source blobs or container paths for the data to be exported.
  • Provide the return address and carrier account number for shipping the drives back to you.
  • Ship the disk drives to the shipping address provided during job creation.
  • Update the delivery tracking number in the export job details and submit the export job.
  • The drives are received and processed at the Azure data center.
  • The drives are encrypted with BitLocker and the keys are available via the Azure portal.
  • The drives are shipped using your carrier account to the return address provided in the export job.
Credit: Microsoft

A full description of the Azure Import/Export Service can be found here.

Azure Data Box

Azure Data Box is similar to Azure Import/Export, however the key difference is that Microsoft will send you a proprietary storage device. These come in 3 sizes:

  • Data Box Disk — 5 x 8TB SSDs, so 40TB in total
  • Data Box — 100TB NAS Device
  • Data Box Heavy — Up to 1PB (Petabyte) of Data

Once these devices are used, they are then sent back to Microsoft and imported into your target.

You can use Azure Data Box for the following import scenarios:

  • Migration of Data to Azure
  • Initial Bulk Transfer (for example backup data)
  • Periodic Data Upload (where a large amount of data is periodically generated On-Premise)

You can use Azure Data Box for the following export scenarios:

  • Taking a copy of Azure Data back to On-Premise
  • Security Requirements, for example if the data cannot be held in Azure due to legal requirements
  • Migration from Azure to On-Premise or another Cloud provider

The Import flow works as follows:

  • Create an order in the Azure portal, provide shipping information, and the destination Azure storage account for your data. If the device is available, Azure prepares and ships the device with a shipment tracking ID.
  • Once the device is delivered, power on and connect to the device. Configure the device network and mount shares on the host computer from where you want to copy the data.
  • Copy data to Data Box shares.
  • Return the device back to the Azure Datacenter.
  • Data is automatically copied from the device to Azure. The device disks are then securely erased as per NIST guidelines.

The Export flow is similar to the above, and works as follows:

  • Create an export order in the Azure portal, provide shipping information, and the source Azure storage account for your data. If the device is available, Azure prepares a device. Data is copied from your Azure Storage account to the Data Box. Once the data copy is complete, Microsoft ships the device with a shipment tracking ID.
  • Once the device is delivered, power on and connect to the device. Configure the device network and mount shares on the host computer to which you want to copy the data.
  • Copy data from Data Box shares to the on-premises data servers.
  • Ship the device back to the Azure datacenter.
  • The device disks are securely erased as per NIST guidelines.

A full description of Azure Data Box can be found here.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 23: Azure Backup

It's Day 23 of my 100 Days of Cloud journey, and today's post is about Azure Backup.

In previous posts, I talked about Azure Migrate and how it can help your cloud migration journey with assessments and cost calculators, and also about how Azure Site Recovery can act as a full Business Continuity and Disaster recovery solution.

Azure Backup provides a secure and cost-effective solution to back up and recover both On-Premise and Azure cloud-based Virtual Machines, Files, Folders, Databases and even Azure Blobs and Managed Disks.

Azure Backup — Vaults

Azure Backups are stored in Vaults. There are 2 types of Vault:

  • Recovery Services Vault
  • Backup Vault

If we go to the Azure Portal and browse to the Backup Center, we can see “Vaults” listed in the menu:

When we click the "+ Vault" button, we get a screen that shows the differences between a Recovery Services vault and a Backup vault. It's useful to make a note of these as it will help with planning your Backup Strategy.

So what is a vault? In simple terms, it's an Azure Storage entity that's used to hold data. And in much the same way as other Azure Storage Services, a vault has the following features (a short CLI sketch of creating one follows the list):

  • You can monitor backed up items in the Vault
  • You can manage Vault access with Azure RBAC

  • You can specify how the vault is replicated for redundancy. By default, Recovery Services vaults use Geo-redundant storage (GRS), however you can select Locally-redundant storage (LRS) or Zone-redundant storage (ZRS) depending on your requirements.
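
As a quick sketch (names and region are placeholders, and as far as I can tell the redundancy setting is exposed via the backup-properties command), creating a Recovery Services vault and switching it to LRS with the Azure CLI looks something like this:

  az backup vault create --name <vault-name> --resource-group <resource-group> --location northeurope
  az backup vault backup-properties set --name <vault-name> --resource-group <resource-group> \
    --backup-storage-redundancy LocallyRedundant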

Azure Backup — Supported Scenarios

There are a number of scenarios you can use Azure Backup with:

  • You can back up on-premises Windows machines directly to Azure by installing the Azure Backup Microsoft Azure Recovery Services (MARS) agent on each machine (Physical or Virtual). Linux machines aren’t supported.
  • You can back up on-premises machines to a backup server — either System Center Data Protection Manager (DPM) or Microsoft Azure Backup Server (MABS). You can then back up the backup server to a Recovery Services vault in Azure. This is useful in scenarios where you need to keep longer term backups for multiple months/years in line with your organization’s data retention requirements.
  • You can back up Azure VMs directly. Azure Backup installs a backup extension to the Azure VM agent that’s running on the VM. This extension backs up the entire VM.
  • You can back up specific files and folders on the Azure VM by running the MARS agent.
  • You can back up Azure VMs to the MABS that’s running in Azure, and you can then back up the MABS to a Recovery Services vault.

The diagram below shows a high level overview of Azure Backup:

Credit: Microsoft Docs

Azure Backup — Policy

Like the majority of backup systems, Azure Backup relies on Policies. There are a few important points that you need to remember when using Backup Policies (a CLI sketch of enabling protection with a policy follows the list):

  • A backup policy is created per vault.
  • A policy consists of 2 components, Schedule and Retention.
  • Schedule is when to take a backup, and can be defined as daily or weekly.
  • Retention is how long to keep a backup, and this can be defined as daily, weekly, monthly or yearly.
  • Monthly and yearly retention is referred to as Long Term Retention (LTR)
  • If you change the retention period of an existing backup policy, the new retention will be applied to all of the older recovery points also.
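
As a sketch of how a policy gets used, enabling backup for an existing Azure VM against the vault's built-in default policy might look like this with the Azure CLI (names are placeholders):

  az backup protection enable-for-vm --resource-group <resource-group> --vault-name <vault-name> \
    --vm <vm-name> --policy-name DefaultPolicy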

Azure Backup — Offline Backup

The options we’ve discussed so far are applicable to online backup using either the MARS Backup Agent or local Agents using Azure Backup Server or DPM. These options are only really useful if you have a reasonably small amount of data to back up, and also have the bandwidth to transfer these backups to Azure.

However in some cases, you may have terabytes of data to transfer and it would not be possible from a network bandwidth perspective to do this. This is where Offline backup can help. This is offered in 2 modes:

  • Azure Data Box — this is where Microsoft sends you a proprietary, tamper resistant Data Box where you can seed your data and send this back to Microsoft for upload in the Data Center to your Azure Subscription.
  • Azure Import/Export Service — This is where you provision temporary storage known as the staging location and use prebuilt utilities to format and copy the backup data onto customer-owned disks.

Azure Backup — Benefits

Azure Backup delivers these key benefits:

  • Offload of On-Premise backup workloads and long term data retention to Azure Storage
  • Scale easily using Azure Storage
  • Unlimited Data Transfer, and no data charges for inbound/outbound data
  • Centralized Monitoring and Management
  • Short and Long-term data retention
  • Multiple redundancy options using LRS, GRS or ZRS
  • App Consistent backups
  • Offline seeding for larger amounts of data

Conclusion

Azure Backup offers a secure and cost-effective way to back up both On-Premise and Cloud resources. As usual, the full overview of the Azure Backup offering can be found on Microsoft Docs.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 21: Azure Mask

It’s Day 21 of my 100 Days of Cloud journey!

Today’s post is another short one, and is a quick overview of how Azure Mask can help you exclude any sensitive content or subscription information when creating content and using screenshots or video footage from the Azure Portal.

Azure Mask was created by Pluralsight author and Developer Advocate Brian Clark, and is a browser extension that can be toggled on/off when creating content from the Azure Portal. It’s currently supported on Edge or Chrome browsers.

What it does is blur or hide any personal information, such as email addresses and subscription information, once the toggle is set to on. This is great for content creation, as it means screenshots don't need to be edited or blurred out afterwards.

You can find the source code and instructions for enabling Azure Mask in your browser here on Brian’s Github Page:

https://github.com/clarkio/azure-mask

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 19: Azure Site Recovery

It's Day 19 of my 100 Days of Cloud journey, and today's post is about Azure Site Recovery.

Credit — Microsoft IT Ops Talk Community/Sarah Lean

We saw in the previous post how easy it is to migrate infrastructure to the cloud from on-premise Virtual and Physical environments using the steps and features that Azure Migrate can offer.

Azure Site Recovery

Azure Site Recovery is another “replication” offering in the Azure Service portfolio that provides replication of machines from any of these primary locations to Azure:

  • On-Premise Physical
  • On-Premise Virtual (Hyper-V or VMware)
  • Azure VMs from a different region
  • AWS Windows instances

However, while Azure Migrate is a “cloud migration” service offering, Azure Site Recovery is a Business Continuity/Disaster Recovery offering.

How does it work?

The steps for setting up Azure Site Recovery are broadly similar for all scenarios. You need to complete the following steps:

  • Create an Azure Storage account, which will store images of the replicated VMs.
  • Create a Recovery Services Vault, which will store metadata for VM and Replication configurations.
  • Create an Azure Network, which VMs will use when replicated to Azure.
  • Ensure that your on-premise workloads have access to replicate to Azure and any ports are open on your firewall.
  • Install the Azure Site Recovery Provider on any source VMs you wish to replicate to Azure.
  • Create a Replication Policy, which includes replication frequency and recovery point retention.
  • Once Replication is running, run a Test Failover using an isolated network to ensure your replicated VMs are in a consistent state.
  • Once the Test Failover is completed, run a full failover to Azure. This will make Azure your Primary Site, and will replicate any changes back to your on-premise VMs. Once this is completed, you can fail back to make the on-premise VMs your Primary Site again, and the data will be consistent! Pretty Cool!!

Service Features

Azure Site Recovery includes the following features to help ensure your workloads keep running in the event of outages:

  • Replication from On-Premise to Azure, Azure to Azure, and AWS to Azure.
  • Workload Replication from supported Azure, AWS or On-Premise VM or Physical Server.
  • RPO and RTO targets in line with your business and audit requirements.
  • Flexible failover and Non-Disruptive testing.

Conclusion

Azure Site Recovery can play a key role in the Business Continuity and Disaster Recovery strategy for your business. A full overview of Azure Site Recovery can be found here, and for a full demo of the service, contact your local Microsoft Partner.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 18: Azure Migrate

It's Day 18 of my 100 Days of Cloud journey. Yesterday, I attended an Azure Immersion Workshop on Azure Migrate hosted by Insight UK.

Azure Immersion Workshops are a great way to get hands-on access to Azure technologies in a Sandbox environment with full instructor support. You require a business email address, or an address linked to an active Azure Subscription, in order to avail of the full lab experience.

If you simply type “Azure Immersion Workshop” into a Google Search, you will find a list of Microsoft Partners that are running Azure Immersion Workshops in the coming months. This is a full day course, but is well worthwhile if you don’t have or don’t want to use your own on-premise resources to learn the technology.

Azure Migrate Overview

Azure Migrate is an Azure technology which automates planning and migration of your on-premise servers from Hyper-V, VMware or Physical Server environments.

Azure Migrate is broken into the following sections:

  • Discover — this uses a lightweight VM appliance that can be run on a VM or a Physical server in your on-premise infrastructure. This appliance runs the discovery of VMs and Physical Servers in your environment. Discovery is agentless, so nothing is installed on servers in your environment.
  • Assessment — once the discovery is completed, you can then run an assessment based on it. The assessment will make recommendations for the target Azure VM size based on what was discovered. This is useful to know if you have over- or under-provisioned resources in your environment, as the assessment will size them correctly based on demand and workloads. Because of this, it is better to run the discovery during normal business hours to get a full overview of your environment.
  • Migrate — this is the point where you choose the VMs you want to migrate. The first step is to replicate them to Azure in a Test Migration to ensure everything works as expected. Azure will also flag any issues that have been detected on VMs so that you can remediate them. Once this is completed and you are happy that everything is in place, you can run a full VM Migration.
  • Containerize — You can also use Azure Migrate to containerize Java web apps and ASP.NET apps that are running on premise and migrate these to either Azure Kubernetes Service (AKS) or Azure App Services.

Azure Migrate also integrates with a number of ISVs (Independent Software Vendors) such as Carbonite, Lakeside, UnifyCloud and Zerto to offer additional support for assessment and migration of servers.

There are 2 great benefits to using Azure Migrate.

  • Firstly, the first 30 days of Discovery are free, so you have time to plan multiple different scenarios in your migration journey.
  • Secondly, this also integrates with the TCO (Total Cost of Ownership) Calculator to give a full cost breakdown of what hosting in Azure will cost to your organization.

The full description of Azure Migrate and all of its offerings and services can be found here at Microsoft Docs. And as I said above, the best way to get the full experience is to find a local partner that's running an Azure Immersion Workshop in your area or time zone.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 16: Azure Firewall

It's Day 16 of 100 Days of Cloud, and today's post is about Azure Firewall.

Firewall …. we’ve covered this before haven’t we? Well, yes in a way. In a previous post, I talked about Network Security Groups and how they can be used to filter traffic in and out of a Subnet or a Network Interface in a Virtual Network.

Azure Firewall v NSG

Azure Firewall is a Microsoft-managed Network Virtual Appliance (NVA). This appliance allows you to centrally create, enforce and monitor network security policies across Azure subscriptions and virtual networks (vNets). An NSG is a layer 3–4 Azure service to control network traffic to and from a vNet.

Unlike Azure Firewall, an NSG can only be associated with subnets or network interfaces in the same subscription and region as the NSG itself. Azure Firewall can control a much broader range of network traffic: it can filter and analyze L3-L4 traffic, as well as L7 application traffic.

Azure Firewall sits at the virtual network level and manages traffic going into and out of the vNet. NSGs are then deployed at the subnet or network interface level and manage traffic between subnets and virtual machines.

Azure Firewall Features

Azure Firewall includes the following features:

  • Built-in high availability — so no more need for load balancers.
  • Availability Zones — Azure Firewall can span availability zones for greater availability.
  • Unrestricted cloud scalability — Azure Firewall can scale to accommodate changing traffic flows.
  • Application FQDN filtering rules — You can limit outbound HTTP/S traffic or Azure SQL traffic to a specified list of fully qualified domain names (FQDN) including wild cards
  • Network traffic filtering rules — Allow/Deny Rules
  • FQDN tags — makes it easy for you to allow well-known Azure service network traffic through your firewall.
  • Service tags — groups of IP Addresses.
  • Threat intelligence — can identify malicious IP Addresses or Domains.
  • Outbound SNAT support — All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP.
  • Inbound DNAT support — Inbound Internet network traffic to your firewall public IP address is translated (Destination Network Address Translation) and filtered to the private IP addresses on your virtual networks.
  • Multiple public IP addresses — You can associate up to 250 Public IPs with your Azure Firewall.
  • Azure Monitor logging — All events are integrated with Azure Monitor.
  • Forced tunneling — route all Internet traffic to a designated next hop.
  • Web categories — lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others.
  • Certifications — PCI/SOC/ISO Compliant.

Azure Firewall and NSG in Conjunction

NSGs and Azure Firewall work very well together and are not mutually exclusive or redundant. You typically want to use NSGs when you are protecting network traffic in or out of a subnet. An example would be a subnet that contains VMs that require RDP access (TCP over 3389) from a Jumpbox. Azure Firewall is the solution for filtering traffic to a VNet from the outside. For this reason, it should be deployed in its own VNet and isolated from other resources. Azure Firewall is a highly available solution that automatically scales based on its workload. Therefore, it should be in a /26 size subnet to ensure there's space for additional VMs that are created when it's scaled out.
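
Taking the RDP-from-a-Jumpbox example above, a minimal NSG rule sketched with the Azure CLI might look like this (resource names and the Jumpbox address range are placeholders):

  az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> \
    --name Allow-RDP-From-Jumpbox --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes 10.0.1.0/24 --destination-port-ranges 3389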

A scenario to use both would be a Hub-spoke VNet environment with incoming traffic from the outside. Consider the following diagram:

The above model has Azure Firewall in the Hub VNet, which has peered connections to two Spoke VNets. The Spoke VNets are not directly connected; instead, their subnets contain a User Defined Route (UDR) that points to the Azure Firewall, which serves as a gateway device. Azure Firewall is also public facing and is responsible for protecting inbound and outbound traffic to the VNets. This is where features like Application rules, SNAT and DNAT come in handy.
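
The UDR piece of that diagram can be sketched with the Azure CLI as follows (the route table name and the firewall's private IP are placeholders), sending all outbound traffic from a spoke subnet through the firewall:

  az network route-table route create --resource-group <resource-group> --route-table-name <spoke-route-table> \
    --name default-via-firewall --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address <firewall-private-ip>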

Conclusion

If you have a simple environment, then NSGs should be sufficient for network protection. However for large scale Production environments, Azure Firewall provides a far greater scale of protection.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 15: Azure Key Vault

It's Day 15, and today's post is on Azure Key Vault.

Let's think about the word "vault" and what we would use a vault for. The image that springs to mind immediately for me is the vaults at Gringotts Wizarding Bank from the Harry Potter movies — deep down, difficult to access, protected by a dragon etc…

This is essentially what a vault is — a place to store items that you want to keep safe and hide from the wider world. This is no different in the world of Cloud Computing. In yesterday's post on System Managed Identities, we saw how Azure can eliminate the need for passwords embedded in code by using identities in conjunction with Azure Active Directory authentication.

Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys.

What is Azure Key Vault?

In a typical IT environment, secrets, passwords, certificates, API tokens and keys are used across multiple platforms, including source code, configuration files, digital formats and even pieces of paper (sad but true ☹️).

An Azure Key Vault integrates with other Azure services and resources like SQL Servers, Virtual Machines, Web Applications, Storage Accounts etc. It is available on a per-region basis, which means that a key vault must be deployed in the same Azure region as the services and resources it is intended to be used with.

As an example, an Azure Key Vault must be available in the same region as an Azure virtual machine so that it can be used for storing the Content Encryption Key (CEK) for Azure Disk Encryption.

Unlike other Azure resources, where the data is stored in general storage, an Azure Key Vault is backed by a Hardware Security Module (HSM).

How Azure Key Vault works

When using Key Vault, application developers no longer need to store security information in their application. Not having to store security information in applications eliminates the need to make this information part of the code. For example, an application may need to connect to a database. Instead of storing the connection string in the app’s code, you can store it securely in Key Vault.

Your applications can securely access the information they need by using URIs. These URIs allow the applications to retrieve specific versions of a secret. There is no need to write custom code to protect any of the secret information stored in Key Vault.
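
As a minimal sketch with the Azure CLI (the vault, resource group and secret names are placeholders), storing and retrieving a connection string looks like this:

  az keyvault create --name <vault-name> --resource-group <resource-group> --location northeurope
  az keyvault secret set --vault-name <vault-name> --name SqlConnectionString --value "<connection-string>"
  az keyvault secret show --vault-name <vault-name> --name SqlConnectionString --query value -o tsv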

Authentication is done via Azure Active Directory. Authorization may be done via Azure role-based access control (Azure RBAC) or a Key Vault access policy. Azure RBAC can be used both for management of the vaults and for access to data stored in a vault, while a Key Vault access policy can only be used when attempting to access data stored in a vault.

Each Vault has a number of roles, but the most important ones are:

  • Vault Owner — this role controls who can access the vault and what permissions they have (read/create/update/delete keys)
  • Vault consumer: A vault consumer can perform actions on the assets inside the key vault when the vault owner grants the consumer access. The available actions depend on the permissions granted.
  • Managed HSM Administrators: Users who are assigned the Administrator role have complete control over a Managed HSM pool. They can create more role assignments to delegate controlled access to other users.
  • Managed HSM Crypto Officer/User: Built-in roles that are usually assigned to users or service principals that will perform cryptographic operations using keys in Managed HSM. Crypto User can create new keys, but cannot delete keys.
  • Managed HSM Crypto Service Encryption User: Built-in role that is usually assigned to a service account's managed service identity (e.g. a Storage Account) for encryption of data at rest with a customer-managed key.

The steps to authenticate against a Key Vault are as follows (a small CLI sketch follows the list):

  1. The application which needs authentication is registered with Azure Active Directory as a Service Principal.
  2. The Key Vault Owner/Administrator then creates a Key Vault and attaches ACLs (Access Control Lists) to the vault so that the application can access it.
  3. The application initiates the connection and authenticates itself against Azure Active Directory to obtain a token.
  4. The application then presents this token to the Key Vault to get access.
  5. The Vault validates the token and grants access to the application based on successful token verification.
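
Step 2 of that flow, granting the registered application access to the vault, can be sketched with the Azure CLI like this (the vault name and the application's client ID are placeholders, and this uses an access policy rather than Azure RBAC):

  az keyvault set-policy --name <vault-name> --spn <application-client-id> \
    --secret-permissions get list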

Conclusion

Azure Key Vault streamlines the secret, key, and certificate management process and enables you to maintain strict control over secrets/keys that access and encrypt your data.

You can check out this Azure QuickStart Template which automatically creates a Key Vault.

Hope you enjoyed this post, until next time!!