100 Days of Cloud — Day 8: Azure Resource Locks and Azure Policy

It’s Day 8 of 100 Days of Cloud, and it’s time to briefly revisit Azure Resource Locks and talk about Azure Policy.

A quick summary of Resource Locks

During Day 3, when I was creating Resource Groups, I demonstrated Locks and how they can prevent users from deleting either the Resource Group or the resources it contains, even when those users have been assigned an RBAC role that allows them to manage the Resource Group. When a Lock exists, a failure message is shown on screen when the user tries to delete the resource.

I won’t delve too deep into Locks, as they are a simple and quick tool that can prevent changes to your environment in an instant. They can be applied at different tiers of your environment: depending on your governance model, you can apply them at the subscription, resource group or resource level. Locks have only two basic operations:

  • CanNotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.
  • ReadOnly means authorized users can read a resource, but they can’t delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

Only the “Owner” and “User Access Administrator” roles have the ability to apply and remove locks. It’s generally recommended that all resources in a Production environment have a “CanNotDelete” lock applied.
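To make the lock lifecycle concrete, here’s a quick PowerShell sketch (the lock and resource group names here are my own examples, not from a real environment):

```powershell
# Apply a CanNotDelete lock at the resource group level
New-AzResourceLock -LockName Prod_Lock -LockLevel CanNotDelete -ResourceGroupName Prod_RG

# List the locks currently applied to the resource group
Get-AzResourceLock -ResourceGroupName Prod_RG

# Remove the lock (requires the Owner or User Access Administrator role)
Remove-AzResourceLock -LockName Prod_Lock -ResourceGroupName Prod_RG
```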

A quick summary of Azure Policy

Policies define what you can and cannot do with your environment. They can be used individually or in conjunction with Locks to ensure granular control. Let’s take a look at some examples of where Policies can be applied:

  • If you want to ensure resources are deployed only in a specific region.
  • If you want to use only specific Virtual Machine or Storage SKUs.
  • If you want to block any SQL installations.

One thing to be aware of when it comes to Policies — let’s say you have already deployed a Virtual Machine. It’s in the East US region in a resource group called “Prod_VMs”, size is Standard_D4_v3, and is running Windows Server 2019 Datacenter.

You then create a series of Azure Policies which state:

  • You can only deploy resources in the North Europe region
  • You can only deploy VMs of size Standard_D2_V2

You apply these policies to your “Prod_VMs” Resource Group. So what happens to the existing VM you have deployed in East US?

Nothing, that’s what. Azure Policy isn’t an enforcement tool in the sense that it will shut down existing infrastructure that is not compliant with the policies. It will audit and report the non-compliance, but that’s about it. However, it will prevent any further VMs from being deployed that do not meet the policies that have been applied.
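For a sense of what a Policy looks like under the hood, here is a simplified sketch of the rule behind an “Allowed locations” style policy (illustrative only, not the exact built-in definition):

```json
{
  "if": {
    "not": {
      "field": "location",
      "in": "[parameters('listOfAllowedLocations')]"
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Any new resource whose location is not in the allowed list is denied at deployment time; existing resources are only flagged as non-compliant.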

Let’s jump into the portal and take a look at how this works.

The Basics of Policies

I go into the Portal and type in “Policy” in the search bar.

The first thing I see when I go into the “Policy” window is that I already have non-compliant resources!

This is the default Policy assigned to my subscription, so if I click into the “ASC Default” Policy name, it will show me what’s being reported.

As I can see, this is the default set of policies that are applied to the Subscription. Note that underneath the “ASC Default…” heading at the top of the page, it has the term “Initiative Compliance”. In Azure, a set of policies that are grouped together and applied to a single target is called an Initiative.

If I click into the first listed group, “Enable threat detection for Azure resources”, it will give me an overview of the actions required to remediate the non-compliance, a list of policies that we can apply at different levels if required, and the overall resource compliance.

In effect, Azure Policy is constantly auditing and evaluating our environment to enforce organizational standards and assess compliance.
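If you prefer PowerShell to the Portal, the same compliance data can be queried via the Az.PolicyInsights module (a sketch; the resource group name is my own example):

```powershell
# List non-compliant resources in a resource group
Get-AzPolicyState -ResourceGroupName Prod_VMs -Filter "ComplianceState eq 'NonCompliant'"
```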

Applying a Policy

Firstly, I need to set up a Resource Group where I can apply the lock and policy. Following the naming scheme from my earlier example, I run the following in PowerShell:

New-AzResourceGroup -Name Prod_VMs -Location northeurope

Next, I need to apply a lock at resource group level. So I run:

New-AzResourceLock -LockName Prod_VMs_Lock -LockLevel CanNotDelete -ResourceGroupName Prod_VMs

Now, back in the Portal on the Policy Homepage, I click on “Assignments”. This brings me into the list of Assigned Policies. At the top of the page, I click on “Assign policy”:

This opens the “Assign Policy” window. On the “Basics” page, the first thing I need to do is click the ellipsis on the Scope option. This allows me to select where the Policy needs to be assigned. I select my “Prod_VMs” Resource Group and click “Select”:

Next, I click the ellipsis for “Policy Definition”. In this window, I type “locations” in the search bar. This gives me the “Allowed locations” Policy definition. I select this and move on to the “Parameters” tab:

Based on the Policy Definition I just selected, this gives me a list of parameters to choose from. I can either search or hit the drop-down to select from a list. I can select as many as I want here, but I’ll just pick “North Europe” and move on to the Remediation tab:

The Remediation tab shows that the assignment will only take effect on newly-created resources, not on existing ones. To create a remediation task, I would need to have “deployIfNotExists” policies already in place that would automatically fix the non-compliance. However, note that these can be powerful and therefore quite dangerous if not set up correctly. I would also need a managed identity to do this. There is a detailed article here on Microsoft Docs that gives full details of how Remediation works. I’m going to move on here to the “Non-Compliance messages” tab:

Non-compliance messages gives me a field where I can add custom messages to say why a resource is non-compliant:

And with that, I’ll click the “Review + Create” tab to review my options, and then click “Create”:

And it creates. Note that it says it will take 30 minutes to take effect:

So while I was waiting, I created one more Policy to enforce a specific Virtual Machine SKU size, and applied that to my Resource Group:

So now it’s time to see if the policies have taken effect. I’ll run the following command to try to create a Virtual Machine in the East US region:

New-AzVM -Name ProdVM1 -ResourceGroupName Prod_VMs -Location eastus -Verbose

And it fails, because the “Allowed Locations” policy doesn’t allow it!

OK great, so now I’ll try to create the VM in the northeurope region:

New-AzVM -Name ProdVM1 -ResourceGroupName Prod_VMs -Location northeurope -Verbose

And this also fails! Because I didn’t specify a size value, it’s trying to create the VM using the default VM size SKU, which I have disallowed via the other policy:

OK, so now let’s prove that I can deploy the VM with the correct location and VM size SKU specified. I’ll run:

New-AzVM -Name ProdVM1 -ResourceGroupName Prod_VMs -Location northeurope -Size Standard_B2s -Verbose

And this time, it’s successful!

And that is a very quick overview of how Azure Policy can help with compliance, governance and costs in your Azure Subscription.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 7: Deploy an Azure Virtual Machine using ARM Templates

It’s Day 7 of my 100 Days of Cloud journey, and in today’s post, I’m going to do the same thing I did for the last two days … except again, differently.

That’s right dear reader, today is the third and final post in what I’ve decided to call the “Azure Virtual Machine Deployment Trilogy”. It has a nice ring to it, doesn’t it? I wonder who’ll play me in the lead role once the movie rights get picked up ……

Anyway, back to reality. After the last two days, we’re now fully up to speed on what we’re trying to achieve here. Day 5 dealt with deploying via the Azure Portal, and Day 6 dealt with PowerShell deployment and the various parameters, inputs and, in some cases, “gotchas” that we needed to be aware of.

Some Checks and Housekeeping

Before I go any further: if you recall, I was going to check Cost Management to see if my deployments were generating any costs (Actual or Forecast) or triggering the alerts that were set up in Day 2. And yes, they are:

So I’m going to delete the entire Resource Group from the default “MyPSTestVM” deployment as I don’t want to keep paying for it:

Remove-AzResourceGroup -Name MyPSTestVM

This takes a few minutes, so don’t worry if the PowerShell prompt just sits there. Don’t forget, deleting the resource group deletes all of the resources it contains. After a few minutes, I’ll check the Activity Log in the Portal, and it confirms that the RG and all resources have been deleted:

ARM Templates

Today, we’re moving on to Azure Resource Manager (or ARM) Templates. In Day 1, I prepared the environment by installing Visual Studio Code and installing the ARM Tools. So let’s dive in and see what we can do with ARM templates.

I open Visual Studio Code on my machine, and as you can see I’m in the 100DaysRepo that I created and cloned from GitHub. I’m going to create a new folder by clicking the “New Folder” button, and call it “Day7-AzureARMTemplates”:

Next, I’m going to create a new file within that folder, and call it “vmdeploy.json”

OK, so that’s my file. Now, I notice that the filename has brackets before it and is highlighted in green:

This is because I have the ARM Extension installed, and Visual Studio Code is recognizing this file as a JSON file.

A quick word on JSON — ARM Templates are JavaScript Object Notation (JSON) files that define the infrastructure and configuration for your project or deployment as code. They use declarative syntax, and contain the following sections:

  • Parameters — Provide values during deployment that allow the same template to be used with different environments.
  • Variables — Define values that are reused in your templates. They can be constructed from parameter values.
  • User-defined functions — Create customized functions that simplify your template.
  • Resources — Specify the resources to deploy.
  • Outputs — Return values from the deployed resources.
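Put together, an empty template containing those sections looks like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}
```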

For a more detailed explanation, I’ll refer you to Microsoft’s official documentation found here.

Back to Visual Studio Code — in my new file, I type the word “arm”, and this is what happens:

I select the first option for “Azure Resource Manager (ARM) Template” and press enter. And this is what I get:

This is a standard JSON template that contains the sections that I described above. But there’s nothing about Azure yet, so how can I deploy a Virtual Machine (or anything, for that matter)? In the “Resources” section, I’ll do a “CRLF” or “Return” to create a new line, and type the word “arm” again:

Oh hello …. As I can see if I scroll through this list, there are multiple different deployments that can be used with ARM templates. However, I’ll scroll down and find “arm-vm-windows” and select that:

And when that happens, I get confronted with a wall of text that seems to go on and on and on …. :

170 lines in total! Can’t I just go back to the Portal or that one-line PowerShell command, please?

No, I can’t. Because if I look really closely, it’s all broken up into sections that are easy to read. See the first three sections? Storage, Public IP Addresses and Network Security Groups. I’ve done that before! And each of those sections contains things I recognize from the Portal and PowerShell posts, like the Storage SKU and open port ranges.

Further down, I see options for Virtual Network, Subnet, Network Interface, and Virtual Machine. All again with the same sets of parameters and options that were available in both previous posts:

The other thing I’m seeing is that the word “windowsVM1” is highlighted across the entire JSON. That’s because this is the default name. If I wanted to, I could deploy this as-is into my Azure Subscription, and it would do exactly the same as the short “New-AzVM” command did — it will create any resources that do not already exist using this naming convention. I’ll leave this in place for the purposes of the demo, but I would advise you to change it to your own company or personal naming convention prior to deployment.

The only thing that this won’t create is a Resource Group — I need to use either PowerShell or Azure CLI to do this. I’ll run the PowerShell command to do this:

New-AzResourceGroup -Name MyExampleARMRG -Location northeurope

One final thing I’m going to do is create a parameter for the Admin Password for the VM so that it prompts us during deployment to enter it. So back at the top of the template in the “parameters” section, I’ll do a return and type “arm”:

This gives me a dropdown to create a parameter. When I click on that, it gives me the layout to create the parameter:

I’ll change the parameter name to “adminPassword”, and also change the description:

Now, I need to scroll down and find “adminPassword” in the resources section; it’s under the Virtual Machine resource.

I’m going to delete the default “adminPassword” value and reference my parameter instead. To do this, I create square brackets [] and enter the letter “p”. As I can see, this gives me options to pick from, and “parameters” is one of them:

Once this is in, I then put in regular brackets (), and this gives me a list of the parameters I have defined. And this gives me the adminPassword parameter to accept here:
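As a sketch of the end result, the parameter definition in the “parameters” section looks something like this (the description text is my own wording):

```json
{
  "adminPassword": {
    "type": "securestring",
    "metadata": {
      "description": "The administrator password for the Virtual Machine."
    }
  }
}
```

…and the value of “adminPassword” in the Virtual Machine resource becomes "[parameters('adminPassword')]".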

OK, so that’s my template file ready, isn’t it? Well, it is, and I could deploy it like this, but unless I go hunting in the portal for the Public IP Address, I’m not going to know where to connect to. To get around this, I’ll go down to the bottom of the template and create an Output to print the Public IP Address to the screen. I do a return and type “arm”:

And this gives me new JSON format.

I’ll change “output1” to “PublicIPAddress”. In the “value” field, I start with square brackets [] and enter “r” and select “reference”:

Now, I need some regular brackets () and this gives me another list — I need to select “resourceId” from this list:

I now need another set of regular brackets (), and this now gives me a list where I can select the PublicIPAddress Type (or any other resource type if I wish):

The “resourceId” combines the resource type and name. So I need to put a comma after the type (still within the regular brackets), and this finds the name of my PublicIPAddress from the JSON:

And that’s it! I can save this and go to PowerShell to deploy. Or I can do this from within Visual Studio Code by clicking the “Terminal” menu and selecting “New Terminal”. This opens a PowerShell terminal at the bottom of the screen, and it defaults to the folder location of our repository:

OK, so let’s deploy. Instead of using “New-AzVM” as I did in the PowerShell post, I need to use the “New-AzResourceGroupDeployment” command, as I’m deploying directly to an existing Resource Group:

New-AzResourceGroupDeployment -ResourceGroupName MyExampleARMRG -TemplateFile .\Day7-AzureARMTemplates\vmdeploy.json

And this prompts me for the “adminPassword” which I enter.

And it fails! Ah, so I do need to change the “WindowsVM1” defaults.

So I go back into the file and replace “WindowsVM1” with something else. Then I’ll re-run the deployment:

New-AzResourceGroupDeployment -ResourceGroupName MyExampleARMRG -TemplateFile .\Day7-AzureARMTemplates\vmdeploy.json

Looks better, no errors returned as of yet ….

OK, so this time I have another failure, but it’s associated with the output that I created:

But the deployment did work as I can see the resources available in the Portal:

Hmmm, I need to work out what’s gone wrong here. It’s not exactly gone “wrong”, as the deployment was successful, but I’d like to have this working fully without errors…

So, having scrabbled around with this for a while, I finally worked out what was wrong (and it’s the lack of programmer in me that was at fault).

Firstly, the “outputs” section — I can’t just call this “PublicIPAddress” as that’s what the error was saying. So I tried calling it “publicIP” and this seems to have worked. From the official Microsoft documentation, the “outputs” name value needs to be a valid JavaScript identifier. I don’t have a list of accepted values and can’t find it anywhere, so if anyone does come across it, please drop a link into a comment and I’ll update the post!

Secondly, for the resource identifier, I needed to output a string value, as this is what the output was expecting. So I needed to add “.dnsSettings.fqdn” to the end of the resourceId.

So the entire outputs section now looks like this:
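As a sketch of what that corrected section contains (“myVMPublicIP” is a placeholder for whatever the Public IP resource is named in your template):

```json
"outputs": {
  "publicIP": {
    "type": "string",
    "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', 'myVMPublicIP')).dnsSettings.fqdn]"
  }
}
```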

And when I run the deployment again, everything now works and it outputs the DNSName of my VM to connect to.

So lets try it:

And it connects!

In this instance, I’m just going to delete all resources immediately by running

Remove-AzResourceGroup -Name MyExampleARMRG

The final thing I need to do in Visual Studio Code is commit and push my changes to GitHub (see Day 1 for details). It’s important to do this with all of the projects you undertake, as it means your work is reusable.

And that’s how to deploy an Azure VM using ARM templates! ARM templates are powerful and can be used not just for Virtual Machines, but for any type of Azure deployment.

I hope you enjoyed this post (despite the hiccups!). Until next time!!

100 Days of Cloud — Day 6: Deploy a Virtual Machine using Azure PowerShell

It’s Day 6 of my 100 Days of Cloud journey, and today I’m going to do the same thing as I did yesterday ….. except differently.

In Day 5, I deployed a Virtual Machine into Azure using the Portal, and tried to explain the process along the way. Today I’m going to follow the same process using PowerShell, meaning what took endless clicks (and scrolling on your part dear reader) can now be done with just one command.

That’s right Mr. Wonka, just one command will do it all. There’s a lot of options we need to be aware of though, so let’s jump into PowerShell and take a look.

I’ll open PowerShell and connect to my Azure Account using

Connect-AzAccount

The command I need to run here is

New-AzVM 

Similarly, I can run

Get-AzVM

to see what VMs are in our Subscription, so let’s run that first:

That shows me the VM I created yesterday in the Portal. This gives me a clue about parameters I need to use in order to create my new VM. As with all PowerShell Modules, there is extensive help available, so I’ll run the

get-help New-AzVM

command to see the options and parameters I can use:

Good lord …. That’s a lot of information. And when it flashes up on the screen like that in a wall of text, it can seem a bit intimidating. However, there are some key things to look for here.

Firstly, under “SYNTAX”, I can see the list of parameters we can feed into “New-AzVM”. These will look familiar, as they’re exactly what I used to create the VM in the Portal. If I take the first two parameters alone, they are already familiar to me:

I know what my Resource Group name is, as I created it using PowerShell during Day 3. I also know that my location is “northeurope”, as if I scroll up I can see it in the output of “Get-AzVM”.

So effectively, the command I would use here for this portion of the PowerShell Command is this:

Secondly, if I look at the “REMARKS” section, there are other commands I can run, one of which will give me examples! Let’s run that and see what it returns:

I can see there are a number of examples there, but the first one is just asking for a Name parameter for the VM and my credentials? Surely it can’t be that easy? Let’s try. I’m going to run this with the “-Verbose” switch so we can get some output:

New-AzVM -Name MYPSTestVM -Credential (Get-Credential) -Verbose

I get a prompt for credentials — what I need to provide here is the Local Admin credentials I want to use on the VM. I provide this and click OK:

Hold on a second, I didn’t tell it to do any of this! But eventually after a few minutes it finishes:

The reason this created without any parameters or input is that Azure uses a set of default values when it doesn’t receive any. Based on the name of the VM, Azure creates a Resource Group, Virtual Network, Disk, Network Interface and Network Security Group. These are all created in the East US location, using the default VM size profile, the default OS (which is Windows Server 2016 Datacenter), and a Premium Locally Redundant SSD. If I run

Get-AzVM

And check the Portal, there’s the new Resource Group:

And if I click into that, there’s all my resources:

This is why it’s important to understand the parameters and provide them correctly so that the Virtual Machine we create is the one we want. If I check that machine in the Azure Pricing Calculator, along with a Windows License and the SSD, it’s going to cost me nearly €160 per month.

Thanks for the defaults Microsoft, but as this is for the purposes of testing, I’m going to dele ……

No, wait, I’m not. I’m going to leave it running for a few hours to see if it generates some data in Cost Management and some Budget Alerts (I’ll report back on this in the next post!).

So moving forward, what I now want to do is create my new VM in the correct Resource Group using the options that I want. If I look at the “-Verbose” output that I received when I created the first VM, it gives me a guide as to what parameters and options I want to have. So, I want to specify the following options:

  • Resource Group — MyExamplePowerShellRG2
  • Location — northeurope
  • Name — MyPowerShellVM (This is the name of the VM in the portal, not the local Computer name)
  • AddressPrefix — “10.30.0.0/16” (This is for the Virtual network)
  • SubnetName — PSVMSubnet
  • SubnetAddressPrefix — “10.30.30.0/24”
  • PublicIPAddressName — PSPublicIP
  • DomainNameLabel — psvm001md (This is the DNS name label for the Public IP address)
  • SecurityGroupName — PSVMNSG
  • OpenPorts — 3389 (We can change this in the NSG later, but this is for RDP Connectivity, or SSH for Linux)
  • ImageName — “Win2016DataCenter”
  • Size — “Standard_B2s” (This is the VM Size. A full list can be found here)
  • OSDiskDeleteOption — Delete (This specifies whether the OS Disk is deleted or detached and retained when the VM is deleted. Options are Delete, Detach)

If I reference the “get-help” for the command again, or indeed the official Microsoft Docs article for the “New-AzVM” command, we can see these are only a few of the options available, but they are probably the most common ones. So with those options, my PowerShell command should look like this:

New-AzVM -ResourceGroupName MyExamplePowerShellRG2 -Location northeurope -Name MyPowerShellVM -AddressPrefix "10.30.0.0/16" -SubnetName PSVMSubnet -SubnetAddressPrefix "10.30.30.0/24" -PublicIPAddressName PSPublicIP -DomainNameLabel psvm001md -SecurityGroupName PSVMNSG -OpenPorts 3389 -ImageName Win2016Datacenter -Size Standard_B2s -OsDiskDeleteOption Delete -Credential (Get-Credential) -Verbose

Again, I’m adding “-Verbose” and pressing Enter:

And it’s done. So let’s run “Get-AzVM” to see if it created successfully:

Yep, all looking good there. So let’s check the Portal now:

All looking good there! So now let’s get a connection to my VM:

And I’m in. Like yesterday, I’ll turn off RDP Access via the NSG just for extra security.

Now, let’s delete the VM. I’ll run

get-help Remove-AzVM

to check the options:

OK, so it seems all I need is the Resource Group and the VM Name. From the output of “Get-AzVM” above, the name of my VM is “MyPowerShellVM”. So I’ll run:

Remove-AzVM -ResourceGroupName MyExamplePowerShellRG2 -Name MyPowerShellVM

So I say yes to the prompt. And it fails!

And if we read the error, it’s because there’s a lock on the Resource Group which we put there on Day 3 when we created it! So we need to remove that first by running:

Remove-AzResourceLock -LockName LockPSGroup -ResourceGroupName MyExamplePowerShellRG2

And now, let’s try running our “Remove-AzVM” command again:

And this time it worked successfully. Let’s check the Portal:

And I see that both the Virtual Machine and the Disk have been deleted.

And that’s the PowerShell Method! For the next day, I’m going to delve into ARM Templates as promised to show how they can automate this process even further.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 5: Deploy a Virtual Machine using the Azure Portal

Welcome to Day 5 of 100 Days of Cloud. It’s the one you’ve all been waiting for! Yes, that’s right, today I’M ACTUALLY GOING TO DEPLOY SOMETHING!!! The prep work was important, but now it’s time to get down to the nuts and bolts and deploy some resources.

What I’ve tried to do up to now is show the multiple different ways we can manage Azure (Portal/PowerShell/CLI). I haven’t touched on ARM Templates yet, but that’s coming on Day 6. Today, I’m going to deploy a virtual machine in the Azure Portal only. While the Portal is the more long-winded, clickety-click way of doing things, it’s also the most informative way to start deploying in Azure, as you go through the steps one by one and can see exactly what’s happening from both an options and a cost perspective.

I hope to be able to demonstrate over the coming days how to deploy a number of Virtual Machines to the Resource Groups that we created in Day 3, using all of the methods available; to see how this affects both the Actual and Forecasted budgets that were set up in Day 2 (and hopefully get some alerts to generate); and then to apply some of the different types of RBAC assignments that we talked about in Day 4, to see how the different assignments affect the rights users have over the Resource Groups and Resources.

Let’s jump into the Azure Portal and get started. Search for “Virtual Machines” in the search bar, and click to open the Virtual Machines window:

I can see that there are no Virtual Machines active in my Subscription. Click on the “Create” button and select “Virtual Machine” to get started:

Basics

This opens the “Basics” tab. Here under “Project Details”, I’ll select the Resource Group where I want the Virtual Machine to be created in. I created the “MyExamplePortalRG” during Day 2.

Next in the “Instance Details” section, I need to provide the following:

  • Virtual Machine Name — this can be anything, but needs to be both unique in your subscription and easily identifiable.
  • Region — The VM is automatically placed in North Europe as this is where the Resource Group is located.
  • Availability Options — there are 3 options we can select here:
    1. No infrastructure redundancy required — this will be a standalone VM with no redundancy.
    2. Availability Zone — you can have replicated copies of the VM running in different datacenters within the same Azure region.
    3. Availability Set — this is a logical grouping of 2 or more VMs that allows Azure to understand how your application is built for redundancy. The VMs are isolated across different Fault Domains (Racks/Data Center/Storage/Network) and Update Domains (updates are staged to occur at different times across the set, thus ensuring the availability of the Application/VM).

  • Image — this is the Operating System I want to run. I need to have a license for the OS I want to use, if required.
  • Azure Spot Instance — these are unused Azure instances that can be used at a discounted rate. Not suitable for Production workloads.
  • Size — this is the size of the VM. As we can see, there are multiple options with different price ranges available (and even more if we click “See all sizes”), all based on the amount of vCPUs and Memory we need.

By the end of this section, I now have these options:

In the next sections, we need to fill in the following:

  • Administrator Username and Password
  • Inbound ports that we want to have open to the machine — in an ideal world, we would have an Azure Bastion host to use as a jump box to connect to your VMs. For now, we’ll leave RDP (3389) open.
  • Licensing — this asks you to confirm that you have a license for your OS of choice.

Now click on the “Next: Disks >” button and we’ll move on to Disks!

Disks

There are just 2 options on the Disks Page:

  • OS Disk Type — this can be an HDD or SSD, and can also be Locally or Zone redundant. We can see all types and explanations when we hit the drop-down:
  • Encryption Type — Azure encrypts all Storage by default. We can choose the default Azure-provided encryption, our own customer-managed key, or a mixture of both:

We can also add additional disks if required for our VM. Now that we have the required options selected, we can move on to Networking:

Networking

On the Networking tab, we define the Virtual Networks, subnets and IPs for use with our Virtual Machine. As we can see, Azure will create a new Virtual Network, Subnet and Public IP Address based on the name of our resource group. We can also define Public Inbound Ports, Network Security Groups and Load Balancing. I’m going to deal with these in future posts on Networking and Security.

For now, we’re just going to take the defaults here and move on to Management.

Management

Management allows us to configure management and monitoring options for our VM, such as:

  • Boot Diagnostics — used to diagnose boot failures
  • Identity — used to grant a system-managed identity to the VM, which can then be given RBAC permissions to other Azure resources (for example, credentials stored in Azure Key Vault)
  • Azure AD Authentication — use Azure AD credentials to log in, enforce MFA, or enable RBAC roles
  • Auto-Shutdown — configures the VM to automatically shut down daily.
  • Patching

As this is a test VM, I’m going to keep the default options (which don’t use any of these features apart from Patching), but it’s useful to know these options are available.

Advanced

The Advanced tab allows us to specify scripts, agents or applications to add to the VM automatically. We can also specify the VM Generation to use:

Tags

The Tags tab allows us to apply Name/Value Pair Tags to specific resources within the VM. This may be used for the likes of billing to apply different tags to different groups of resources. For example, I could add tags to the following:

  • Tag VMNetGRP could be applied to Network Interface, Public IP Address and Virtual Network
  • Tag VMTagGRP could be applied to Auto-Shutdown and Virtual machine
  • Tag VMStrGRP could be applied to Disk and Storage Account
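On the resource itself, a tag is just a name/value pair in JSON. As an illustrative sketch (the tag name “BillingGroup” is my own example, paired with one of the tag values above):

```json
"tags": {
  "BillingGroup": "VMNetGRP"
}
```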

We’re finished with all of our options here, so now let’s move on to Review + Create.

Review + Create

This page gives me a final list of all the options I selected throughout the process, including pricing. Note that at the bottom of the page, it also gives us the option to “Download a template for automation”. This is important and is absolutely something you should do, as it does exactly what it says: provides us with a JSON template for automating the deployment of this exact same type of VM, if we wish to deploy another one:

Let’s click Create and see what happens:

We can see that Azure creates each component of the Virtual Machine (Storage, Virtual Network, IP, and the VM itself) one by one. We’ll eventually get an alert to say when it’s completed. If you had forgotten to download the automation template on the previous page, no problem — click on the Template menu which gives us the template in JSON format for download:

We’ll look more at JSON in the next post, where we’ll use Visual Studio Code to view this and make changes if required.

The Finished Product!

Finally, an alert to say the deployment succeeded:

Now, let’s jump into our Resource Group, and we can see all of our resources are available:

We click into the VM to look at the settings:

Let’s click on the “Connect” button — this will give us the option to use RDP, SSH or Bastion. I’ll choose RDP:

And this will give us a link to download an RDP File:

Click Connect:

I get prompted for credentials:

And I’m in!!

Final thing to do here — because this is a Test-VM, I’m going to disable RDP for Security reasons. So in the portal, I go back into the Virtual Machine. On the Menu at the side, I click “Networking”. This brings me into the Network Security Group for the VM:

I can see that RDP is set to Allow, so I’m going to click on “Allow” in the Action Column, and set the RDP policy to “Deny”:
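The same change can be scripted, which is handy if you lock down test VMs regularly. A sketch in PowerShell, assuming the default portal rule name “RDP” and hypothetical NSG/resource group names:

```powershell
# Fetch the NSG attached to the VM's network interface (placeholder names)
$nsg = Get-AzNetworkSecurityGroup -Name "Test-VM-nsg" -ResourceGroupName "Test-RG"

# Rewrite the RDP rule with Access set to Deny, keeping the usual defaults
Set-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "RDP" `
    -Access Deny -Protocol Tcp -Direction Inbound -Priority 300 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389 |
    Set-AzNetworkSecurityGroup
```

Piping to Set-AzNetworkSecurityGroup is what actually pushes the modified rule back to Azure.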

Now, I’ll try to connect to the VM again:

Exactly what I wanted to see.

Conclusion

And that is how we create a Virtual Machine in the Azure Portal. Next time, I’m going to do this all again, but this time using Azure PowerShell, and the JSON template that I downloaded. I’ll also come back into Cost Management to see how this VM affects my Budget.

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 4: Azure Active Directory and RBAC Roles

In today’s post on my 100 days of Cloud journey, I’m going to talk about Azure Active Directory and RBAC Roles.

Anyone who has followed me so far on the journey is probably asking why I haven’t deployed or built anything yet. Isn’t that what the whole “100 Days” challenge is all about? I’m getting there, I will be deploying something in the coming days! But it’s important to first prepare our environment for use and understand the different layers and base-level requirements before we build anything.

In Day 2, I created an Azure Account on the Portal and set up Cost Management and Budget Alerts. Day 3 talked about Resource Groups where we can group resources that are relevant to Project Teams, Departments or Locations.

Azure Active Directory

Every Azure environment is built on top of an Azure Active Directory (Azure AD) tenant. Azure AD is Microsoft’s cloud-based identity and access management service, and when you sign up for an Azure Subscription, you automatically get an Azure Active Directory instance.

Now let’s stop here for a minute, because something sounds familiar here …. Active Directory! I know all about that! Domains, Hierarchy, GPOs!

No. It’s not the same as the Active Directory that on-premises admins would be used to managing. Active Directory has a Hierarchical Structure, where you can create OUs relevant to Locations or Job Roles, add users, groups or computers to those OUs, and manage those elements using Permissions Assignments or Group Policy Objects.

Azure Active Directory still has Users, Groups, Authentication and Authorization, but it uses the concept of Role-Based Access Control (RBAC). There is a large number of predefined RBAC Roles, and I’ll try to explain how those work in the last section.

A quick note first — even though Active Directory and Azure Active Directory are distinctly different from an architecture perspective, they can talk to each other. The best real-world example of this is a Hybrid Office 365 deployment, where you use Azure AD Connect to sync on-premises users to Azure Active Directory for use with services such as Exchange Online, SharePoint and Teams.

Use Case

RBAC allows you to grant users the exact roles they need to do their jobs while striking the right balance between autonomy and corporate governance.

Let’s get our use case for this — like Day 3, I want to run a Virtual Machine and it needs to run in a specific region (eg East US). I would create a Resource Group in East US, then create the resources required for the Virtual Machine (Storage Account, Virtual Network, and the Virtual Machine itself) within that Resource Group. However, the machine is running an Application, so it needs both a Website at the front end, and an SQL Database at the back end to store the application data.

As you can see, we have a large number of responsibilities and different technologies in play here. What RBAC will allow us to do with this scenario is as follows:

  • Allow one user to manage virtual machines and another user to manage virtual networks in the entire subscription.
  • Allow a database administrator group to manage SQL databases in the resource group only.
  • Allow a user to manage all resources in a resource group, such as virtual machines, websites, and subnets.
  • Allow an application to access all resources in a resource group.
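To make that concrete, here is a hedged sketch of how the first two of those could look with New-AzRoleAssignment. The user, group name and subscription ID are placeholders, not values from this walkthrough:

```powershell
# One user managing VMs across the entire subscription (placeholder values)
New-AzRoleAssignment -SignInName "vmadmin@contoso.com" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -Scope "/subscriptions/<subscription-id>"

# A DBA group limited to SQL databases in a single resource group
New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "DBA-Team").Id `
    -RoleDefinitionName "SQL DB Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/Prod_VMs"
```

Note how the -Scope string is what narrows the second assignment down to one Resource Group.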

How RBAC works

RBAC works using the concept of Role Assignments, which control how permissions are enforced. A Role Assignment is made up of 3 elements:

  1. Security Principal (Who) — a user, group or application that you want to grant access to.
  2. Role Definition (What) — this is a collection of permissions. Roles can be high level (eg Owner) or specific (eg Virtual Machine Contributor).

Azure includes several built-in roles that you can use. The following lists four fundamental built-in roles:

  • Owner — Has full access to all resources, including the right to delegate access to others.
  • Contributor — Can create and manage all types of Azure resources, but can’t grant access to others.
  • Reader — Can view existing Azure resources.
  • User Access Administrator — Lets you manage user access to Azure resources.

If the built-in roles don’t meet the specific needs of your organization, you can create your own custom roles. A full list of built-in roles can be found here.

  3. Scope (Where) — Scope is where the access applies. You can apply the scope at multiple levels:

  • Management Group
  • Subscription
  • Resource Group
  • Resource

When you grant access at a parent scope, this is inherited by all child scopes.

RBAC is an allow-based model — this means that if you apply “Virtual Machine Reader” at Subscription level and “Virtual Machine Contributor” at Resource Group level, you will have Contributor rights at the Resource Group level, because your effective permissions are the sum of all your role assignments.

Conclusion

And that, my friends, is a very high-level overview of how Azure Active Directory and RBAC work. In the coming days, I’ll be using RBAC to control access to the items I deploy along the cloud journey. Yes, we’re close to deploying something! Maybe next time — come back and find out!

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 3: Azure Resource Groups

In today’s post on my 100 days of Cloud journey, I’m going to talk about Resource Groups in Azure and why they’re important.

Resource Groups are containers that hold related resources in an Azure Solution. Let’s say I want to run a Virtual Machine and it needs to run in a specific region (eg East US). I would create a Resource Group in East US, then create the resources required for the Virtual Machine (Storage Account, Virtual Network, and the Virtual Machine itself) within that Resource Group. Grouping like this can also be used to identify resources for a department or location for Billing Purposes.

I touched briefly on Resource Groups in yesterday’s post on Costs Management when I talked about assigning a budget to a resource group.

Sample Use Case

Let’s use an example to make this a bit clearer from a Cost Management perspective — your company has an Azure Subscription and has allocated a budget of $50,000 a month. So, they set up a Budget Alert for that total against the Subscription. The company has 4 Departments — Accounts, Manufacturing, R&D and Sales.

The R&D Section is allocated its own Resources, and therefore gets its own R&D Resource Group with resources such as Virtual Machines within it. A budget of $10,000 is allocated, and a Budget Alert Condition is set up in Azure against the R&D Resource Group.

You can set up Resource Groups in 3 ways — the Azure Portal, Azure PowerShell and Azure CLI.

Azure Portal Method

In the Azure Portal, search for Resource Groups in the Search Bar:

Click “Create”

On the “Basics” tab, select the Subscription you wish to place the Resource Group in, along with a Name for the Resource Group and the Region it will live in:

Click on the “Tags” tab — you can choose to create Tags on your resources. These will show up on your Billing Invoice, meaning you can have multiple departments in the same Resource Group and bill them separately. We’ll leave this blank for now and discuss Tags in a future post. Click “Review and Create”:

And after less than a minute, the Resource Group shows as created:

What we’ll see in later posts is that when we create Azure resources such as Virtual Networks and Machines, we have to place them in a Resource Group during creation.

And that’s the Portal way to do it! Onwards to PowerShell!

Azure PowerShell Method

In Day 2, we installed the Azure PowerShell Modules. So we need to run our

Connect-AzAccount 

command again to load the login prompt and sign into our Azure Account:

We can see we’re getting a warning about MFA (we’ll deal with that in a later post on Security), but this has connected us to the Tenant:

If we run

Get-AzResourceGroup

it shows all of the existing Resource groups in our subscription, including the one we created above in the Portal:

To create a Resource Group, it’s one command:

New-AzResourceGroup -Name MyExamplePowerShellRG -Location northeurope

And if we run the “Get” command again, we can see it there:

And also visible in the Portal:

To delete a Resource Group using PowerShell, it’s simply

Remove-AzResourceGroup -Name MyExamplePowerShellRG

And again we’ll run “Get” to confirm it’s gone:

Pretty slick, isn’t it? This needs to come with a warning though — deleting a Resource Group also deletes all resources contained within the Group. Permanently.

Luckily, we can apply “Locks” to Resource Groups or Resources to prevent them being deleted. We can specify 2 levels of locks:

  • CanNotDelete — means users can read and modify the resource, but cannot delete it
  • ReadOnly — means users can read the resource, but cannot modify or delete it

Locks can be used in conjunction with Azure RBAC (Role-Based Access Control) — again, we’ll cover that in a future post on Security.

So, let’s create another Resource Group, and if we run

Get-AzResourceLock

we see there are no locks associated:

And let’s run the following command to create the lock:

New-AzResourceLock -LockName LockPSGroup -LockLevel CanNotDelete -ResourceGroupName MyExamplePowerShellRG2

If we run

Get-AzResourceLock

again, it now shows us the lock we just created:

So now, let’s try to delete the Resource Group. I’ll run

Remove-AzResourceGroup -Name MyExamplePowerShellRG2

And it fails because there is a lock on the resource group, which is exactly what we wanted to see!
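If you genuinely do want to delete a locked group, the lock has to be removed first. A quick sketch, using the names from above:

```powershell
# Remove the lock we created; -Force skips the confirmation prompt
Remove-AzResourceLock -LockName LockPSGroup `
    -ResourceGroupName MyExamplePowerShellRG2 -Force

# With the lock gone, the delete now succeeds
Remove-AzResourceGroup -Name MyExamplePowerShellRG2 -Force
```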

Azure CLI Method

Azure CLI is a cross platform tool that can be used on Windows, Linux or macOS Systems to connect to Azure and execute commands on Azure resources. The link below gives instructions on how to Install Azure CLI for your system of choice:

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

Once we have Azure CLI Installed, we run

az login

in PowerShell or Command Prompt. This will redirect us as above to a browser asking us to login to the Portal. Once this is done, it returns us to the PowerShell Window:

So, in short, similar results to above, but different commands. To list the Resource Groups, run

az group list

To create a Resource Group, run

az group create --name MyExampleCLIRG --location northeurope

To create a lock (using the same CanNotDelete level as before), it’s

az lock create --name LockCLIGroup --lock-type CanNotDelete --resource-group MyExampleCLIRG

And to delete a Resource Group (which should fail after creating the lock), the command is

az group delete --name MyExampleCLIRG

And as we can see it fails as expected.

Conclusion

As you noticed, I ran through the Azure CLI section quickly, as it uses different commands to achieve the same results as the PowerShell section. I haven’t used Azure CLI a lot, as (like most people from a Microsoft System Admin background) I’m traditionally more of a PowerShell person. But as we’re using Azure resources in later posts, I’ll try to use it more, as there will come a day when I’ll need it.

And that’s all for Day 3! Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 2: Azure Budgets and Cost Management

One of the most common concerns raised when any organization is planning a move to the Cloud is Cost. Unlike Microsoft 365 where you have set costs based on license consumption, there are a number of variables to be considered when moving to any Cloud Provider (be that Azure, AWS or others).

For example, let’s say we want to put a Virtual Machine in the Cloud. It sounds easy — if this was on-premises, you would provision storage on your SAN, assign CPU and Memory, assign an IP Address, and if required purchase a license for the OS and any additional software that will be running on the Virtual Machine.

All of the above still holds true when creating a Virtual Machine in the Cloud, but there are also other considerations, such as:

  • What Storage Tier will the VM run on (Standard HDD, Standard SSD, Premium SSD)?
  • How highly available does the VM need to be (Locally Redundant, Geographically Redundant)?
  • Does the VM need to be scalable based on demand/load (Auto Scaling/Scale Sets)?

In an on-premises environment, there needs to be an up-front investment (CAPEX) to make all of that feasible. When running with a Cloud Provider such as Azure, you pay on an on-demand model (OPEX). This is where costs can mount.

There are a number of ways to tackle this. The Azure TCO (Total Cost of Ownership) Calculator gives an estimate of costs of moving infrastructure to the cloud. The important word there is “estimate”.

So you’ve created your VM with all of the settings you need, and the TCO Calculator has given you an estimate of what the total “should” be on your monthly invoice. Azure Cost Management and Budgets can provide you with forecasting and alerts with real-time analysis of your projected monthly spend. That way, there are no nasty surprises when the invoice arrives!

First, let’s create our Azure Account. Browse to the Azure Portal to sign up. You get:

  • 12 months of free services
  • $200 credit for 30 days
  • 25 always-free services

Azure Portal Method

When your account is set up, go to https://portal.azure.com to sign in:

Once you’ve signed in, you can search for “Cost Management and Billing”

From the “Cost Management + Billing” page, select “Cost Management” from the menu:

This brings us into the Cost Management Page for our Azure Subscription:

One important thing to note here before we go any further: we can see at the top of the screen that the “Scope” for Cost Management is the Azure Subscription. In Azure, Budgets can be applied to the following:

  • Management Group — these allow you to manage multiple subscriptions
  • Subscriptions — Default
  • Resource Groups — Logical groups of related resources that are deployed together. These can be assigned to Departments or Geographical Locations

Also, we can create monthly, quarterly or annual budgets. For the purposes of this demo (and the entire 100 Days), I’ll be using Subscriptions with a monthly budget.

Click on the “Budgets” menu option, and then click “Add”:

This brings us into the “Create Budget” menu. Fill in the required details and set a Budget Amount — I’m going to set €50 as my monthly budget:

Next, we need to set up Alert Conditions and email recipients. In Alert Conditions, we can see from the “Type” field that we can choose either Actual or Forecasted:

  • Actual Alerts are generated when the monthly spend reaches the alert condition.
  • Forecasted Alerts are generated in advance, when Azure calculates that you are likely to exceed the alert condition based on the services you are using.

Once you have your Alert Conditions configured, add one or more Alert Recipients who will receive alerts based on your conditions. Then click “Create”:

And now we see our budget was created successfully!

So, that’s the Azure Portal way to do it. There are 2 other ways — the first is using Azure PowerShell.

Azure PowerShell Method

First, we need to open Windows PowerShell and install the Azure module. To do this, run:

Install-Module -Name Az

This will install all packages and modules we require to manage Azure from PowerShell.

We can then run the following commands to create our Budget:

Connect-AzAccount

will prompt us to log on to our subscription:

Once we are logged in, this will return details of our Subscription:

Run

Get-AzContext

to check what level we are at in the subscription:

Now, we can run the following command to create a new budget:

New-AzConsumptionBudget -Amount 100 -Name TestPSBudget -Category Cost -StartDate 2021-09-17 -TimeGrain Monthly -EndDate 2023-09-17 -ContactEmail durkanm@gmail.com -NotificationKey Key1 -NotificationThreshold 0.8 -NotificationEnabled

But it throws an error! Why?

After a bit of digging, it turns out that you can only set a budget using PowerShell if your subscription is part of an Enterprise Agreement. So I’m afraid that because I’m using a free account here, it’s not going to work ☹.

Full documentation can be found at this link:

https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/tutorial-acm-create-budgets#create-and-edit-budgets-with-powershell.

OK, so let’s move on to option 3, which is using Azure Resource Manager (ARM) Templates.

Azure Resource Manager (ARM) Templates Method

To do this, go to the following site:

https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/quick-create-budget-template?tabs=CLI

And click on the “Deploy to Azure” button:

This will re-direct us into the Azure Portal and allow us to fill in the fields required to create our Budget:
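Under the hood, that template deploys a Microsoft.Consumption/budgets resource. An abridged sketch of what such a resource looks like (the name, dates, amount and email address here are placeholders):

```json
{
  "type": "Microsoft.Consumption/budgets",
  "apiVersion": "2019-10-01",
  "name": "MyMonthlyBudget",
  "properties": {
    "category": "Cost",
    "amount": 50,
    "timeGrain": "Monthly",
    "timePeriod": {
      "startDate": "2021-10-01T00:00:00Z",
      "endDate": "2023-10-01T00:00:00Z"
    },
    "notifications": {
      "Actual_GreaterThan_80_Percent": {
        "enabled": true,
        "operator": "GreaterThan",
        "threshold": 80,
        "contactEmails": [ "alerts@example.com" ]
      }
    }
  }
}
```

The fields map directly onto what we filled in on the portal’s Create Budget page — amount, time grain, and the alert conditions and recipients.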

And that is how we create a Budget (3 ways) in Azure. See you on Day 3!!

Hope you enjoyed this post, until next time!!

100 Days of Cloud — Day 1: Preparing the Environment

Welcome to Day 1 of my 100 Days of Cloud Journey.

I’ve always believed that good preparation is the key to success, and Day 1 is going to be about setting up the environment for use.

I’ve decided to split my 100 days across 3 disciplines:

  • Azure, because it’s what I know
  • AWS, because it’s what I want to know more about
  • And the rest of it …. This could mean anything: GitOps, CI/CD, Python, Ansible, Terraform, and maybe even a bit of Google Cloud thrown in for good measure. There might even be some Office365 Stuff!

It’s not going to be an exact 3-way split across the disciplines, but let’s see how it goes.

Let’s start the prep. The goal of the 100 Days for me is to try and show how things can be done/created/deleted/modified etc. using both GUI and Command Line. For the former, we’ll be doing what it says on the tin and go clicking around the screen of whatever Cloud Portal we are using. For the latter, it’s going to be done in Visual Studio Code:

To download, we go to https://code.visualstudio.com/download , and choose to download the System Installer:

Once the download completes, run the installer (Select all options). Once it completes, launch Visual Studio Code:

After selecting what color theme you want, the first place to go is the Source Control button. This is important: we’re going to use Source Control to manage and track any changes we make, while also storing our code centrally in GitHub. You’ll need a GitHub account (or if you’re using Azure DevOps or AWS CodeCommit, you can use that instead). For the duration of the 100 Days, I’ll be using GitHub. Once your account is created, you can create a new repository (I’m calling mine 100DaysRepo).

So now, let’s click on the “install git” option. This will redirect us to https://git-scm.com, where we can download the Git installer. When running the setup, we can do defaults for everything EXCEPT this screen, where we say we want Git to use Visual Studio Code as its default editor:

Once the Git install is complete, close and re-open Visual Studio Code. Now we see we have the option to “Open Folder” or “Clone Repository”. Click the latter option, and at the top of the screen we are prompted to provide the URL of the GitHub Repository we just created. Enter the URL and click “Clone from GitHub”:

We get a prompt to say the extension wants to sign into GitHub — click “Allow”:

Clicking “Allow” redirects us to this page, click “Continue”:

This brings us to the logon prompt for GitHub:

This brings up “Success” message and an Auth Token:

Click on the “Signing in to github.com” message at the bottom of the screen, and then Paste the token from the screen above into the “Uri” at the top:

Once this is done, you will be prompted to select the local location to clone the Repository to. Once this has completed, click “Open Folder” and browse to the local location of the repository to open the repository in Visual Studio Code.

Now, let’s create a new file. It can be anything, we just want to test the commit and make sure it’s working. So let’s click on “File-New File”. Put some text in (it can be anything) and then save the file with whatever name you choose:

My file is now saved. And we can see that we now have an alert over in Source Control:

When we go to Source Control, we see the file is under “Changes”. Right-click on the file for options:

We can choose to do the following:

  • Discard Changes — reverts to the previous saved state
  • Stage Changes — saves a copy in preparation for commit

When we click “Stage Changes”, we can see the file moves from “Changes” to “Staged Changes”. If we click on the file, we can see the editor brings up the file in both states — before and after changes:

From here, click on the menu option (3 dots), and click “Commit”. We can also use the tick mark to Commit:

This then prompts to provide a commit message. Enter something relevant to the changes you’ve made here and hit enter:

And it fails!!!

OK, so we need to configure a name and email ID for Git. So open Git Bash and run the following:

git config --global user.name "your_name"
git config --global user.email "your_email_id"

So let’s try that again. We’ll commit first:

Looks better, so now we’ll do a Push:

And check to see if our file is in GitHub? Yes it is!

OK, so that’s our Repository done and Source Control and cloning with GitHub configured.
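For reference, the Source Control buttons we clicked map onto plain git commands. Run from the repository folder, the equivalent flow looks roughly like this (the file name is just an example, and I’m assuming the default branch is main):

```powershell
git add mytestfile.txt            # Stage Changes
git commit -m "Add test file"     # Commit, with the commit message inline
git push origin main              # Push the commit up to GitHub
```

Knowing the underlying commands is handy for when you’re working on a server with no GUI available.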

That’s the end of Day 1! As we progress along the journey and as we need them, I’ll add some Visual Studio Code extensions which will give us invaluable help along the journey. You can browse these by clicking on the “Extensions” button on the right:

Extensions add languages, tools and debuggers to VS Code which auto-recognize file types and code to enhance the experience.

Hope you enjoyed this post, until next time!!