Can we prevent Cloud Repatriation in Azure?

I’ve seen a lot of articles in the last few months talking about Cloud Repatriation, so I’ve decided to dig into it and find out:

  • What is Cloud Repatriation?
  • Why is it suddenly a topic?
  • Why it’s not as easy as it sounds
  • How did this happen in the first place?
  • Why it should never have become an issue

What is Cloud Repatriation?

Let’s start with the easy question and look at the definition. Repatriation is a term that has been around for a while, and in its simplest form is defined as:

“the process of returning a thing or a person to its place of origin”

So if we take that definition and apply it to technology, Cloud Repatriation is the process of companies moving their services out of Microsoft Azure (or other Public Cloud providers such as AWS or GCP) and relocating those services back to the On-Premises or Private Cloud environments that they originated from.

Why is it suddenly a topic?

One word – cost. The cost of running a Cloud Computing environment isn’t the same as running an On-Premises environment.

In an On-Premises environment, we work with predictable cost models for Equipment, Licensing and Staffing; the only real variable is power, which is in a constant state of flux. This leads us down the CapEx route, which forces companies into predicting the costs involved over a 3-5 year period. Finance people love this, as it means they can safely predict future costs and budgets and not have to worry about unexpected charges affecting their balance sheets.

That predictability is shakier than it looks, though. Unless your company is static with zero growth projections (and let’s be honest, no company is), it’s going to be difficult to predict costs over a period of years:

  • How many servers will you need to run your estate? If you order too few, you’ll need to buy more, and your CFO won’t like that after you told them these were the only costs needed for the next 3 years.
  • If you order too many, it’s overspend and equipment/license wastage, and you may not be approved for additional equipment in your next budget cycle (which leads you to run unsupported, out-of-warranty equipment that may cost even more to keep operational).
  • You may have also hired either too few staff (leading to overwork and burnout) or too many staff (which leads to idleness and ultimately reducing the workforce).

Cloud Computing environments use the OpEx model, which works differently: it’s Pay-As-You-Use. You use a Cloud Service and are billed monthly for the cost of using it. You have options to scale the service up or down as required, and you can also purchase Reserved Instances or Savings Plans over a 1- or 3-year term to reduce the costs and give Cloud Computing that “CapEx feel”.

The problem is that there is no clearly defined way of keeping those costs consistent, and Microsoft’s recent announcement of price increases for European customers (as much as 15%, depending on your currency) has CFOs and CTOs scrambling to look at alternatives to the Cloud.

And in some cases, the word “Repatriation” has been thrown about and the question being asked is “were we wrong to move to Azure/AWS/GCP, and should we look to move our servers and data back?”

Why it’s not as easy as it sounds

So you want to move back? It sounds easy, and if your Cloud Migration involved only a “Lift and Shift” or Rehost (where you migrated your VMs as-is and made no modifications to them), then fire away! Buy your equipment, install your favourite hypervisor and off you go! There are third-party products (such as Carbon) on the market that will bring your VMs back to either VMware or Hyper-V.

You can also migrate Office 365 mailboxes back to On-Premises Exchange Servers by setting up a migration batch in the EAC, so that process is simple.

But what if you did more than just Rehost? Let’s remind ourselves of the 5 R’s of Cloud Rationalization:

  • Rehost – also known as Lift and Shift.
  • Refactor – customizing your apps and infrastructure to align with the Cloud.
  • Rearchitect – divides your app into different parts or MicroServices.
  • Rebuild – completely rebuild and redevelop your app.
  • Replace – completely replace the app with a cloud-native SaaS application.

If you’ve done anything more than Rehost during your migration to Azure, then you have a bit of work on your hands getting it back. It’s not impossible by any means, but as with all Cloud Services, it’s a lot easier to get them into the Cloud than it is to get them out. If you’ve redesigned your app to make it Cloud-Native using any of the other 4 “R’s”, then you need to recreate that environment On-Premises, and that may not be easy and may cost a lot more than running the service in Azure in the first place!

How did this happen in the first place?

To work out why this should never have become an issue, we need to go back through the mists of time and work out why the migrations happened in the first place. It was most likely down to one or more of the following:

  • Running old and unsupported hardware.
  • Complex systems that were difficult to manage and maintain.
  • Enhanced Security.
  • Easier Scalability of services.

And if you moved to Azure, it’s likely that you used either:

  • Azure Site Recovery (and were using Azure as a DR platform to initially test how your VMs would work).
  • Azure Migrate (where you ran a discovery assessment on the load of your VMs over a period of time up to 30 days, and used that assessment as a means of sizing your target Azure VMs).

The original version of Azure Migrate only supported migration of VMware VM workloads to Azure. The new version (released in November 2019) included Database and Web Server migration features, and Application Discovery.

In all likelihood, some companies went down the same route as the initial Office 365 migrations (where they only migrated Email and never used any of the other underlying services included in their licenses), and in doing their Cloud Migrations to Azure decided to effectively “Rehost-only” and not use the additional benefits that were available. So instead of running Web Servers or Applications as part of an Azure App Service, they may have been left running on VMs with underlying Web or App Services.

Another good example here is the Finance or Warehouse Management Application that ran on a VM and also required a dedicated SQL backend (that also ran on a VM). Instead of refactoring that into an App Service or a Serverless SQL Database, it was left running on VMs in Azure. We all know that these VMs have spikes at certain times every month, so in that case the scalability that could have offered cost savings wasn’t implemented.

Why it should never have become an issue

There are a number of contributing factors that can make Cloud Computing costs spiral out of control. I’ve made the case for these below, and in some cases what can be done to address them:

  • Azure Reserved Instances – this is what Finance people love, as they provide immediate savings and some semblance of how they can “CapEx their OpEx” costs over a longer period of time.
  • Azure Cost Management – setting a budget, or at least budget alerts on monthly spend, can give you an indication of where you are each month. If you’re getting budget alert emails on the 10th of each month, then you haven’t got either your budget or your Service SKUs and sizing right.
  • Azure Policy – have you set policies to say that you can only have certain VM SKUs, running on certain disk types, in certain regions? (See the sketch after this list.)
  • RBAC Roles – this is the most important one and the biggest factor in “spend-creep”. Who can do what in your Azure Subscription? For example, have you granted developers Owner access on their own Resource Group so they can spin up what they want? Changing a SKU on a VM is a single-click operation, as is changing the disk type from HDD to SSD, or redundancy from LRS to GRS. And do the policies you set above apply across the subscription, or do you have exclusions set somewhere? Having control of your environment and assigning the correct roles is key.
  • Assessments – OK, this is an “after the horse has bolted” scenario, but it’s never too late to do it. Ask questions like: why did you move in the first place, and does it align with business goals, strategy and governance objectives?
  • Azure Advisor – it’s there on every resource you are running in Azure, and also as its own page in the portal, giving you recommendations based on over/under-consumption and how you can address it.
  • Backup/DR – this has long been a bone of contention for some companies, and I’ve experienced some who see Cloud-based backup solutions as either unnecessary or too expensive (because being in the cloud means we don’t need Backup or DR, right?).
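
On the Azure Policy point, here’s a minimal PowerShell sketch that assigns the built-in “Allowed virtual machine size SKUs” policy at subscription scope. The SKU list and subscription ID are placeholders, so adjust them for your environment:

# Minimal sketch: restrict which VM SKUs can be deployed in a subscription.
# The SKU list and subscription ID are hypothetical - adjust for your environment.
Connect-AzAccount

# Find the built-in "Allowed virtual machine size SKUs" policy definition
# (the property path may vary slightly between Az.Resources versions)
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed virtual machine size SKUs' }

# Assign it at subscription scope with an allowed-SKU list
New-AzPolicyAssignment -Name 'allowed-vm-skus' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedSKUs = @('Standard_B2s', 'Standard_D4s_v3') }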

Conclusion

I’ve based this article purely on costs and how you can use the various tools, policies and governance features available in Azure to help make the final decision on whether Cloud Repatriation is the right choice for your business.

Hope you enjoyed this post, until next time!

Control your Azure Virtual Desktop costs with Scaling Plans

Cloud Computing has changed the way we approach our enterprise infrastructure.

The amount of options available to us now means that we can finally ditch that dusty old server sitting at the bottom of the server rack (or in some cases at the back of a cupboard) for a modern, secure solution that we don’t need to sit and pray in front of every time we need to restart it.

The Problem with the Cloud

But… some people would prefer to keep old “Dusty Springfield” alive, because the effort to migrate (and in some cases re-architect) the service is too much and too costly. And that’s the thing we hear the most when a suggestion to migrate to a cloud service is raised – “the cloud is very expensive…”.

And let’s be honest, it is…

Money money money ……

There, I said it. Out Loud. In Print. Cloud Computing is expensive. There’s a helicopter hovering over my house at the minute, but I’m sure it’s nothing to worry about…

In all seriousness though, when scoping out a Cloud solution, the first thing that gets looked at is cost. You can argue as much as you want about the redundancy, the lower power and cooling costs, the lack of hardware costs and so on. The bean counters will look at the bottom line and say “we’re not paying that much now…”. And “Dusty Springfield” limps on defiantly in the corner.

Of course, your cloud computing costs are defined by the options you select and the level of redundancy you need. Scale Sets? Storage redundancy across zones and regions, or just locally redundant storage? Then you get into the sizing of your solutions.

How the Costs add up

Azure Virtual Desktop is one of those cool technologies that can help you provide a secure environment for your users to access Cloud or Hybrid environments in a consistent and unified experience. But because it’s built on underlying VMs, which you need to size based on your requirements, the costs can mount up.

Let’s take a look at an example of a standard Azure Virtual Desktop host pool that contains 10 Session Hosts delivering Remote Apps to 100 users. Session Hosts are generally sized from the General Purpose VM type, and the most common one used is the “Standard_D4s_v3”, which has 4 vCPUs and 16GB memory.

The base cost for this VM, if you create a standard Azure Virtual Machine, comes in at approx $160 per month.

Standard Virtual Machine Type

However, if we use this VM type for our Azure Virtual Desktop Session Hosts with Windows 10 Enterprise Multi-Session version 21H2 with Microsoft 365 Apps installed, the cost then jumps to $290 per month.

Azure Virtual Desktop Virtual Machine Type

So, let’s go back to our 10 Session Hosts – at that price we’re talking $2,900 per month, or just under $35,000 per year. And that’s for just 10 VMs in the environment. And that’s why Cloud Computing is expensive! Of course, this doesn’t take into account Reserved Instances or Spot Instances, but you get the idea.

That $290 per month isn’t a flat monthly fee – it’s based on 730 hours of usage (24 hours multiplied by just over 30 days). This is where you can start cutting into that $35,000 per year, and where Scaling Plans applied to your Azure Virtual Desktop Host Pools can help.

Scaling Plans

Scaling Plans let you scale the session host virtual machines (VMs) in a host pool up or down to optimize deployment costs. You can create a scaling plan based on:

  • Time of day
  • Specific days of the week
  • Session limits per session host

Follow the guidelines below when creating your scaling plan:

  • At the time of writing, you can only configure autoscale with existing Pooled host pools – it won’t work with Personal host pools.
  • You must create the scaling plan in the same Azure region as the host pool you assign it to.
  • All host pools you use with autoscale must have a configured MaxSessionLimit parameter – don’t use the default value (see the sketch after this list).
  • You must grant Azure Virtual Desktop access to manage the power state of your session host VMs.
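
On the MaxSessionLimit point, here’s a minimal PowerShell sketch using the Az.DesktopVirtualization module – the resource group and host pool names are placeholders:

# Minimal sketch: set an explicit session limit on an existing pooled host pool.
# Resource group and host pool names are hypothetical - replace with your own.
Install-Module -Name Az.DesktopVirtualization

Update-AzWvdHostPool -ResourceGroupName 'rg-avd-prod' `
    -Name 'hp-pooled-01' `
    -MaxSessionLimit 10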

Create a custom RBAC role

Now that we know the benefits and rules, the first thing we need to do is create a custom RBAC role. This custom role and assignment will allow Azure Virtual Desktop to manage the power state of any VMs in those subscriptions. It will also let the service apply actions on both host pools and VMs when there are no active user sessions.

The steps for creating the Custom RBAC Role are as follows (this is the same for creating any Custom RBAC Role):

  • First, create a JSON file using whatever your favourite editor is (I’m using Sublime in this example). Save the file as avdscale.json and add the following information into it:
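
As a reference, here’s a sketch of what avdscale.json can look like, based on the permissions Microsoft’s autoscale documentation lists for this custom role – verify against the current docs, and swap in your own role name and subscription ID:

{
  "properties": {
    "roleName": "AVD Autoscale",
    "description": "Lets Azure Virtual Desktop manage session host power state.",
    "assignableScopes": [
      "/subscriptions/<subscription-id>"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Insights/eventtypes/values/read",
          "Microsoft.Compute/virtualMachines/deallocate/action",
          "Microsoft.Compute/virtualMachines/restart/action",
          "Microsoft.Compute/virtualMachines/powerOff/action",
          "Microsoft.Compute/virtualMachines/start/action",
          "Microsoft.Compute/virtualMachines/read",
          "Microsoft.DesktopVirtualization/hostpools/read",
          "Microsoft.DesktopVirtualization/hostpools/write",
          "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read",
          "Microsoft.DesktopVirtualization/hostpools/sessionhosts/write",
          "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete",
          "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read",
          "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}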
  • Open the Azure portal, go to Subscriptions, and select a subscription that contains a host pool and session host VMs you want to use with autoscale. Select Access control (IAM), select the + Add button, then select Add custom role from the drop-down menu.

  • On the “Basics” screen, go to Baseline permissions and browse to the avdscale.json file that you just created.
  • This will import all of your settings, so on the next screen you will see the permissions that you had specified in your json file.
  • Next, we have “Assignable Scopes”. You want to assign this at subscription level – assigning this custom role at any level lower than your subscription (such as the resource group, host pool, or VM) will prevent autoscale from working properly.
  • We can now skip to the “Review and Create” screen, as this will validate and list out our permissions for the RBAC role. Review these and then click “Create”:
  • And once that’s created, we can see it listed as a Custom Role:

  • Now we need to add a Role Assignment for our RBAC Role, so we click on “Add role assignment”.

  • We select our Custom RBAC role, and on the Members screen we choose to assign access to a User, group or service principal. From the Select members screen, search for “Windows Virtual Desktop”.
  • Go to “Review and Assign” and click create:
  • And we can see that at subscription level the role has been assigned:
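
If you’d rather script the role creation and assignment than click through the portal, here’s a minimal PowerShell sketch under the same assumptions – the role name matches the JSON sketch above, and the subscription ID is a placeholder:

# Minimal sketch: create the custom role from the JSON file, then assign it
# to the Azure Virtual Desktop service principal at subscription scope.
New-AzRoleDefinition -InputFile '.\avdscale.json'

# The AVD service principal still uses the "Windows Virtual Desktop" display name
$sp = Get-AzADServicePrincipal -DisplayName 'Windows Virtual Desktop'

New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName 'AVD Autoscale' `
    -Scope '/subscriptions/<subscription-id>'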

Create our Scaling Plan

Now that our RBAC role is done, we can create our scaling plan.

  • Open the Azure portal. In the search bar, type Azure Virtual Desktop and select the matching service entry. Select Scaling Plans, then select Create.
  • On the Basics screen, provide the following:
    • Subscription and Resource Group where the Scaling Plan will be created
    • Name
    • Location (remember this needs to be in the same region as your Host Pool)
    • Time Zone

The other entries are optional; however, an important one to note is Exclusion Tags – you can use this in conjunction with Tags to exclude certain VMs from autoscaling operations.

  • Click Next, and this will bring you to the Schedules screen. Click on Add Schedule.
  • In the General screen, we enter a Schedule Name and also select the days we want the schedule to apply to.
  • In the Ramp-up screen, we specify a default starting point.
    • So in this instance, we want to have 20% (or 2 out of our 10 Session Hosts) powered on and ready to accept connections at 08:00.
    • We’ve selected “Breadth First” for Load balancing – this means users will be spread evenly across available hosts and is recommended for consistent performance.
    • Finally, we have set a Capacity threshold of 80%. If you recall, we set our hosts to accept a maximum of 10 connections each, so 2 powered-on hosts give us a capacity of 20 sessions. Once we reach 16 users (80% of 20) across those 2 hosts, the next host will automatically power on.
  • Next up is Peak hours. For this we specify a starting time (which is normally when the majority of your users will be logging on), and we’ve also flipped the Load Balancing to “Depth-first”, which will load up all available hosts with user sessions (up to our 80% threshold) before bringing another one online. How you load balance is really up to you, but as a reminder:
    • Breadth-first load balancing distributes new user sessions across all available session hosts in the host pool.
    • Depth-first load balancing distributes new sessions to any available session host with the highest number of connections that hasn’t reached its session limit yet.
  • Next up is Ramp-down. This is where we start deallocating hosts at the end of the working day, and as you can see, the target is to get back down to 20% of the hosts. The important point to make here is the “Force logoff users” option. If this is enabled, then the following applies:
    • This will choose the session host with the lowest number of user sessions to shut down. Autoscale will put the session host in drain mode, send all active user sessions a notification telling them they’ll be signed out, and then sign out all users after the specified wait time is over. After autoscale signs out all user sessions, it then deallocates the VM.
    • During ramp-down, autoscale will only shut down VMs if all existing user sessions in the host pool can be consolidated to fewer VMs without exceeding the capacity threshold.
  • Finally, we get to “Off-peak hours”, which marks the end of the “Ramp-down” period.
  • And that’s our weekday schedule created. You can also go back in and create a weekend schedule, where you can bring the number of hosts down to 10% and set a higher capacity threshold for weekends:
  • Once the schedules are created, we assign the Scaling Plan to our Host pool and click on “Enable autoscale”:
  • And now we can validate our options and click on “Review and create”:

Give all of this about an hour to kick in, and you will see your Azure Virtual Desktop session hosts automatically deallocated as per your schedules when not in use!

Money money money ….

Earlier in this post, I gave a yearly figure of approx $35,000 to run our 10 Session Host VMs. However, that figure is based on full consumption, so let’s do some very quick calculations to see how our scaling plan affects it:

  • As we said, a single VM running at full consumption (or the full 730 hours) will cost us $290 per month.
  • Based on the schedules created above, we’re going to have 1 VM running full time across both weekdays and weekends. That’s $290 per month, or $3,480 per year.
  • We’re then guaranteed to have 1 VM running Monday to Friday for 24 hours, and on weekends for 12 hours each day (depending on how the schedule is created). That’s effectively 6 days a week instead of 7, so over a year we take 6/7ths of our full-price figure, which comes in at $2,983 per year for that VM.
  • Now to the other 8 VMs and the 100 users. If all 100 users are logged on, those 8 VMs will be up for 12 hours a day, Monday to Friday only, as per our schedule. For that, we take 5/7ths of our full-price figure (which is $2,486) and then halve it, because we’re only using the VMs for 12 hours a day – which comes in at $1,243 per VM.

In summary, what we’ve got is:

  • $3,480 – 1 VM at full consumption
  • $2,983 – 1 VM at slightly reduced consumption for weekdays and weekends
  • $9,944 – 8 VMs running for 12 hours a day from Monday to Friday

Add those figures up and you get a total of $16,407. And remember, that figure doesn’t include available cost reductions like Reserved Instances or Azure Hybrid Benefit.
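
If you want to replay the arithmetic, here’s a quick PowerShell sketch of the same calculation – the $290 monthly rate is the approximate figure quoted above, not an official price:

# Minimal sketch: replay the scaling-plan savings estimate from above.
$monthlyCost = 290
$fullYear    = $monthlyCost * 12      # $3,480 - 1 VM at full consumption

$sixSevenths = $fullYear * 6 / 7      # ~$2,983 - 1 VM running 6 days out of 7
$weekdayHalf = $fullYear * 5 / 7 / 2  # ~$1,243 - 12 hours/day, weekdays only

$total = $fullYear + $sixSevenths + (8 * $weekdayHalf)
'Estimated yearly cost: {0:N0} (vs {1:N0} at full consumption)' -f $total, (10 * $fullYear)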

Conclusion

So by implementing a Scaling Plan for the Host pool above, we’ve saved ourselves almost $19,000 per year. Again, I’m going to stress that the figures I’m quoting here are approximate, may not represent what you see in your own personal or enterprise subscriptions, and should not be taken as exact savings. Make sure to speak to your Microsoft TAM or Cloud Service Provider for more details. You can find out more about scaling plans here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 46: Azure Well-Architected Framework

It’s Day 46 of my 100 Days of Cloud journey, and today I’m looking at the Azure Well-Architected Framework.

Over the course of my 100 Days journey so far, we’ve talked about and deployed multiple different types of Azure resources such as Virtual Machines, Network Security groups, VPNs, Firewalls etc.

We’ve seen how easy this is to do on a Dev-based PAYG subscription like the one I’m using. For companies who wish to migrate to Azure, however, Microsoft provides the “Well-Architected Framework”, which offers guidance to ensure that any resource or solution deployed or architected in Azure conforms to best practices around planning, design, implementation, and ongoing maintenance and improvement.

The Well-Architected Framework is based on 5 key pillars:

  • Reliability – the ability of a system to recover from failures and continue to function, which is built around 2 key values:
    • Resiliency, which returns the application to a fully functional state after a failure.
    • Availability, which defines whether users can access the workload when they need to.
  • Security – protects applications and data from threats. The first thing people think of here is “firewalls”, which protect against threats and DDoS attacks, but it’s not that simple. We need to build security into the application from the ground up. To do this, we can use the following areas:
    • Identity Management, such as RBAC roles and System Managed Identities.
    • Application Security, such as storing application secrets in Azure Key Vault.
    • Data sovereignty and encryption, which ensures the resource or workload and its underlying data are stored in the correct region and encrypted using industry standards.
    • Security Resources, using tools such as Microsoft Defender for Cloud or Azure Firewall.
  • Cost Optimization – managing costs to maximize the value delivered. This can be achieved using tools such as:
    • Azure Cost Management to create budgets and cost alerts.
    • Azure Migrate to assess the system load generated by your on-premises workloads to ensure they are correctly sized in the cloud.
  • Operational Excellence – processes that keep a system running in production. In most cases, automated deployments leave little room for human error, and can not only be deployed quickly but also rolled back in the event of errors or failures.
  • Performance Efficiency – the ability of a system to adapt to changes in load. Here we can think of tools and methodologies such as auto-scaling, caching, data partitioning, network and storage optimization, and CDN resources to make sure your workloads run efficiently.

On top of all this, the Well-Architected Framework has six supporting elements wrapped around it:

Diagram of the Well-Architected Framework and supporting elements.
Image Credit: Microsoft
  • Azure Well-Architected Review
  • Azure Advisor
  • Documentation
  • Partners, Support, and Services Offers
  • Reference Architectures
  • Design Principles

Azure Advisor in particular helps you follow best practices by analyzing your deployments and configuration and recommending solutions that can help you improve the reliability, security, cost effectiveness, performance, and operational excellence of your Azure resources. You can learn more about Azure Advisor here.

I recommend that anyone who is either in the process of migrating or planning to start their Cloud Migration journey reviews the Azure Well-Architected Framework material to understand the options and best practices when designing and developing an Azure solution. You can find the landing page for the Well-Architected Framework here, and the Assessments page to help on your journey is here!

Hope you all enjoyed this post, until next time!

100 Days of Cloud – Day 45: Azure Spot and Reserved Instances

It’s Day 45 of my 100 Days of Cloud journey, and today I’m looking at Azure Spot Instances and Reserved Instances.

During previous posts where I deployed virtual machines, the deployments were based on the Pay-As-You-Go pricing model, which is one of the 3 pricing models available to us in Azure. While this type of pricing is good for the likes of what I’m doing here (i.e. quickly spinning up VMs for a demo and then deleting them immediately), it’s not considered cost effective for organisations who have a Cloud Migration strategy, a long-term plan to host a large number of VMs in Azure, and a need for flexible, low-cost VMs for development or batch processing.

Let’s take a look at the other 2 pricing models, starting with Azure Spot Instances.

Azure Spot Instances

Azure Spot Instances allow you to utilize unused Azure capacity in your region at a fraction of the cost. However, at any point in time when Azure needs the capacity back, Spot Instances will be evicted and removed from service with 30 seconds’ notice.

Because of this, there is no SLA on Azure Spot Instances, so they are not suitable for running production workloads. They are best suited to workloads that can handle interruptions, such as batch processing jobs, Dev/Test environments or large compute workloads.

There are no availability guarantees – availability can vary based on the size required, available capacity in your region, time of day, and so on. Azure will allocate the VM if there is available capacity, but there are no High Availability guarantees.

When VMs are evicted, they can be either deallocated or deleted, based on the policy you set when creating them. Deallocate (the default) stops the VM and makes it available to redeploy (although this is not guaranteed and is based on capacity), and you will still be charged for the underlying Storage Disk costs. Delete, on the other hand, will shut down and destroy the VMs and their underlying storage.
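
To show where those options live, here’s a minimal PowerShell sketch of the Spot-specific parameters on a VM configuration – the name and size are placeholders, and a full deployment also needs image, credential and network settings before calling New-AzVM:

# Minimal sketch: the Spot-specific settings on a VM configuration.
# VM name and size are hypothetical - adjust for your environment.
$vmConfig = New-AzVMConfig -VMName 'spot-batch-01' -VMSize 'Standard_D2s_v3' `
    -Priority 'Spot' `
    -EvictionPolicy 'Deallocate' `
    -MaxPrice -1   # -1 caps the price at the on-demand rate, so eviction is for capacity only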

You can see the full savings you can achieve by using Spot Instance VMs in the Azure Spot VMs Pricing Table here.

Azure Reserved Instances

Azure Reserved Instances are a way to reserve your compute capacity for a term of 1 or 3 years, at savings of up to 72% when compared to Pay-As-You-Go pricing. This is best suited to production workloads that need 24/7 runtime and high availability.

As we can see in the image from the Reservations blade in the Azure Portal above, you can purchase Azure Reserved Instances for a large number of Azure Resources, not just VMs.

Reservations can be applied at a specific scope – that can be a Subscription (single or multiple subscriptions), a Resource Group, or a single resource such as a VM, SQL Database or an App Service.

Once you click into any of the options on the Reservations blade, it will bring you into a list of available SKUs that you can purchase:

Another option to factor in is that Azure Reserved Instances can be used with Azure Hybrid Benefit, meaning you can use your on-premises Software Assurance-enabled Windows OS and SQL Server licences, which can bring your savings up to 80%! You can find out more about Azure Hybrid Benefit here, and get the full lowdown on Azure Reserved Instances here.

Conclusion

And that’s a wrap on Azure pricing models – you can see the cost savings you can make based on what your workloads are. Hope you enjoyed this post, until next time!

100 Days of Cloud — Day 2: Azure Budgets and Cost Management

One of the most common concerns raised when any organization is planning a move to the Cloud is Cost. Unlike Microsoft 365 where you have set costs based on license consumption, there are a number of variables to be considered when moving to any Cloud Provider (be that Azure, AWS or others).

For example, let’s say we want to put a Virtual Machine in the Cloud. It sounds easy – if this was on-premises, you would provision storage on your SAN, assign CPU and Memory, assign an IP Address, and if required purchase a license for the OS and any additional software that will be running on the Virtual Machine.

All of the above still holds true when creating a Virtual Machine in the Cloud, but there are also other considerations, such as:

  • What Storage Tier will the VM run on (Standard HDD, Standard SSD, Premium SSD)?
  • How highly available does the VM need to be (Locally Redundant, Geographically Redundant)?
  • Does the VM need to be scalable based on demand/load (Auto Scaling/Scale Sets)?

In an on-premises environment, there needs to be an up-front investment (CAPEX) to make that feasible. When running with a Cloud Provider such as Azure, you use an on-demand model (OPEX). This is where costs can mount.

There are a number of ways to tackle this. The Azure TCO (Total Cost of Ownership) Calculator gives an estimate of costs of moving infrastructure to the cloud. The important word there is “estimate”.

So you’ve created your VM with all of the settings you need, and the TCO Calculator has given you an estimate of what the total “should” be on your monthly invoice. Azure Cost Management and Budgets can provide forecasting and alerts with real-time analysis of your projected monthly spend. That way, there are no nasty surprises when the invoice arrives!

Firstly, let’s create our Azure Account. Browse to the Azure Portal to sign up. You get:

  • 12 months of free services
  • $200 credit for 30 days
  • 25 always free services

Azure Portal Method

When your account is set up, go to https://portal.azure.com to sign in:

Once you’ve signed in, you can search for “Cost Management and Billing”

From the “Cost Management + Billing” page, select “Cost Management” from the menu:

This brings us into the Cost Management Page for our Azure Subscription:

One important thing to note here before we go any further. We can see at the top of the screen that the “Scope” for the Cost Management is the Azure Subscription. In Azure, Budgets can be applied to the following:

  • Management Group — these allow you to manage multiple subscriptions
  • Subscriptions — Default
  • Resource Groups — Logical groups of related resources that are deployed together. These can be assigned to Departments or Geographical Locations

Also, we can create monthly, quarterly or annual budgets. For the purposes of this demo (and the entire 100 Days), I’ll be using Subscriptions with a monthly budget.

Click on the “Budgets” menu option, and then click “Add”:

This brings us into the “Create Budget” menu. Fill in the required details and set a Budget Amount — I’m going to set €50 as my monthly budget:

Next, we need to set up Alert Conditions and email recipients. In Alert Conditions, we can see from the “Type” field that we can choose either Actual or Forecasted:

  • Actual Alerts are generated when the monthly spend reaches the alert condition.
  • Forecasted Alerts are generated in advance, when Azure calculates that you are likely to exceed the alert condition based on the services you are using.

Once you have your Alert Conditions configured, add one or more Alert Recipients who will receive alerts based on your conditions. Then click “Create”:

And now we see our budget was created successfully!

So, that’s the Azure Portal way to do it. There are 2 other ways; the first is using Azure PowerShell.

Azure PowerShell Method

Firstly, we need to open Windows PowerShell and install the Azure module. To do this, run:

Install-Module -Name Az

This will install all packages and modules we require to manage Azure from PowerShell.

We can then run the following commands to create our Budget:

Connect-AzAccount

will prompt us to log on to our subscription:

Once we are logged in, this will return details of our Subscription:

Run

Get-AzContext

to check what level we are at in the subscription:

Now, we can run the following command to create a new budget:

New-AzConsumptionBudget -Amount 100 -Name TestPSBudget -Category Cost -StartDate 2021-09-17 -TimeGrain Monthly -EndDate 2023-09-17 -ContactEmail durkanm@gmail.com -NotificationKey Key1 -NotificationThreshold 0.8 -NotificationEnabled

But it throws an error! Why?

It turns out, after a bit of digging, that you can only set a budget using PowerShell if your subscription is part of an Enterprise Agreement. So I’m afraid that because I’m using a free account here, it’s not going to work ☹.

Full documentation can be found at this link:

https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/tutorial-acm-create-budgets#create-and-edit-budgets-with-powershell.

OK, so let’s move on to option 3, which is using Azure Resource Manager (ARM) templates.

Azure Resource Manager (ARM) Templates Method

To do this, go to the following site:

https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/quick-create-budget-template?tabs=CLI

And click on the “Deploy to Azure” button:

This will redirect us into the Azure Portal and allow us to fill in the fields required to create our Budget:
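
For reference, the resource that template deploys is Microsoft.Consumption/budgets. A stripped-down sketch of such a template might look like the following – the name, dates, amount and email address are placeholders, and the apiVersion should be checked against the current documentation:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Consumption/budgets",
      "apiVersion": "2021-10-01",
      "name": "MyMonthlyBudget",
      "properties": {
        "category": "Cost",
        "amount": 50,
        "timeGrain": "Monthly",
        "timePeriod": {
          "startDate": "2021-10-01",
          "endDate": "2023-10-01"
        },
        "notifications": {
          "actual80Percent": {
            "enabled": true,
            "operator": "GreaterThan",
            "threshold": 80,
            "contactEmails": [ "someone@contoso.com" ]
          }
        }
      }
    }
  ]
}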

And that is how we create a Budget (3 ways) in Azure. See you on Day 3!!

Hope you enjoyed this post, until next time!!