100 Days of Cloud – Day 69: Azure Logic Apps

It's Day 69 of my 100 Days of Cloud journey, and today I'm getting my head around Azure Logic Apps.

Azure Logic Apps is a cloud-based platform for creating and running automated workflows that integrate your apps, data, services, and systems.

Comparison with Azure Functions

If this sounds vaguely familiar, it should: all the way back on Day 55 we looked at Azure Functions, which also lets you build serverless applications in Azure.

So they both do the same thing, right? Well, yes and no. At this stage it's important to show the differences between them.

Let's start with Azure Functions:

  • Lets you run event-triggered code without having to explicitly provision or manage infrastructure.
  • Azure Functions offer a “Code-First” (imperative) user experience and are primarily authored in Visual Studio or another IDE.
  • Azure Functions have about a dozen built-in binding types (mainly for other Azure services). If there isn’t an existing binding, you will need to write custom code to create new bindings.
  • With Azure Functions, you have 3 pricing options. You can opt for an App Service Plan, which gives you dedicated resources. The second option is completely serverless, with the Consumption plan based on resources consumed and number of executions. The third option is Functions Premium, which is a hybrid of both the App Service Plan and Consumption Plan.
  • As Azure Functions are code-first, the only options for deployment are Visual Studio, Azure DevOps or FTP.

Now, let's compare that with Azure Logic Apps:

  • Azure Logic Apps is a cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.
  • Logic Apps have a “Designer-First” (declarative) experience for the user by providing a visual workflow designer accessed via the Azure Portal.
  • Logic Apps have a large collection of connectors to Azure services, SaaS applications (Microsoft and others), FTP, and the Enterprise Integration Pack for B2B scenarios, along with the ability to build custom connectors if one isn't available. Examples of connectors are:
    • Azure services such as Blob Storage and Service Bus
    • Office 365 services such as Outlook, Excel, and SharePoint
    • Database servers such as SQL and Oracle
    • Enterprise systems such as SAP and IBM MQ
    • File shares such as FTP and SFTP
  • Logic Apps has a pure pay-per-usage billing model: you pay for each action that gets executed. However, there are different pricing tiers available; more information is available here.
  • There are many ways to manage and deploy Logic Apps. They can be created and updated directly in the Azure Portal (which will automatically create a JSON template). The JSON template can also be deployed via Azure DevOps and Visual Studio.

The following list describes just a few example tasks, business processes, and workloads that you can automate using the Azure Logic Apps service:

  • Schedule and send email notifications using Office 365 when a specific event happens, for example, a new file is uploaded.
  • Route and process customer orders across on-premises systems and cloud services.
  • Move uploaded files from an SFTP or FTP server to Azure Storage.
  • Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.


These are the key concepts to be aware of:

  • Logic app – A logic app is the Azure resource you create when you want to develop a workflow. There are multiple logic app resource types that run in different environments.
  • Workflow – A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.
  • Trigger – A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow. For example, a trigger event might be getting an email in your inbox or detecting a new file in a storage account.
  • Action – An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.
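As a concrete illustration of these concepts, here is a rough sketch of the shape of a workflow definition, written as a Python dict. The trigger and action names are made up for illustration; a real definition is the JSON the workflow designer generates for you.

```python
# Illustrative sketch of a workflow definition's shape (not a complete,
# deployable definition); field names follow the Workflow Definition Language.
workflow = {
    "definition": {
        "$schema": "https://schema.management.azure.com/providers/"
                   "Microsoft.Logic/schemas/2016-06-01/"
                   "workflowdefinition.json#",
        # Exactly one trigger starts each workflow run.
        "triggers": {
            "Every_5_minutes": {
                "type": "Recurrence",
                "recurrence": {"frequency": "Minute", "interval": 5},
            }
        },
        # One or more actions run after the trigger fires.
        "actions": {
            "Send_an_email": {
                "type": "ApiConnection",   # a connector operation
                "runAfter": {},            # empty: runs right after trigger
            }
        },
    }
}

assert len(workflow["definition"]["triggers"]) == 1  # single-trigger rule
```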

How Logic Apps work

The workflow begins with a trigger, which can have a pull or push pattern. Pull triggers are initiated when a regularly scheduled process finds new updates in the source data since its last pull, while push triggers are initiated each time new data is generated in the source itself.

Next, users define a series of actions that run either consecutively or concurrently, based on the specified trigger and schedule. Users can export the workflow to JSON and use it to create and deploy Logic Apps with tools like Visual Studio and Azure DevOps, or they can save logic apps as Azure Resource Manager templates for reuse.
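To make the pull pattern concrete, here is a minimal Python sketch of a polling trigger against an in-memory source. This is purely illustrative: real Logic Apps triggers are declared in the workflow definition, not hand-coded like this.

```python
class PullTrigger:
    """Polls a data source on a schedule and fires actions for new items."""

    def __init__(self, source, actions, interval_seconds=60):
        self.source = source            # list standing in for the data source
        self.actions = actions          # callables to run for each new item
        self.interval = interval_seconds  # schedule (unused in this demo)
        self.last_seen = 0              # how many items we've processed

    def poll_once(self):
        """One polling cycle: pick up items added since the last poll."""
        new_items = self.source[self.last_seen:]
        self.last_seen = len(self.source)
        for item in new_items:
            for action in self.actions:  # actions run consecutively
                action(item)
        return len(new_items)

processed = []
trigger = PullTrigger(source=["file1.txt"], actions=[processed.append])
trigger.poll_once()                  # picks up file1.txt
trigger.source.append("file2.txt")
trigger.poll_once()                  # picks up only the new file
print(processed)                     # ['file1.txt', 'file2.txt']
```

A push trigger would be the inverse: the source calls into the workflow each time new data appears, with no polling loop.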


Connectors are the most powerful aspect of the structure of a Logic App. Connectors are blocks of pre-built operations that communicate with third-party services as steps in the workflow. Connectors can be nested within each other to provide complex solutions that meet exact use case needs.

Azure contains a catalog of hundreds of available connectors and users can leverage these connectors to accomplish tasks without requiring any coding experience. You can find the full list of connectors here.

Use Cases

The following are the common use cases for Logic Apps:

  • Send an email alert to users based on data being updated in an on-premises database.
  • Query a database and send email notifications based on result criteria.
  • Communication with external platforms and services.
  • Data transformation or ingestion.
  • Social media connectivity using built-in API connectors.
  • Timer- or content-based routing.
  • Create business-to-business (B2B) solutions.
  • Access Azure virtual network resources.

We saw above how we can use an Event Grid resource event as a trigger; the tutorial here gives an excellent run-through of creating a Logic App based on an Event Grid resource event and using the Office 365 email connector.


So that's an overview of Azure Logic Apps and how it compares to Azure Functions. You can find out more about Azure Logic Apps in the official documentation here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 68: Azure Service Bus

It's Day 68 of my 100 Days of Cloud journey, and today I'm looking at Azure Service Bus.

In the previous posts, we looked at the different services that Microsoft uses to handle events:

  • Azure Event Grid, which is an eventing backplane that enables event-driven, reactive programming. It uses the publish-subscribe model. Publishers emit events, but have no expectation about how the events are handled. Subscribers decide on which events they want to handle.
  • Azure Event Hubs, which is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. 

Events v Messages

Both of the above services are based on events, and it's important to understand the definition of an event:

  • An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
  • Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn’t have the data that triggered the event.

By contrast, a message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.
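The distinction can be sketched in a few lines of Python. Both shapes below are hypothetical, purely for illustration: the event carries only enough to say what happened, while the message carries the raw data plus the contract's reply address.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BlobCreatedEvent:
    """A discrete event: says *that* something happened, not the data itself.

    The consumer decides what to do, and fetches the blob separately
    if it needs the contents.
    """
    subject: str                      # e.g. the blob path
    event_type: str = "BlobCreated"
    event_time: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class CreateFileMessage:
    """A message: carries the raw data and an implied contract --
    the publisher expects the consumer to create a file and reply."""
    file_name: str
    payload: bytes                    # the data that triggered the pipeline
    reply_to: str                     # where the consumer sends the response

event = BlobCreatedEvent(subject="/container/invoice.pdf")
msg = CreateFileMessage(file_name="invoice.pdf",
                        payload=b"%PDF-1.7 ...", reply_to="responses")
print(event.event_type)   # BlobCreated
print(msg.reply_to)       # responses
```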

Azure Service Bus

Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics (in a namespace). Service Bus is used to decouple applications and services from each other, whether hosted natively on Azure, on-premises, or with any other cloud vendor such as AWS or GCP. Messages are sent to and kept in queues or topics until requested by consumers in a “poll” mode (i.e. only delivered when requested).

Azure Service Bus provides the following benefits:

  • Load-balancing work across competing workers
  • Safely routing and transferring data and control across service and application boundaries
  • Coordinating transactional work that requires a high degree of reliability


  • Queues – Messages are sent to and received from queues. Queues store messages until the receiving application is available to receive and process them. Messages are kept in the queue until picked up by consumers, and are retrieved on a first-in, first-out (FIFO) basis. A queue can have one or many competing consumers, but a message is consumed only once.
Image Credit: Microsoft
  • Topics – A queue allows processing of a message by a single consumer. In contrast to queues, topics and subscriptions provide a one-to-many form of communication in a publish and subscribe pattern. It's useful for scaling to large numbers of recipients. Each published message is made available to each subscription registered with the topic. A publisher sends a message to a topic and one or more subscribers receive a copy of the message, depending on filter rules set on those subscriptions.
Image Credit: Microsoft

You can define rules on a subscription. A subscription rule has a filter to define a condition for the message to be copied into the subscription and an optional action that can modify message metadata.

  • Namespaces – A namespace is a container for all messaging components (queues and topics). Multiple queues and topics can be in a single namespace, and namespaces often serve as application containers.
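A minimal Python sketch of the difference between the two models (in-memory stand-ins, not the Service Bus SDK): a queue delivers each message to exactly one of its competing consumers, while a topic copies each message into every subscription whose rule matches.

```python
from collections import deque

class Queue:
    """FIFO queue: many competing consumers, each message consumed once."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        return self._messages.popleft() if self._messages else None

class Topic:
    """Pub/sub topic: every matching subscription gets its own copy."""
    def __init__(self):
        self._subscriptions = {}          # name -> (filter rule, inbox)
    def subscribe(self, name, rule=lambda m: True):
        self._subscriptions[name] = (rule, deque())
    def send(self, msg):
        for rule, inbox in self._subscriptions.values():
            if rule(msg):                 # subscription rule filter
                inbox.append(msg)
    def receive(self, name):
        rule, inbox = self._subscriptions[name]
        return inbox.popleft() if inbox else None

q = Queue()
q.send("order-1")
print(q.receive(), q.receive())           # order-1 None -> consumed only once

t = Topic()
t.subscribe("all-orders")
t.subscribe("big-orders", rule=lambda m: m["amount"] > 100)
t.send({"id": "order-2", "amount": 250})
print(t.receive("all-orders")["id"])      # order-2
print(t.receive("big-orders")["id"])      # order-2 -- its own copy
```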

There are also a number of advanced features available in Azure Service Bus:

  • Dead Letter Queue: this is a sub-queue to hold messages that could not be delivered or processed.
  • Consumption Mode: Azure Service Bus supports several consumption modes: pub/sub with a pull model, competing consumers, and partitioning, which can be achieved with the use of topics, subscriptions, and actions.
  • Duplicate Detection: Azure Service Bus supports duplicate detection natively.
  • Delivery Guarantee: Azure Service Bus supports three delivery guarantees: At-least-once, At-most-once, and Effectively once.
  • Message Ordering: Azure Service Bus can guarantee first-in-first-out using sessions.
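The dead-letter behaviour can be sketched like this, again with an in-memory stand-in rather than the real SDK. The delivery limit below is an assumption for the demo (the Service Bus MaxDeliveryCount is configurable per queue):

```python
from collections import deque

MAX_DELIVERY_COUNT = 3   # demo value; configurable on a real queue

class QueueWithDLQ:
    """A message abandoned too many times moves to the dead-letter
    sub-queue instead of being redelivered forever."""
    def __init__(self):
        self.active = deque()
        self.dead_letter = deque()
        self._delivery_count = {}
    def send(self, msg):
        self.active.append(msg)
        self._delivery_count[msg] = 0
    def receive(self):
        if not self.active:
            return None
        msg = self.active.popleft()
        self._delivery_count[msg] += 1
        return msg
    def abandon(self, msg):
        """Processing failed: redeliver, or dead-letter if over the limit."""
        if self._delivery_count[msg] >= MAX_DELIVERY_COUNT:
            self.dead_letter.append(msg)
        else:
            self.active.append(msg)

q = QueueWithDLQ()
q.send("poison-message")
for _ in range(MAX_DELIVERY_COUNT):
    msg = q.receive()
    q.abandon(msg)              # handler keeps failing
print(list(q.dead_letter))      # ['poison-message']
```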


That's a brief overview of Azure Service Bus. You can learn more about Azure Service Bus in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 67: Azure Event Hubs

It's Day 67 of my 100 Days of Cloud journey, and today I'm looking at Azure Event Hubs.

In the last post, we looked at Azure Event Grid, which is a serverless offering that allows you to easily build applications with event-based architectures. Azure Event Grid supports a number of sources, and one of those is Azure Event Hubs.

Whereas Azure Event Grid can take in events from sources and trigger actions based on those events, Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.

One of the key differences between the two services is that while Event Grid can plug directly into Azure services and listen for events coming from their sources, Event Hubs can listen for events coming from sources outside of Azure, and can handle millions of events coming from multiple devices.

The following are some of the scenarios where you can use Event Hubs:

  • Anomaly detection (fraud/outliers)
  • Application logging
  • Analytics pipelines, such as clickstreams
  • Live dashboards
  • Archiving data
  • Transaction processing
  • User telemetry processing
  • Device telemetry streaming


Azure Event Hubs represents the “front door” (or event ingestor, to give it its correct name) for an event pipeline, and sits between event producers and event consumers. It decouples the process of producing data from the process of consuming data. You can publish events individually or in batches.

Azure Event Hubs is built around the concepts of partitions and consumer groups. Inside an event hub, events are sent to partitions by specifying the partition key or partition ID. The partition count of an event hub cannot be changed after creation, so be mindful of this limitation.

Image Credit: Microsoft

Receivers are grouped into consumer groups. A consumer group represents a view (state, position, or offset) of an entire event hub. It can be thought of as a set of parallel applications that consume events at the same time.

Consumer groups enable receivers to each have a separate view of the event stream. They read the stream independently at their own pace and with their own offsets. Event Hubs uses a partitioned consumer pattern; events are spread across partitions to allow horizontal scale. Events can be stored in either Blob Storage or Data Lake Storage; this is configured when the event hub is created.
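A small Python sketch of both ideas, using a list per partition and a simple byte-sum in place of the service's real partition hashing:

```python
class EventHub:
    """Events with the same partition key always land in the same partition;
    each consumer group keeps its own offset per partition."""
    def __init__(self, partition_count=4):
        self.partitions = [[] for _ in range(partition_count)]
        self.offsets = {}             # (group, partition) -> next index
    def send(self, event, partition_key):
        # Stable hash so per-key ordering is preserved (illustrative only;
        # the real service uses its own hashing).
        index = sum(partition_key.encode()) % len(self.partitions)
        self.partitions[index].append(event)
        return index
    def receive(self, group, partition):
        offset = self.offsets.get((group, partition), 0)
        events = self.partitions[partition][offset:]
        self.offsets[(group, partition)] = offset + len(events)
        return events

hub = EventHub(partition_count=4)
p = hub.send({"temp": 21}, partition_key="device-42")
hub.send({"temp": 22}, partition_key="device-42")  # same partition as above

# Two consumer groups read the same stream independently.
print(hub.receive("dashboard", p))   # both events
print(hub.receive("archiver", p))    # both events again, own offset
print(hub.receive("dashboard", p))   # [] -- dashboard is caught up
```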

Image Credit: Microsoft

Use Cases

Event Hubs is the component to use for real-time and/or streaming data use cases:

  • Real-time reporting
  • Capture streaming data into files for further processing and analysis – e.g. capturing data from micro-service applications or a mobile app
  • Make data available to stream-processing and analytics services – e.g. when scoring an AI algorithm
  • Telemetry streaming & processing
  • Application logging

Event Hubs is also available as a feature for Azure Stack Hub, which allows you to realize hybrid cloud scenarios. Streaming and event-based solutions are supported, for both on-premises and Azure cloud processing.


You can learn more about Event Hubs in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 66: Azure Event Grid

It's Day 66 of my 100 Days of Cloud journey, and today I'm looking at Azure Event Grid, which I came across during my AZ-204 studies.

Azure Event Grid is a serverless offering that allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then provide the event handler or webhook endpoint to send the event to.

Image Credit: Microsoft


Azure Event Grid uses the following concepts which you will need to understand:

  • Events: An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like the source of the event, the time the event took place, and a unique identifier. Examples of common events would be a file being uploaded, a virtual machine being deleted, a SKU being added, and so on.
  • Publishers: A publisher is the user or organization that decides to send events to Event Grid.
  • Event Sources: An event source is where the event happens. Each event source is related to one or more event types. For example, Azure Storage is the event source for blob created events. The following Azure services support sending events to Event Grid:
    • Azure API Management
    • Azure App Configuration
    • Azure App Service
    • Azure Blob Storage
    • Azure Cache for Redis
    • Azure Communication Services
    • Azure Container Registry
    • Azure Event Hubs
    • Azure FarmBeats
    • Azure IoT Hub
    • Azure Key Vault
    • Azure Kubernetes Service (preview)
    • Azure Machine Learning
    • Azure Maps
    • Azure Media Services
    • Azure Policy
    • Azure resource groups
    • Azure Service Bus
    • Azure SignalR
    • Azure subscriptions
  • Topics: The event grid topic provides an endpoint where the source sends events. A topic is used for a collection of related events.
  • Event Subscriptions: A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the subscription, you provide an endpoint for handling the event.
  • Event Handlers: From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. The supported event handlers are:
    • Webhooks. Azure Automation runbooks and Logic Apps are supported via webhooks.
    • Azure functions
    • Event hubs
    • Service Bus queues and topics
    • Relay hybrid connections
    • Storage queues
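To tie these concepts together, here is a sketch of an event in roughly the Event Grid event schema shape (all values made up), plus a tiny dispatcher standing in for a subscription with an event-type filter:

```python
# Shape based on the Event Grid event schema; all values are made up.
event = {
    "id": "9aeb0fdf-c01e-0131-0922-9eb54906e209",
    "topic": "/subscriptions/<sub-id>/resourceGroups/rg1/providers"
             "/Microsoft.Storage/storageAccounts/mystorage",
    "subject": "/blobServices/default/containers/photos/blobs/cat.jpg",
    "eventType": "Microsoft.Storage.BlobCreated",
    "eventTime": "2022-01-25T10:00:00Z",
    "data": {"contentType": "image/jpeg", "contentLength": 524288},
    "dataVersion": "1",
}

# A tiny dispatcher standing in for subscriptions filtered by event type.
handlers = {
    "Microsoft.Storage.BlobCreated": lambda e: f"analyze {e['subject']}",
    "Microsoft.Resources.ResourceDeleteSuccess": lambda e: "audit deletion",
}

def handle(event):
    handler = handlers.get(event["eventType"])
    return handler(event) if handler else None  # unmatched events are dropped

print(handle(event))   # routes the blob event to the analyze handler
```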


The key features of Azure Event Grid are:

  • Simplicity – You can direct events from any of the sources listed above to any event handler or endpoint.
  • Advanced filtering – Filter on event type to ensure event handlers only receive relevant events.
  • Fan-out – Subscribe several endpoints to the same event to send copies of the event to as many places as needed.
  • Reliability – 24-hour retry ensures that events are delivered.
  • Pay-per-event – Azure Event Grid uses a pay-per-event pricing model, so you only pay for what you use. The first 100,000 operations per month are free.
  • High throughput – Build high-volume workloads on Event Grid and scale up/down or in/out as required.
  • Built-in Events – Get up and running quickly with resource-defined built-in events.
  • Custom Events – Use Event Grid to route, filter, and reliably deliver custom events in your app.

Use Cases

Azure Event Grid provides several features that vastly improve serverless, ops automation, and integration work:

  • Serverless application architectures
Image Credit: Microsoft

Event Grid connects data sources and event handlers. For example, use Event Grid to trigger a serverless function that analyzes images when added to a blob storage container.

  • Ops Automation
Image Credit: Microsoft

Event Grid allows you to speed automation and simplify policy enforcement. For example, use Event Grid to notify Azure Automation when a virtual machine or database in Azure SQL is created. Use the events to automatically check that service configurations are compliant, put metadata into operations tools, tag virtual machines, or file work items.

  • Application integration
Image Credit: Microsoft

Event Grid connects your app with other services. For example, create a custom topic to send your app’s event data to Event Grid, and take advantage of its reliable delivery, advanced routing, and direct integration with Azure. Or, you can use Event Grid with Logic Apps to process data anywhere, without writing code.


You can learn more about Event Grid in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 65: AZ-204 Exam Day!

It's Day 65 of my 100 Days of Cloud journey, and today I sat Exam AZ-204: Developing Solutions for Microsoft Azure.

This was using up my Exam Voucher that I earned for completing the Cloud Skill Challenge from Microsoft Ignite Fall 2021.

The reason I chose this exam was because it was a bit of a challenge – coming from a pure Infrastructure background, the idea of taking a Developer exam was something I never would have thought about a year ago. I’d heard of some of the concepts that were covered in the objectives, and the other exam options were either on the Cloud Infra or Security side, so I decided to go outside of my comfort zone and take on the challenge.

The skills measured list reads like this:

  • Develop Azure compute solutions (25-30%)
  • Develop for Azure storage (15-20%)
  • Implement Azure security (20-25%)
  • Monitor, troubleshoot, and optimize Azure solutions (15-20%)
  • Connect to and consume Azure services and third-party services (15-20%)

Looks like any other Infra-based exam we’ve seen before, right? Well, think again ….

Giving an NDA-friendly review, the exam goes deep into topics such as serverless compute, application and database development, authentication, and monitoring. Again, some of this seems very infra-focused, but the way I look at it is this: if you've done the infra side of things like AZ-104, where you learn how the infra works and how to put it together, AZ-204 approaches that from a far deeper dive, looking into the concepts and showing how we can programmatically manage these technologies.

And I’m delighted to say I made it out on the other side and passed the exam!

So does this make me an expert in Azure development or turn me into a developer? Well, no, not overnight, but it's a big step in that direction. The thing with this exam is that even though I passed, I feel there is LOADS more content, demos, and lab scenarios that I can look into. The exam itself only scratches the surface, but now that I've gotten into this and started to go down the rabbit hole, it's given me loads of ideas for how to use the content and technologies that I learned!

For learning paths on this one, I used the following:

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 64: Azure Cosmos DB

It's Day 64 of my 100 Days of Cloud journey, and today I'm looking at Azure Cosmos DB.

In the last post, we looked at Azure SQL and the different options we have available for hosting SQL Databases in Azure. SQL is an example of a Relational Database Management System (RDBMS), which follows a traditional model of storing data using 2-dimensional tables where data is stored in columns and rows in a pre-defined schema.

The opposite to this is non-relational databases, which use a storage model that is optimized for the specific requirements of the type of data being stored. Non-relational databases can have the following structures:

  • Document Data Stores, which store data in JSON, XML, YAML, or plain-text format.
  • Columnar Data Stores, which store data in column families that are logically related and manipulated as a unit.
  • Key/Value Data Stores, which hold a data value with a corresponding key.
  • Graph Databases, which are made up of nodes and edges and suit data such as organization charts and fraud detection.

All of the above options can be achieved by using Azure Cosmos DB.


Let's start with an overview: Azure Cosmos DB is a fully managed NoSQL database that provides highly available, globally distributed access to data with very low latency.

If we log on to the Azure Portal and go to create an Azure Cosmos DB, we are given the options below:

The different APIs available are:

  • Core (SQL) API: Provides the flexibility of a NoSQL document store combined with the power of SQL for querying.
  • MongoDB API: Supports the MongoDB wire protocol so that existing MongoDB clients continue to work with Azure Cosmos DB as if they were running against an actual MongoDB database.
  • Cassandra API: Supports the Cassandra wire protocol so that existing Apache drivers compliant with CQLv4 continue to work with Azure Cosmos DB as if they are running against an actual Cassandra database.
  • Gremlin API: Supports graph data with Apache TinkerPop (a graph computing framework) and the Gremlin query language.
  • Table API: Provides premium capabilities for applications written for Azure Table storage.

The key to picking an API is to select the one that best meets the needs for your database, but be warned: if you pick an API you cannot change it afterwards. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.

Once your API is selected, you get into the usual screens for creating resources in Azure:


Now this is where we need to talk about pricing. In SQL, we are familiar with licensing using cores. This works the same way in Azure with the concept of vCores, but we also have the concept of Database Transaction Units (DTUs), which is a bundled measure of compute, storage, and I/O resources.

In Azure Cosmos DB, usage is priced based on Request Units (RUs). You can think of RUs per second as the currency for throughput. As shown in the screenshot above, there are 2 pricing models available:

  • Provisioned throughput mode: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. You are billed on an hourly basis for the number of RUs per second you have provisioned.
  • Serverless mode: In this mode, you don’t have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations.

We also have a 3rd option:

  • Autoscale mode: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage, without impacting the availability, latency, throughput, or performance of the workload.

Each request to Azure Cosmos DB returns the RUs it consumed, so you can decide whether to stop your requests or increase the RU limit in the Azure portal.
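The provisioned model can be sketched as a per-second RU budget; requests that would exceed the budget get throttled, which the real service signals with an HTTP 429 response. This is a toy model, not the service's actual accounting:

```python
class RequestUnitBudget:
    """Provisioned-throughput sketch: spend RUs within a one-second window;
    requests over the budget are throttled (HTTP 429 in the real service)."""
    def __init__(self, provisioned_rus_per_second):
        self.budget = provisioned_rus_per_second
        self.spent = 0
    def new_second(self):
        self.spent = 0                # the budget refills every second
    def request(self, ru_cost):
        if self.spent + ru_cost > self.budget:
            return 429                # throttled: retry after a back-off
        self.spent += ru_cost
        return 200

db = RequestUnitBudget(provisioned_rus_per_second=400)
print(db.request(1))      # 200 -- a 1 KB point read costs about 1 RU
print(db.request(500))    # 429 -- a big query can blow the whole budget
db.new_second()
print(db.request(300))    # 200 -- the budget has refilled
```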

Consistency Levels

The other important thing to note about Cosmos DB is Consistency Levels. Because Cosmos DB is a globally distributed database, you can set the level of consistency for replication across your global data centers. There are 5 levels to choose from:

  • Strong consistency is the strictest type of consistency available in Cosmos DB. The data is synchronously replicated to all the replicas in real time. This mode of consistency is useful for applications that cannot tolerate any data loss in case of downtime.
  • In the Bounded Staleness level, data is replicated asynchronously with a predetermined staleness window, defined either by a number of writes or by a period of time. Reads may lag behind by either a certain number of writes or by a pre-defined time period; however, reads are guaranteed to honor the sequence of the data.
  • Session consistency is the default consistency level you get when configuring a Cosmos DB account. This level of consistency honors the client session: it ensures strong consistency for an application session using the same session token.
  • The Consistent Prefix model is similar to Bounded Staleness, except without the operational or time-lag guarantee. The replicas guarantee the consistency and order of the writes; however, the data is not always current. This model ensures that the user never sees an out-of-order write.
  • Eventual consistency is the weakest consistency level of all. The first thing to consider in this model is that there is no guarantee on the order of the data and also no guarantee of how long the data can take to replicate. As the name suggests, the reads are consistent, but eventually.
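Bounded Staleness is the easiest of the five to sketch: the replica may lag the primary, but never by more than the staleness window (counted in writes in this toy model, which is not how the service implements it):

```python
class BoundedStalenessReplica:
    """Read replica allowed to lag the primary by at most `max_lag`
    writes -- the 'staleness window', counted in writes here."""
    def __init__(self, max_lag=2):
        self.primary = []             # full, ordered write log
        self.replicated = 0           # how many writes reached the replica
        self.max_lag = max_lag
    def write(self, value):
        self.primary.append(value)
        # Replication is async, but may never fall further behind
        # than the staleness window.
        while len(self.primary) - self.replicated > self.max_lag:
            self.replicated += 1
    def read(self):
        # Reads honor write order but may miss the most recent writes.
        return self.primary[:self.replicated]

replica = BoundedStalenessReplica(max_lag=2)
for value in ["a", "b", "c", "d"]:
    replica.write(value)
print(replica.read())   # ['a', 'b'] -- at most 2 writes behind, in order
```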

Use Cases

Any web, mobile, gaming, and IoT application that needs to handle massive amounts of data, reads, and writes at a global scale with near-real response times for a variety of data will benefit from Cosmos DB’s guaranteed high availability, high throughput, low latency, and tunable consistency. The Microsoft Docs article here describes the common use cases for Azure Cosmos DB.


And that's a look at the different options available in Azure Cosmos DB. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 63: Azure SQL Server

It's Day 63 of my 100 Days of Cloud journey, and today I'm looking at SQL services in Azure and the different options we have for hosting SQL in Azure.

As we discussed in the previous post, SQL is an example of a Relational Database Management System (RDBMS), which follows a traditional model of storing data using 2-dimensional tables where data is stored in columns and rows in a pre-defined schema.

On-premises installations of Microsoft SQL Server follow the traditional IaaS model, where we would install a Windows Server operating system, which provides the platform for the SQL Server database to run on.

In Azure, we have 3 options for migrating and hosting our SQL Databases.

SQL Server on Azure VM

SQL Server on Azure VM is an IaaS offering and allows you to run SQL Server inside a fully managed virtual machine (VM) in Azure.

SQL virtual machines are a good option for migrating on-premises SQL Server databases and applications without any database change.

This option is best suited where OS-level access is required. SQL virtual machines in Azure are lift-and-shift ready for existing applications that require fast migration to the cloud with minimal changes or no changes. SQL virtual machines offer full administrative control over the SQL Server instance and underlying OS for migration to Azure.

SQL Server on Azure Virtual Machines allows full control over the database engine. You can choose when to start maintenance/patching, change the recovery model to simple or bulk-logged, pause or start the service when needed, and you can fully customize the SQL Server database engine. With this additional control comes the added responsibility to manage the virtual machine.

Azure SQL Managed Instance

Azure SQL Managed Instance is a Platform-as-a-Service (PaaS) offering, and is best for most migrations to the cloud. SQL Managed Instance is a collection of system and user databases with a shared set of resources that is lift-and-shift ready.

This option is best suited to new applications or existing on-premises applications that want to use the latest stable SQL Server features and that are migrated to the cloud with minimal changes. An instance of SQL Managed Instance is similar to an instance of the Microsoft SQL Server database engine offering shared resources for databases and additional instance-scoped features.

SQL Managed Instance supports database migration from on-premises with minimal to no database change. This option provides all of the PaaS benefits of Azure SQL Database but adds capabilities that were previously only available in SQL Server VMs. This includes a native virtual network and near 100% compatibility with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL Server access and feature compatibility for migrating SQL Servers to Azure.

Azure SQL Database

Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that falls into the category of a PaaS offering.

This is best for modern cloud applications that want to use the latest stable SQL Server features and have time constraints in development and marketing.
It is a fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server. SQL Database has two deployment options:

  • As a single database with its own set of resources managed via a logical SQL server. A single database is similar to a contained database in SQL Server. This option is optimized for modern application development of new cloud-born applications. Hyperscale and serverless options are available.
  • An elastic pool, which is a collection of databases with a shared set of resources managed via a logical SQL server. Single databases can be moved into and out of an elastic pool. This option is optimized for modern application development of new cloud-born applications using the multi-tenant SaaS application pattern. Elastic pools provide a cost-effective solution for managing the performance of multiple databases that have variable usage patterns.

Overall Comparisons

Both Azure SQL Database and Azure SQL Managed Instance are optimized to reduce overall management costs since you do not have to manage any virtual machines, operating system, or database software. You do not have to manage upgrades, high availability, or backups.

Both options can dramatically increase the number of databases managed by a single IT or development resource. Elastic pools also support SaaS multi-tenant application architectures with features including tenant isolation and the ability to scale to reduce costs by sharing resources across databases. SQL Managed Instance provides support for instance-scoped features enabling easy migration of existing applications, as well as sharing resources among databases.

Finally, the database software is automatically configured, patched, and upgraded by Azure, which reduces your administration overhead.

The alternative is SQL Server on Azure VMs, which provides DBAs with an experience most similar to the on-premises environment they're familiar with. You can use any of the platform-provided SQL Server images (which include a license) or bring your own SQL Server license. All the supported SQL Server versions (2008 R2, 2012, 2014, 2016, 2017, 2019) and editions (Developer, Express, Web, Standard, Enterprise) are available. However, as this is a VM, it's up to you to decide when to update and upgrade the operating system and database software, and when to install any additional software such as anti-virus.


All of the above options can be managed from the Azure SQL page in the Azure Portal.

Image Credit – Microsoft


To migrate existing SQL workloads, in all cases you would use an Azure Migrate project with the Data Migration Assistant. You can find all of the scenarios relating to migration options here.


And that's a look at the different options for hosting SQL on Azure. Hope you enjoyed this post, until next time – I feel like going bowling now!

100 Days of Cloud – Day 62: Azure Database Solutions

It's Day 62 of my 100 Days of Cloud journey, and today I'm starting to look at the different Database Solutions available in Azure.

The next 2 posts are going to cover the 2 main offerings – Azure SQL and Azure Cosmos DB. But first, we need to understand the different types of database that are available to us, how they store their data, and the use cases where we would utilize the different database types.

Relational Databases

Let's kick off with Relational Database Management Systems, or RDBMS. This is the traditional model of storing data: it organizes the data into two-dimensional tables of rows and columns into which the data is stored.

RDBMS databases follow a schema-based model, where the structure of the schema needs to be defined before any data is written. Any subsequent read or write operations must use the defined schema.

Vendors who use this model provide a version of Structured Query Language (SQL) for retrieving and managing the data. The most common examples would be Microsoft SQL Server, Oracle Database, or PostgreSQL.
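To make the schema-first model concrete, here's a tiny sketch using Python's built-in sqlite3 module standing in for any RDBMS (the table and data are invented for illustration): the schema is defined first, and every read and write must conform to it.

```python
import sqlite3

# Define the schema up front -- a relational database requires this
# before any data can be written.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        total    REAL NOT NULL
    )
""")

# Writes must conform to the defined schema.
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("alice", 42.50), ("bob", 17.25), ("alice", 8.00)],
)

# Standard SQL is used to retrieve and manage the data.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
```

The same `SELECT` would work essentially unchanged against Azure SQL Database, MySQL, or PostgreSQL, which is the point of the shared SQL model.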

RDBMS is useful when data consistency is required; the downside is that relational databases cannot easily scale out horizontally.

In Azure, the following RDBMS services are available:

  • Azure SQL Database – the fully managed, hosted version of SQL Server.
  • Azure Database for MySQL – an open-source relational database management system. MySQL uses standard SQL commands such as INSERT, UPDATE, DELETE, and SELECT. MySQL is mainly used for e-commerce, data warehouse, and logging applications, and many database-driven websites use it.
  • Azure Database for PostgreSQL – a highly scalable RDBMS which is cross-platform and can run on Linux, Windows, and macOS. PostgreSQL supports complex queries, foreign keys, triggers, updatable views, and transactional integrity.
  • Azure Database for MariaDB – a high-performance open-source relational database based on MySQL. Dynamic columns allow a single DBMS to provide both SQL and NoSQL data handling for different needs. It supports encrypted tables, LDAP authentication, and Kerberos.

The main use cases for RDBMS are:

  • Inventory management
  • Order management
  • Reporting database
  • Accounting

Non-Relational Databases

The opposite of a relational database is a non-relational database, which does not use the tabular schema of rows and columns found in most traditional database systems. Instead, non-relational databases use a storage model that is optimized for the specific requirements of the type of data being stored. For example, data may be stored as simple key/value pairs, as JSON documents, or as a graph consisting of edges and vertices.

Because of the varying ways that data can be stored, there are LOADS of different types of non-relational databases.

Let's take a look at the different types of non-relational, or NoSQL, database.

  • Document Data Stores
Image Credit – Microsoft

A document data store manages a set of named string fields and object data values in an entity that's referred to as a document. Documents are typically stored in JSON format, but can also be stored as XML, YAML, BSON, or even plain text. The fields within these documents are exposed to the storage management system, enabling an application to query and filter data by using the values in those fields. Typically, a document contains the entire data for an entity, and documents are not all required to have the same structure.

The application can retrieve documents by using the document key, a unique identifier for the document (often a hashed value).
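As a rough illustration of how a document store behaves (a toy in-memory sketch in Python, not the Cosmos DB API; the hashing helper, field names, and documents are all invented):

```python
import hashlib
import json

# A toy document store: each document is stored under a unique
# document key, and documents need not share the same structure.
store = {}

def put(doc):
    # Illustrative helper: derive the document key by hashing the id field.
    key = hashlib.sha256(doc["id"].encode()).hexdigest()
    store[key] = json.dumps(doc)
    return key

def get(key):
    # Retrieve a whole document by its key.
    return json.loads(store[key])

def query(field, value):
    # Fields inside the documents are exposed to the store, so we can
    # filter by field values, not just retrieve by key.
    return [json.loads(d) for d in store.values()
            if json.loads(d).get(field) == value]

k = put({"id": "p1", "name": "widget", "category": "tools"})
# Note the second document has an extra field -- no shared schema needed.
put({"id": "p2", "name": "gizmo", "category": "tools", "colour": "red"})
```

The product-catalog shape here maps onto the use cases mentioned below: each product is one self-contained document.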

From a service perspective, this would be delivered in Azure Cosmos DB.

Examples of use cases would be Product catalogs, Content management or Inventory management.

  • Columnar data stores
Image Credit – Microsoft

A columnar or column-family data store organizes data into columns and rows, which looks very similar to a relational database. However, while a column-family database stores data in tables with rows and columns, the columns are divided into groups known as column families. Each column family holds a set of columns that are logically related and are typically retrieved or manipulated as a unit. New columns can be added dynamically, and rows can be sparse.
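A toy sketch of the column-family layout (plain Python dictionaries, not the Cassandra API; the row keys, families, and columns are invented):

```python
# Each row key maps to column families, and each family groups
# logically related columns. Rows can omit families or columns
# entirely -- the storage is sparse.
table = {
    "user:1": {
        "identity": {"name": "Alice", "email": "alice@example.com"},
        "activity": {"last_login": "2022-03-01"},
    },
    "user:2": {
        "identity": {"name": "Bob"},  # no 'activity' family at all
    },
}

def read_family(row_key, family):
    # Column families are typically retrieved or manipulated as a unit.
    return table.get(row_key, {}).get(family, {})

# New columns can be added dynamically to an existing (or missing) family.
table["user:2"].setdefault("activity", {})["last_login"] = "2022-03-05"
```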

From a service perspective, this would be delivered in the Azure Cosmos DB Cassandra API, which is used to run apps written for Apache Cassandra.

Examples of use cases would be Sensor data, Messaging, Social media and Web analytics, Activity monitoring, or Weather and other time-series data.

  • Key/value Data Stores
Image Credit – Microsoft

A key/value store associates each data value with a unique key. Most key/value stores only support simple query, insert, and delete operations. To modify a value (either partially or completely), an application must overwrite the existing data for the entire value. Key/value stores are highly optimized for applications performing simple lookups, but are less suitable if you need to query data across different key/value stores. Key/value stores are also not optimized for querying by value.
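The read-modify-write behaviour described above can be sketched with a toy in-memory key/value store (plain Python, not any of the Azure services; the keys and values are invented):

```python
# A toy key/value store: simple get/put/delete only, and updating a
# value means overwriting it entirely -- there is no partial update.
kv = {}

def put(key, value):
    kv[key] = value  # overwrite semantics

def get(key):
    return kv.get(key)

def delete(key):
    kv.pop(key, None)

put("session:42", {"user": "alice", "cart": ["widget"]})

# To add an item to the cart, the application must read, modify, and
# write back the whole value -- the store cannot change part of a
# value in place.
value = get("session:42")
value["cart"].append("gizmo")
put("session:42", value)
```

The session-state shape here matches the session-management use case mentioned below: fast lookup by a single key, no cross-key queries.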

From a service perspective, this would be delivered in Azure Cosmos DB Table API or SQL API, Azure Cache for Redis, or Azure Table Storage.

Examples of use cases would be Data caching, Session management, or Product recommendations and ad serving.

  • Graph Databases
Image Credit – Microsoft

A graph database stores two types of information, nodes and edges. Edges specify relationships between nodes. Nodes and edges can have properties that provide information about that node or edge, similar to columns in a table. Edges can also have a direction indicating the nature of the relationship.

Graph databases can efficiently perform queries across the network of nodes and edges and analyze the relationships between entities.
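A minimal sketch of nodes, directed edges, and a relationship traversal (plain Python, not the Gremlin API; the org-chart data is invented):

```python
# A toy property graph: nodes and directed edges, both of which can
# carry properties describing them.
nodes = {
    "alice": {"type": "person"},
    "bob":   {"type": "person"},
    "carol": {"type": "person"},
}
# Each edge is (source, relationship, target) -- the direction
# indicates the nature of the relationship.
edges = [
    ("alice", "manages", "bob"),
    ("bob",   "manages", "carol"),
]

def direct_reports(manager):
    return [dst for src, rel, dst in edges
            if src == manager and rel == "manages"]

def all_reports(manager):
    # Walk the 'manages' edges transitively -- the kind of
    # relationship query graph databases are optimized for.
    found = []
    stack = [manager]
    while stack:
        for report in direct_reports(stack.pop()):
            found.append(report)
            stack.append(report)
    return found
```

This is the org-chart use case mentioned below: "who is in Alice's reporting line" is one traversal rather than a series of relational joins.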

From a service perspective, this would be delivered in Azure Cosmos DB Gremlin API.

Examples of use cases would be Organization charts, Social graphs, and Fraud detection.


And that's a whistle-stop tour of the different types of databases available in Azure. There are other options such as Data Lake and Time Series, but I'll leave those for future posts as they are bigger topics that deserve more attention.

Hope you enjoyed this post, until next time – I feel like going bowling now!

100 Days of Cloud – Day 61: Azure Monitor Metrics and Logs

It's Day 61 of my 100 Days of Cloud journey, and today I'm continuing to look at Azure Monitor, and am going to dig deeper into Azure Monitor Metrics and Azure Monitor Logs.

In our high level overview diagram, we saw that Metrics and Logs are the Raw Data that has been collected from the data sources.

Image Credit – Microsoft

Let's take a quick look at both options and what they are used for, as that will give us an insight into why we need both of them!

Azure Monitor Metrics

Azure Monitor Metrics collects data from monitored resources and stores the data in a time-series database (for an open-source equivalent, think InfluxDB). Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.

Each set of metric values is a time series with the following properties:

  • The time that the value was collected.
  • The resource that the value is associated with.
  • A namespace that acts like a category for the metric.
  • A metric name.
  • The value itself.
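Those properties can be modelled roughly like this (an illustrative Python dataclass, not the actual Azure Monitor schema; the resource ID, namespace, and values are invented):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# One entry in a metric time series, mirroring the properties above.
@dataclass
class MetricSample:
    timestamp: datetime    # the time the value was collected
    resource_id: str       # the resource the value is associated with
    namespace: str         # a category for the metric
    name: str              # the metric name
    value: float           # the value itself

# Samples collected at a regular one-minute interval form the series.
start = datetime(2022, 3, 1, 12, 0)
series = [
    MetricSample(
        timestamp=start + timedelta(minutes=i),
        resource_id="/subscriptions/example/myVM",  # invented ID
        namespace="Microsoft.Compute/virtualMachines",
        name="Percentage CPU",
        value=v,
    )
    for i, v in enumerate([12.0, 15.5, 11.0])
]
```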

Once our metrics are collected, there are a number of options we have for using them, including:

  • Analyze – Use Metrics Explorer to analyze collected metrics on a chart and compare metrics from various resources.
  • Alert – Configure a metric alert rule that sends a notification or takes automated action when the metric value crosses a threshold.
  • Visualize – Pin a chart from Metrics Explorer to an Azure dashboard, or export the results of a query to Grafana to use its dashboarding and combine with other data sources.
  • Automate – Increase or decrease resources based on a metric value crossing a threshold.
  • Export – Route metrics to logs to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.
  • Archive – Archive the performance or health history of your resource for compliance, auditing, or offline reporting purposes.
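The "Alert" option above boils down to evaluating metric values against a threshold; here's a toy sketch (the rule shape is invented, not the Azure Monitor alert rule schema):

```python
# A toy metric alert rule: fire a notification whenever the metric
# value crosses the configured threshold.
def evaluate_alert(values, threshold):
    fired = []
    for v in values:
        if v > threshold:
            fired.append(f"ALERT: value {v} exceeded threshold {threshold}")
    return fired

# Two of the three samples breach an 80% threshold.
notifications = evaluate_alert([60.0, 85.0, 92.5], threshold=80.0)
```

A real alert rule also handles evaluation windows, aggregation (average, max, and so on), and action groups, but the core idea is this comparison.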

Azure Monitor can collect metrics from a number of sources:

  • Azure Resources – gives visibility into their health and performance over a period of time.
  • Applications – detect performance issues and track trends in how the application is being used.
  • Virtual Machine Agents – collect guest OS metrics from Windows or Linux VMs.
  • Custom metrics can also be defined for an app that's monitored by Application Insights.

We can use Metrics Explorer to analyze the metric data and chart the values over time.

Image Credit – Microsoft

When it comes to retention:

  • Platform metrics are stored for 93 days.
  • Guest OS Metrics sent to Azure Monitor Metrics are stored for 93 days.
  • Guest OS Metrics collected by the Log Analytics agent are stored for 31 days, and can be extended up to 2 years.
  • Application Insight log-based metrics are variable and depend on the events in the underlying logs (31 days to 2 years).

You can find more details on Azure Monitor Metrics here.

Azure Monitor Logs

Azure Monitor Logs collects and organizes log and performance data from monitored resources. Log data is stored in a structured format which can then be queried using a query language called Kusto Query Language (KQL).
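To give a feel for the kind of query KQL enables over structured log records, here's a rough Python analogue of a simple where/summarize query (the record shape is invented for illustration):

```python
# Structured log records, each with a consistent set of properties --
# the fields here are invented to resemble a typical events table.
logs = [
    {"TimeGenerated": "2022-03-01T12:00:00Z", "Level": "Error",       "Computer": "web01"},
    {"TimeGenerated": "2022-03-01T12:01:00Z", "Level": "Information", "Computer": "web01"},
    {"TimeGenerated": "2022-03-01T12:02:00Z", "Level": "Error",       "Computer": "web02"},
]

# Rough Python analogue of the KQL query:
#   Events | where Level == "Error" | summarize count() by Computer
errors = [r for r in logs if r["Level"] == "Error"]
per_computer = {}
for r in errors:
    per_computer[r["Computer"]] = per_computer.get(r["Computer"], 0) + 1
```

KQL expresses this as a pipeline of operators over the workspace tables, which is far more concise once queries get complex.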

Once our logs are collected, there are a number of options we have for using them, including:

  • Analyze – Use Log Analytics in the Azure portal to write log queries and interactively analyze log data by using a powerful analysis engine.
  • Alert – Configure a log alert rule that sends a notification or takes automated action when the results of the query match a particular result.
  • Visualize –
    • Pin query results rendered as tables or charts to an Azure dashboard.
    • Export the results of a query to Power BI to use different visualizations and share with users outside Azure.
    • Export the results of a query to Grafana to use its dashboarding and combine with other data sources.
  • Get insights – Logs support insights that provide a customized monitoring experience for particular applications and services.
  • Export – Configure automated export of log data to an Azure storage account or Azure Event Hubs, or build a workflow to retrieve log data and copy it to an external location by using Azure Logic Apps.

You need to create a Log Analytics Workspace in order to store the data. You can use Log Analytics Workspaces for Azure Monitor, but also to store data from other Azure services such as Sentinel or Defender for Cloud in the same workspace.

Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns. Log queries define columns of data to retrieve and provide output to different features of Azure Monitor and other services that use workspaces.

Image Credit: Microsoft

You can then use Log Analytics to edit and run log queries and to analyze the output. Log queries are the method of retrieving data from the Log Analytics Workspace; they are written in Kusto Query Language (KQL). You can write log queries in Log Analytics to interactively analyze their results, use them in alert rules to be proactively notified of issues, or include their results in workbooks or dashboards.

You can learn about KQL in more detail here, and find more details about Azure Monitor Logs here.


And that's a brief look at Azure Monitor Metrics and Logs. We can see the differences between them, and how they can work together to build a powerful monitoring stack that can go right down to automating fixes for alerts as they happen!

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 60: Azure Monitor

It's Day 60 of my 100 Days of Cloud journey, and today's post is all about Azure Monitor.

Azure Monitor is a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. The information collected by Azure Monitor helps you understand how your resources in Azure, on-premises (via Azure Arc), and multi-cloud (via Azure Arc) environments are performing, and proactively identify issues affecting them and the resources they depend on.


The following diagram gives a high-level view of Azure Monitor:

Image Credit – Microsoft

We can see on the left of the diagram the Data Sources that Azure Monitor will collect data from. Azure Monitor can collect data from the following:

  • Application monitoring data: Data about the performance and functionality of the code you have written, regardless of its platform.
  • Guest OS monitoring data: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
  • Azure resource monitoring data: Data about the operation of an Azure resource.
  • Azure subscription monitoring data: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
  • Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as Azure Active Directory.

In the center, we then have Metrics and Logs. This is the raw data that has been collected:

  • Metrics are numerical values that describe some aspect of a system at a particular point in time. They are lightweight and capable of supporting near real-time scenarios.
  • Logs contain different kinds of data organized into records with different sets of properties for each type. Telemetry such as events and traces are stored as logs in addition to performance data so that it can all be combined for analysis.

Finally, on the right-hand side we have our insights and visualizations. Having all of that monitoring data is no use to us if we're not doing anything with it. Azure Monitor allows us to create customized monitoring experiences for a particular service or set of services. Examples of this are:

  • Application Insights: Application Insights monitors the availability, performance, and usage of your web applications whether they’re hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application’s operations. It enables you to diagnose errors without waiting for a user to report them.
Application Insights – Image Credit: Microsoft
  • Container Insights: Container Insights monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Container Instances. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected.
Container Insights – Image Credit: Microsoft
  • VM Insights: VM Insights monitors your Azure virtual machines (VM) at scale. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes.
VM Insights – Image Credit: Microsoft

Responding to Situations

Dashboards are pretty, and we can get pretty dashboards with any monitoring solution on the market. But what if we could do something more with the data than just showing it in a dashboard? Well, we can!!

  • Alerts – Alerts in Azure Monitor proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near real time alerts based on numeric values. Rules based on logs allow for complex logic across data from multiple sources.
Image Credit: Microsoft
  • Autoscale – Autoscale allows you to have the right amount of resources running to handle the load on your application. Create rules that use metrics collected by Azure Monitor to determine when to automatically add resources when load increases. Save money by removing resources that are sitting idle. You specify a minimum and maximum number of instances and the logic for when to increase or decrease resources.
Image Credit: Microsoft
  • Dashboards – OK, so here’s the pretty dashboards! Azure dashboards allow you to combine different kinds of data into a single pane in the Azure portal. You can add the output of any log query or metrics chart to an Azure dashboard.
Image Credit: Microsoft
  • Power BI – And here's some even prettier dashboards! You can configure Power BI to automatically import data from Azure Monitor and take advantage of the business analytics service to provide dashboards from a variety of sources.
Image Credit: Microsoft
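The autoscale logic described above can be sketched as a simple decision function (illustrative only; the thresholds and instance counts are invented, and real autoscale rules support much richer conditions):

```python
# A toy autoscale decision: scale out when the metric crosses an upper
# threshold, scale in below a lower one, and always stay within the
# configured minimum and maximum instance counts.
def autoscale(current, metric, scale_out_at=70.0, scale_in_at=30.0,
              minimum=2, maximum=10):
    if metric > scale_out_at:
        current += 1       # load is high: add an instance
    elif metric < scale_in_at:
        current -= 1       # instances are idle: remove one to save money
    return max(minimum, min(maximum, current))
```

The clamp at the end is what the minimum/maximum instance settings do: no matter what the metric says, the instance count never leaves that range.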

External Integration

We can also integrate Azure Monitor with other systems to build custom solutions that use your monitoring data. Other Azure services work with Azure Monitor to provide this integration:

  • Azure Event Hubs is a streaming platform and event ingestion service. It can transform and store data using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream Azure Monitor data to partner SIEM and monitoring tools.
  • Logic Apps is a service that allows you to automate tasks and business processes using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor. This allows you to build workflows integrating with a variety of other systems.
  • Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. This provides you with essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor.


And that's a brief overview of Azure Monitor. We can see how powerful a tool it can be, not just for collecting and monitoring your event logs and metrics, but also for taking actions based on limits that you set.

You can find more detailed information in the Microsoft Documentation here, and you can also find best practice guidance for monitoring in the Azure Architecture Center here. Hope you enjoyed this post, until next time!