100 Days of Cloud – Day 67: Azure Event Hubs

It's Day 67 of my 100 Days of Cloud journey, and today I'm looking at Azure Event Hubs.

In the last post, we looked at Azure Event Grid, which is a serverless offering that allows you to easily build applications with event-based architectures. Azure Event Grid supports a number of event sources, and one of those is Azure Event Hubs.

Whereas Azure Event Grid can take in events from sources and trigger actions based on those events, Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.

One of the key differences between the two services is that while Event Grid can plug directly into Azure services and listen for events coming from their sources, Event Hubs can listen for events coming from sources outside of Azure, and can handle millions of events coming from multiple devices.

The following are some of the scenarios where you can use Event Hubs:

  • Anomaly detection (fraud/outliers)
  • Application logging
  • Analytics pipelines, such as clickstreams
  • Live dashboards
  • Archiving data
  • Transaction processing
  • User telemetry processing
  • Device telemetry streaming

Concepts

Azure Event Hubs represents the “front door” (or Event Ingestor, to give it its correct name) for an event pipeline, and sits between event producers and event consumers. It decouples the process of producing data from the process of consuming data. You can publish events individually or in batches.

Azure Event Hubs is built around the concepts of partitions and consumer groups. Inside an event hub, events are sent to partitions by specifying the partition key or partition ID. The partition count of an event hub cannot be changed after creation, so be mindful of this limitation.

Image Credit: Microsoft
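
To make partition keys concrete, here's a minimal producer sketch using the azure-eventhub Python SDK. It's a sketch under assumptions: the connection string, hub name, and the "device-42" partition key are placeholder values I've made up, not anything from a real deployment.

```python
# pip install azure-eventhub
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details for illustration
CONN_STR = "<event-hubs-namespace-connection-string>"

producer = EventHubProducerClient.from_connection_string(
    CONN_STR, eventhub_name="<event-hub-name>"
)

with producer:
    # Events sharing a partition key land on the same partition,
    # which preserves ordering for that key
    batch = producer.create_batch(partition_key="device-42")
    batch.add(EventData('{"deviceId": "device-42", "temperature": 21.5}'))
    batch.add(EventData('{"deviceId": "device-42", "temperature": 21.7}'))
    producer.send_batch(batch)
```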

Receivers are grouped into consumer groups. A consumer group represents a view (state, position, or offset) of an entire event hub. It can be thought of as a set of parallel applications that consume events at the same time.

Consumer groups enable receivers to each have a separate view of the event stream. They read the stream independently at their own pace and with their own offsets. Event Hubs uses a partitioned consumer pattern; events are spread across partitions to allow horizontal scale. Events can also be captured to either Blob Storage or Azure Data Lake Storage; this is configured when the event hub is created.

Image Credit: Microsoft
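
On the consuming side, a similarly hedged sketch reads from the default consumer group; the connection details are again placeholders, and real code would normally add a Blob Storage checkpoint store so offsets survive restarts.

```python
# pip install azure-eventhub
from azure.eventhub import EventHubConsumerClient

CONN_STR = "<event-hubs-namespace-connection-string>"

def on_event(partition_context, event):
    # Each consumer group maintains its own view (offset) per partition
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")

consumer = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name="<event-hub-name>"
)

with consumer:
    # starting_position="-1" means read from the start of each partition
    consumer.receive(on_event=on_event, starting_position="-1")
```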

Use Cases

Event Hubs is the component to use for real-time and/or streaming data use cases:

  • Real-time reporting
  • Capture streaming data into files for further processing and analysis – e.g. capturing data from micro-service applications or a mobile app
  • Make data available to stream-processing and analytics services – e.g. when scoring an AI algorithm
  • Telemetry streaming & processing
  • Application logging

Event Hubs is also available as a feature for Azure Stack Hub, which allows you to realize hybrid cloud scenarios. Streaming and event-based solutions are supported, for both on-premises and Azure cloud processing.

Conclusion

You can learn more about Event Hubs in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 66: Azure Event Grid

It's Day 66 of my 100 Days of Cloud journey, and today I'm looking at Azure Event Grid, which I came across during my AZ-204 studies.

Azure Event Grid is a serverless offering that allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then provide the event handler or webhook endpoint to send the event to.

Image Credit: Microsoft

Concepts

Azure Event Grid uses the following concepts which you will need to understand:

  • Events: An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like the source of the event, the time the event took place, and a unique identifier. Examples of common events would be a file being uploaded, a virtual machine being deleted, or a SKU being added.
  • Publishers: A publisher is the user or organization that decides to send events to Event Grid.
  • Event Sources: An event source is where the event happens. Each event source is related to one or more event types. For example, Azure Storage is the event source for blob created events. The following Azure services support sending events to Event Grid:
    • Azure API Management
    • Azure App Configuration
    • Azure App Service
    • Azure Blob Storage
    • Azure Cache for Redis
    • Azure Communication Services
    • Azure Container Registry
    • Azure Event Hubs
    • Azure FarmBeats
    • Azure IoT Hub
    • Azure Key Vault
    • Azure Kubernetes Service (preview)
    • Azure Machine Learning
    • Azure Maps
    • Azure Media Services
    • Azure Policy
    • Azure resource groups
    • Azure Service Bus
    • Azure SignalR
    • Azure subscriptions
  • Topics: An Event Grid topic provides an endpoint where the source sends events. A topic is used for a collection of related events (see the publishing sketch after this list).
  • Event Subscriptions: A subscription tells Event Grid which events on a topic you’re interested in receiving. When creating the subscription, you provide an endpoint for handling the event.
  • Event Handlers: From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. The supported event handlers are:
    • Webhooks. Azure Automation runbooks and Logic Apps are supported via webhooks.
    • Azure functions
    • Event hubs
    • Service Bus queues and topics
    • Relay hybrid connections
    • Storage queues
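
To illustrate topics and custom events in practice, here's a hedged sketch that publishes a custom event to a custom Event Grid topic using the azure-eventgrid Python SDK; the endpoint, key, subject, and event type are placeholder values I've made up.

```python
# pip install azure-eventgrid
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Placeholder endpoint and access key for a custom topic
TOPIC_ENDPOINT = "https://<topic-name>.<region>.eventgrid.azure.net/api/events"
TOPIC_KEY = "<topic-access-key>"

client = EventGridPublisherClient(TOPIC_ENDPOINT, AzureKeyCredential(TOPIC_KEY))

client.send(EventGridEvent(
    subject="orders/12345",
    event_type="MyApp.Orders.OrderCreated",  # hypothetical custom event type
    data={"orderId": "12345", "total": 42.0},
    data_version="1.0",
))
```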

Capabilities

The key features of Azure Event Grid are:

  • Simplicity – You can direct events from any of the sources listed above to any event handler or endpoint.
  • Advanced filtering – Filter on event type to ensure event handlers only receive relevant events.
  • Fan-out – Subscribe several endpoints to the same event to send copies of the event to as many places as needed.
  • Reliability – 24-hour retry ensures that events are delivered.
  • Pay-per-event – Azure Event Grid uses a pay-per-event pricing model, so you only pay for what you use. The first 100,000 operations per month are free.
  • High throughput – Build high-volume workloads on Event Grid and scale up/down or in/out as required.
  • Built-in Events – Get up and running quickly with resource-defined built-in events.
  • Custom Events – Use Event Grid to route, filter, and reliably deliver custom events in your app.

Use Cases

Azure Event Grid provides several features that vastly improve serverless, ops automation, and integration work:

  • Serverless application architectures
Image Credit: Microsoft

Event Grid connects data sources and event handlers. For example, use Event Grid to trigger a serverless function that analyzes images when added to a blob storage container.

  • Ops Automation
Image Credit: Microsoft

Event Grid allows you to speed automation and simplify policy enforcement. For example, use Event Grid to notify Azure Automation when a virtual machine or database in Azure SQL is created. Use the events to automatically check that service configurations are compliant, put metadata into operations tools, tag virtual machines, or file work items.

  • Application integration
Image Credit: Microsoft

Event Grid connects your app with other services. For example, create a custom topic to send your app’s event data to Event Grid, and take advantage of its reliable delivery, advanced routing, and direct integration with Azure. Or, you can use Event Grid with Logic Apps to process data anywhere, without writing code.

Conclusion

You can learn more about Event Grid in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 65: AZ-204 Exam Day!

It's Day 65 of my 100 Days of Cloud journey, and today I sat Exam AZ-204: Developing Solutions for Microsoft Azure.

This was using up my Exam Voucher that I earned for completing the Cloud Skill Challenge from Microsoft Ignite Fall 2021.

I chose this exam because it was a bit of a challenge – coming from a pure Infrastructure background, the idea of taking a Developer exam was something I never would have thought about a year ago. I'd heard of some of the concepts covered in the objectives, and the other exam options were either on the Cloud Infra or Security side, so I decided to go outside my comfort zone and take on the challenge.

The skills measured list reads like this:

  • Develop Azure compute solutions (25-30%)
  • Develop for Azure storage (15-20%)
  • Implement Azure security (20-25%)
  • Monitor, troubleshoot, and optimize Azure solutions (15-20%)
  • Connect to and consume Azure services and third-party services (15-20%)

Looks like any other Infra-based exam we’ve seen before, right? Well, think again ….

Giving an NDA-friendly review, the exam goes deep into topics such as serverless compute, application and database development, authentication, and monitoring. A lot of this seems very infra-focused, but the way I look at it is this – if you've done the Infra side of things like AZ-104, where you learn how the infra works and how to put it together, AZ-204 approaches the same ground from a far deeper angle, digging into the concepts and showing how we can programmatically manage these technologies.

And I’m delighted to say I made it out on the other side and passed the exam!

So does this make me an expert in Azure Development or turn me into a developer? Well, no, not overnight, but it's a big step in that direction. The thing with this exam is that even though I passed, I feel there is LOADS more content, demos and lab scenarios that I can look into – the exam itself only scratches the surface. But now that I've gotten into this and started down the rabbit hole, it's given me loads of ideas for how to use the content and technologies that I learned!

For learning paths on this one, I used the following:

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 64: Azure Cosmos DB

It's Day 64 of my 100 Days of Cloud journey, and today I'm looking at Azure Cosmos DB.

In the last post, we looked at Azure SQL and the different options we have available for hosting SQL Databases in Azure. SQL is an example of a Relational Database Management System (RDBMS), which follows a traditional model of storing data using 2-dimensional tables where data is stored in columns and rows in a pre-defined schema.

The opposite to this is non-relational databases, which use a storage model that is optimized for the specific requirements of the type of data being stored. Non-relational databases can have the following structures:

  • Document Data Stores, which store data in JSON, XML, YAML or plain text format.
  • Columnar Data Stores, which store data in column families that are logically related and manipulated as a unit.
  • Key/value Data Stores, which hold data values with corresponding keys.
  • Graph Databases, which are made up of nodes and edges and suit data such as organization charts and fraud detection.

All of the above options can be achieved by using Azure Cosmos DB.

Overview

Let's start with an overview – Azure Cosmos DB is a fully managed NoSQL database that provides highly available, globally distributed access to data with very low latency.

If we log on to the Azure Portal and go to create an Azure Cosmos DB, we are given the options below:

The different APIs available are:

  • Core (SQL) API: Provides the flexibility of a NoSQL document store combined with the power of SQL for querying.
  • MongoDB API: Supports the MongoDB wire protocol so that existing MongoDB clients continue to work with Azure Cosmos DB as if they were running against an actual MongoDB database.
  • Cassandra API: Supports the Cassandra wire protocol so that existing Apache drivers compliant with CQLv4 continue to work with Azure Cosmos DB as if they were running against an actual Cassandra database.
  • Gremlin API: Supports graph data with Apache TinkerPop (a graph computing framework) and the Gremlin query language.
  • Table API: Provides premium capabilities for applications written for Azure Table storage.

The key to picking an API is to select the one that best meets the needs of your database, but be warned: once you pick an API, you cannot change it afterwards. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.

Once your API is selected, you get into the usual screens for creating resources in Azure:
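
To give a feel for the Core (SQL) API, here's a minimal sketch using the azure-cosmos Python SDK; the account URL, key, and the "retail"/"products" names are hypothetical examples, not anything prescribed by the service.

```python
# pip install azure-cosmos
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder account details
client = CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    credential="<account-primary-key>",
)

database = client.create_database_if_not_exists("retail")
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/category"),
)

container.upsert_item({"id": "1", "category": "bikes", "name": "Road Bike"})

# SQL-style queries over JSON documents
for item in container.query_items(
    query="SELECT c.name FROM c WHERE c.category = 'bikes'",
    enable_cross_partition_query=True,
):
    print(item)
```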

Pricing

Now this is where we need to talk about pricing – in SQL, we are familiar with licensing using cores. This works the same way in Azure with the concept of vCores, but we also have the concept of Database Transaction Units (DTUs), which are based on a bundled measure of compute, storage, and I/O resources.

In Azure Cosmos DB, usage is priced based on Request Units (RUs). You can think of RUs per second as the currency for throughput. As shown in the screenshot above, there are 2 pricing models available:

  • Provisioned throughput mode: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. You are billed on an hourly basis for the number of RUs per second you have provisioned.
  • Serverless mode: In this mode, you don’t have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that have been consumed by your database operations.

We also have a 3rd option:

  • Autoscale mode: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage, without impacting the availability, latency, throughput, or performance of the workload.

Each request to Azure Cosmos DB returns the consumed RUs to you, so you can decide whether to stop your requests or increase the RU limit in the Azure portal.
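
Continuing the earlier sketch, the Python SDK surfaces that per-request charge through the response headers, so you can see what each operation costs:

```python
# Continuing the hypothetical container from the sketch above
item = container.read_item(item="1", partition_key="bikes")

# The charge for the last operation comes back as a response header
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Point read cost: {charge} RUs")  # a small point read is roughly 1 RU
```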

Consistency Levels

The other important thing to note about Cosmos DB is Consistency Levels. Because Cosmos DB is a globally distributed database, you can set the level of consistency for replication across your global data centers. There are 5 levels to choose from (a small SDK sketch follows the list):

  • Strong consistency is the strictest type of consistency available in Cosmos DB. The data is synchronously replicated to all the replicas in real time. This mode of consistency is useful for applications that cannot tolerate any data loss in case of downtime.
  • In the Bounded Staleness level, data is replicated asynchronously with a predetermined staleness window, defined either by a number of writes or a period of time. Reads may lag behind by either a certain number of writes or a pre-defined time period. However, reads are guaranteed to honor the sequence of the data.
  • Session consistency is the default consistency that you get when configuring a Cosmos DB account. This level of consistency honors the client session. It ensures strong consistency for an application session with the same session token.
  • The Consistent Prefix model is similar to bounded staleness, minus the operational or time-lag guarantee. The replicas guarantee the consistency and ordering of writes; however, the data is not always current. This model ensures that the user never sees an out-of-order write.
  • Eventual consistency is the weakest consistency level of all. The first thing to consider in this model is that there is no guarantee on the order of the data, and no guarantee of how long the data can take to replicate. As the name suggests, the reads are consistent, but only eventually.
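
As a small SDK illustration of the above, a client can ask for a weaker consistency level than the account default (you can relax the default per client, but not strengthen it). A sketch with placeholder account details:

```python
from azure.cosmos import CosmosClient

# Relax the account's default consistency for this client only
client = CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    credential="<account-primary-key>",
    consistency_level="Eventual",  # e.g. cheaper, lower-latency reads
)
```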

Use Cases

Any web, mobile, gaming, or IoT application that needs to handle massive amounts of data, reads, and writes at a global scale with near-real-time response times for a variety of data will benefit from Cosmos DB’s guaranteed high availability, high throughput, low latency, and tunable consistency. The Microsoft Docs article here describes the common use cases for Azure Cosmos DB.

Conclusion

And that's a look at the different options available in Azure Cosmos DB. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 63: Azure SQL Server

It's Day 63 of my 100 Days of Cloud journey, and today I'm looking at SQL services in Azure and the different options we have for hosting SQL databases.

As we discussed in the previous post, SQL is an example of a Relational Database Management System (RDBMS), which follows a traditional model of storing data using 2-dimensional tables where data is stored in columns and rows in a pre-defined schema.

On-premises installations of Microsoft SQL Server follow the traditional IaaS model, where we would install a Windows Server operating system, which provides the platform for the SQL Server database to run on.

In Azure, we have 3 options for migrating and hosting our SQL Databases.

SQL Server on Azure VM

SQL Server on Azure VM is an IaaS offering and allows you to run SQL Server inside a fully managed virtual machine (VM) in Azure.

SQL virtual machines are a good option for migrating on-premises SQL Server databases and applications without any database change.

This option is best suited where OS-level access is required. SQL virtual machines in Azure are lift-and-shift ready for existing applications that require fast migration to the cloud with minimal changes or no changes. SQL virtual machines offer full administrative control over the SQL Server instance and underlying OS for migration to Azure.

SQL Server on Azure Virtual Machines allows full control over the database engine. You can choose when to start maintenance/patching, change the recovery model to simple or bulk-logged, pause or start the service when needed, and you can fully customize the SQL Server database engine. With this additional control comes the added responsibility to manage the virtual machine.

Azure SQL Managed Instance

Azure SQL Managed Instance is a Platform-as-a-Service (PaaS) offering, and is best for most migrations to the cloud. SQL Managed Instance is a collection of system and user databases with a shared set of resources that is lift-and-shift ready.

This option is best suited to new applications or existing on-premises applications that want to use the latest stable SQL Server features and that are migrated to the cloud with minimal changes. An instance of SQL Managed Instance is similar to an instance of the Microsoft SQL Server database engine offering shared resources for databases and additional instance-scoped features.

SQL Managed Instance supports database migration from on-premises with minimal to no database change. This option provides all of the PaaS benefits of Azure SQL Database but adds capabilities that were previously only available in SQL Server VMs. This includes a native virtual network and near 100% compatibility with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL Server access and feature compatibility for migrating SQL Servers to Azure.

Azure SQL Database

Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that falls into the category of a PaaS offering.

This is best for modern cloud applications that want to use the latest stable SQL Server features and have time constraints in development and marketing. It is a fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server. SQL Database has two deployment options (a connection sketch follows the list):

  • As a single database with its own set of resources managed via a logical SQL server. A single database is similar to a contained database in SQL Server. This option is optimized for modern application development of new cloud-born applications. Hyperscale and serverless options are available.
  • An elastic pool, which is a collection of databases with a shared set of resources managed via a logical SQL server. Single databases can be moved into and out of an elastic pool. This option is optimized for modern application development of new cloud-born applications using the multi-tenant SaaS application pattern. Elastic pools provide a cost-effective solution for managing the performance of multiple databases that have variable usage patterns.
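
To show what connecting to a single database looks like from application code, here's a hedged connection sketch using pyodbc; the logical server name, database, and credentials are placeholders, and it assumes the Microsoft ODBC driver is installed.

```python
# pip install pyodbc  (also requires the Microsoft ODBC Driver for SQL Server)
import pyodbc

# Placeholder server, database, and credentials
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<logical-server-name>.database.windows.net,1433;"
    "Database=<database-name>;"
    "Uid=<admin-user>;Pwd=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")  # the same T-SQL surface as on-premises
print(cursor.fetchone()[0])
conn.close()
```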

Overall Comparisons

Both Azure SQL Database and Azure SQL Managed Instance are optimized to reduce overall management costs since you do not have to manage any virtual machines, operating system, or database software. You do not have to manage upgrades, high availability, or backups.

Both options can dramatically increase the number of databases managed by a single IT or development resource. Elastic pools also support SaaS multi-tenant application architectures with features including tenant isolation and the ability to scale to reduce costs by sharing resources across databases. SQL Managed Instance provides support for instance-scoped features enabling easy migration of existing applications, as well as sharing resources among databases.

Finally, the database software is automatically configured, patched, and upgraded by Azure, which reduces your administration overhead.

The alternative is SQL Server on Azure VMs which provides DBAs with an experience most similar to the on-premises environment they’re familiar with. You can use any of the platform-provided SQL Server images (which includes a license) or bring your SQL Server license. All the supported SQL Server versions (2008R2, 2012, 2014, 2016, 2017, 2019) and editions (Developer, Express, Web, Standard, Enterprise) are available. However, as this is a VM, it’s up to you to update/upgrade the operating system and database software and when to install any additional software such as anti-virus.

Management

All of the above options can be managed from the Azure SQL page in the Azure Portal.

Image Credit – Microsoft

Migration

In order to migrate existing SQL workloads, in all cases you would use an Azure Migrate project with the Data Migration Assistant. You can find all of the scenarios relating to migration options here.

Conclusion

And that's a look at the different options for hosting SQL on Azure. Hope you enjoyed this post, until next time – I feel like going bowling now!

100 Days of Cloud – Day 62: Azure Database Solutions

It's Day 62 of my 100 Days of Cloud journey, and today I'm starting to look at the different Database Solutions available in Azure. “The Dude” told me to…

The next 2 posts are going to cover the 2 main offerings – Azure SQL and Azure Cosmos DB. But first, we need to understand the different types of database that are available to us, how they store their data, and the use cases where we would utilize the different database types.

Relational Databases

Let's kick off with Relational Database Management Systems, or RDBMS. These are the traditional model of storing data, and organize the data into 2-dimensional tables which have a series of rows and columns into which the data is stored.

RDBMS databases follow a schema-based model, where the data structure of the schema needs to be defined before any data is written. Any subsequent read or write operations must use the defined schema.

Vendors who use this model provide a version of Structured Query Language (SQL) for retrieving and managing the data. The most common examples of these would be Microsoft SQL, Oracle SQL or PostgreSQL.

RDBMS is useful when data consistency is required, however the downside is that RDBMS cannot easily scale out horizontally.

In Azure, the following RDBMS services are available:

  • Azure SQL Database – this is the fully hosted version of SQL Server.
  • Azure Database for MySQL – an open-source relational database management system. MySQL uses standard SQL commands such as INSERT, DROP, ADD, and UPDATE. The main purpose of MySQL is for e-commerce, data warehouse, and logging applications. Many database-driven websites use MySQL.
  • Azure Database for PostgreSQL – this is a highly scalable RDBMS system which is cross-platform and can run on Linux, Windows and MacOS. PostgreSQL can perform complex queries, foreign keys, triggers, updatable views, and transactional integrity.
  • Azure Database for MariaDB – High performance OpenSource relational database based on MySQL. Dynamic columns allow a single DBMS to provide both SQL and NoSQL data handling for different needs. Supports encrypted tables, LDAP authentication and Kerberos.

The main use cases for RDBMS are:

  • Inventory management
  • Order management
  • Reporting database
  • Accounting

Non-Relational Databases

The opposite of relational databases are non-relational databases, which do not use the tabular schema of rows and columns found in most traditional database systems. Instead, non-relational databases use a storage model that is optimized for the specific requirements of the type of data being stored. For example, data may be stored as simple key/value pairs, as JSON documents, or as a graph consisting of edges and vertices.

Because of the varying ways that data can be stored, there are LOADS of different types of non-relational databases.

Let's take a look at the different types of non-relational or NoSQL database.

  • Document Data Stores
Image Credit – Microsoft

A document data store manages a set of named string fields and object data values in an entity that’s referred to as a document. These are typically stored in JSON format, but can also be stored as XML, YAML, BSON, or even plain text. The fields within these documents are exposed to the storage management system, enabling an application to query and filter data by using the values in these fields. Typically, a document contains the entire data for an entity, and documents are not all required to have the same structure.

The application can retrieve documents by using the document key, which is hashed and is a unique identifier for the document.

From a service perspective, this would be delivered in Azure Cosmos DB.

Examples of use cases would be Product catalogs, Content management or Inventory management.

  • Columnar data stores
Image Credit – Microsoft

A columnar or column-family data store organizes data into columns and rows, which is very similar to a relational database. However, while a column-family database stores the data in tables with rows and columns, the columns are divided into groups known as column families. Each column family holds a set of columns that are logically related and are typically retrieved or manipulated as a unit. New columns can be added dynamically, and rows can be empty.

From a service perspective, this would be delivered in Azure Cosmos DB Cassandra API, which is used to store apps written for Apache Cassandra.

Examples of use cases would be Sensor data, Messaging, Social media and Web analytics, Activity monitoring, or Weather and other time-series data.

  • Key/value Data Stores
Image Credit – Microsoft

A key/value store associates each data value with a unique key. Most key/value stores only support simple query, insert, and delete operations. To modify a value (either partially or completely), an application must overwrite the existing data for the entire value. Key/value stores are highly optimized for applications performing simple lookups, but are less suitable if you need to query data across different key/value stores. Key/value stores are also not optimized for querying by value.

From a service perspective, this would be delivered in Azure Cosmos DB Table API or SQL API, Azure Cache for Redis, or Azure Table Storage (sketched below).

Examples of use cases would be Data caching, Session management, or Product recommendations and ad serving.
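
As a quick illustration of the key/value pattern, here's a minimal sketch against Azure Table Storage using the azure-data-tables Python SDK; the connection string, "sessions" table, and entity fields are made-up examples.

```python
# pip install azure-data-tables
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.create_table_if_not_exists("sessions")

# PartitionKey and RowKey together form the unique key for the value
table.upsert_entity({
    "PartitionKey": "user-123",
    "RowKey": "session-1",
    "cartItems": 3,
})

entity = table.get_entity(partition_key="user-123", row_key="session-1")
print(entity["cartItems"])
```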

  • Graph Databases
Image Credit – Microsoft

A graph database stores two types of information, nodes and edges. Edges specify relationships between nodes. Nodes and edges can have properties that provide information about that node or edge, similar to columns in a table. Edges can also have a direction indicating the nature of the relationship.

Graph databases can efficiently perform queries across the network of nodes and edges and analyze the relationships between entities.

From a service perspective, this would be delivered in Azure Cosmos DB Gremlin API.

Examples of use cases would be Organization charts, Social graphs, and Fraud detection.

Conclusion

And that's a whistle-stop tour of the different types of databases available in Azure. There are other options such as Data Lake and Time Series, but I'll leave those for future posts as they are bigger topics that deserve more attention.

Hope you enjoyed this post, until next time – I feel like going bowling now!

100 Days of Cloud – Day 61: Azure Monitor Metrics and Logs

It's Day 61 of my 100 Days of Cloud journey, and today I'm continuing to look at Azure Monitor, and am going to dig deeper into Azure Monitor Metrics and Azure Monitor Logs.

In our high-level overview diagram, we saw that Metrics and Logs are the raw data that has been collected from the data sources.

Image Credit – Microsoft

Let's take a quick look at both options and what they are used for, as that will give us an insight into why we need both of them!

Azure Monitor Metrics

Azure Monitor Metrics collects data from monitored resources and stores the data in a time series database (for an OpenSource equivalent, think InfluxDB). Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.

Each set of metric values is a time series with the following properties:

  • The time that the value was collected.
  • The resource that the value is associated with.
  • A namespace that acts like a category for the metric.
  • A metric name.
  • The value itself.

Once our metrics are collected, there are a number of options we have for using them, including:

  • Analyze – Use Metrics Explorer to analyze collected metrics on a chart and compare metrics from various resources.
  • Alert – Configure a metric alert rule that sends a notification or takes automated action when the metric value crosses a threshold.
  • Visualize – Pin a chart from Metrics Explorer to an Azure dashboard, or export the results of a query to Grafana to use its dashboarding and combine with other data sources.
  • Automate – Increase or decrease resources based on a metric value crossing a threshold.
  • Export – Route metrics to logs to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.
  • Archive – Archive the performance or health history of your resource for compliance, auditing, or offline reporting purposes.

Azure Monitor can collect metrics from a number of sources:

  • Azure Resources – gives visibility into their health and performance over a period of time.
  • Applications – detect performance issues and track trends in how the application is being used.
  • Virtual Machine Agents – collect guest OS metrics from Windows or Linux VMs.
  • Custom Metrics can also be defined for an app that's monitored by Application Insights.

We can use Metrics Explorer to analyze the metric data and chart the values over time.

Image Credit – Microsoft

When it comes to retention,

  • Platform metrics are stored for 93 days.
  • Guest OS Metrics sent to Azure Monitor Metrics are stored for 93 days.
  • Guest OS Metrics collected by the Log Analytics agent are stored for 31 days, and can be extended up to 2 years.
  • Application Insight log-based metrics are variable and depend on the events in the underlying logs (31 days to 2 years).

You can find more details on Azure Monitor Metrics here.
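
To show what reading those metric values looks like in code, here's a hedged sketch using the azure-monitor-query Python SDK; the resource ID is a placeholder, and it assumes a recent version of the SDK that exposes query_resource.

```python
# pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID - any metric-emitting resource works
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
)

# Each metric is a time series: timestamped values per resource
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.average)
```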

Azure Monitor Logs

Azure Monitor Logs collects and organizes log and performance data from monitored resources. Log data is stored in a structured format which can then be queried using a query language called Kusto Query Language (KQL).

Once our logs are collected, there are a number of options we have for using them, including:

  • Analyze – Use Log Analytics in the Azure portal to write log queries and interactively analyze log data by using a powerful analysis engine.
  • Alert – Configure a log alert rule that sends a notification or takes automated action when the results of the query match a particular result.
  • Visualize –
    • Pin query results rendered as tables or charts to an Azure dashboard.
    • Export the results of a query to Power BI to use different visualizations and share with users outside Azure.
    • Export the results of a query to Grafana to use its dashboarding and combine with other data sources.
  • Get insights – Logs support insights that provide a customized monitoring experience for particular applications and services.
  • Export – Configure automated export of log data to an Azure storage account or Azure Event Hubs, or build a workflow to retrieve log data and copy it to an external location by using Azure Logic Apps.

You need to create a Log Analytics Workspace in order to store the data. You can use Log Analytics Workspaces for Azure Monitor, but also to store data from other Azure services such as Sentinel or Defender for Cloud in the same workspace.

Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns. Log queries define columns of data to retrieve and provide output to different features of Azure Monitor and other services that use workspaces.

Image Credit: Microsoft

You can then use Log Analytics to edit and run log queries and to analyze the output. Log queries are the method of retrieving data from the Log Analytics Workspace; these are written in Kusto Query Language (KQL). You can write log queries in Log Analytics to interactively analyze their results, use them in alert rules to be proactively notified of issues, or include their results in workbooks or dashboards.
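
Here's a small sketch of running a KQL query against a workspace using the azure-monitor-query Python SDK; the workspace ID is a placeholder, and the query assumes the workspace is collecting the standard Heartbeat table.

```python
# pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# A simple KQL query: heartbeat count per computer over the last day
query = """
Heartbeat
| summarize count() by Computer
| order by count_ desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)

# Results come back as tables of rows, mirroring the workspace structure
for table in response.tables:
    for row in table.rows:
        print(row)
```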

You can learn about KQL in more detail here, and find more details about Azure Monitor Logs here.

Conclusion

And that's a brief look at Azure Monitor Metrics and Logs. We can see the differences between them, but also how they can work together to build a powerful monitoring stack that can go right down to automating fixes for alerts as they happen!

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 60: Azure Monitor

It's Day 60 of my 100 Days of Cloud journey, and today's post is all about Azure Monitor.

Azure Monitor is a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. The information collected by Azure Monitor helps you understand how your resources across Azure, on-premises (via Azure Arc), and multi-cloud (via Azure Arc) environments are performing, and proactively identify issues affecting them and the resources they depend on.

Overview

The following diagram gives a high-level view of Azure Monitor:

Image Credit – Microsoft

We can see on the left of the diagram the Data Sources that Azure Monitor will collect data from. Azure Monitor can collect data from the following:

  • Application monitoring data: Data about the performance and functionality of the code you have written, regardless of its platform.
  • Guest OS monitoring data: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
  • Azure resource monitoring data: Data about the operation of an Azure resource.
  • Azure subscription monitoring data: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
  • Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as Azure Active Directory.

In the center, we then have Metrics and Logs. This is the raw data that has been collected:

  • Metrics are numerical values that describe some aspect of a system at a particular point in time. They are lightweight and capable of supporting near real-time scenarios.
  • Logs contain different kinds of data organized into records with different sets of properties for each type. Telemetry such as events and traces are stored as logs in addition to performance data so that it can all be combined for analysis.

Finally, on the right-hand side we have our insights and visualizations. Having all of that monitoring data is no use to us if we’re not doing anything with it. Azure Monitor allows us to create customized monitoring experiences for a particular service or set of services. Examples of this are:

  • Application Insights: Application Insights monitors the availability, performance, and usage of your web applications whether they’re hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application’s operations. It enables you to diagnose errors without waiting for a user to report them.
Application Insights – Image Credit: Microsoft
  • Container Insights: Container Insights monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Container Instances. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected.
Container Insights – Image Credit: Microsoft
  • VM Insights: VM Insights monitors your Azure virtual machines (VM) at scale. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes.
VM Insights – Image Credit: Microsoft

Responding to Situations

Dashboards are pretty, and we can get pretty dashboards with any monitoring solution in the market. But what if we could do something more with the data than just showing it in a dashboard? Well, we can!!

  • Alerts – Alerts in Azure Monitor proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near real time alerts based on numeric values. Rules based on logs allow for complex logic across data from multiple sources.
Image Credit: Microsoft
  • Autoscale – Autoscale allows you to have the right amount of resources running to handle the load on your application. Create rules that use metrics collected by Azure Monitor to determine when to automatically add resources when load increases. Save money by removing resources that are sitting idle. You specify a minimum and maximum number of instances and the logic for when to increase or decrease resources.
Image Credit: Microsoft
  • Dashboards – OK, so here’s the pretty dashboards! Azure dashboards allow you to combine different kinds of data into a single pane in the Azure portal. You can add the output of any log query or metrics chart to an Azure dashboard.
Image Credit: Microsoft
  • PowerBI – And here’s some even prettier dashboards! You can configure PowerBI to automatically import data from Azure Monitor and take advantage of the business analytics service to provide dashboards from a variety of sources.
Image Credit: Microsoft

External Integration

We can also integrate Azure Monitor with other systems to build custom solutions that use your monitoring data. Other Azure services work with Azure Monitor to provide this integration:

  • Azure Event Hubs is a streaming platform and event ingestion service. It can transform and store data using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream Azure Monitor data to partner SIEM and monitoring tools.
  • Logic Apps is a service that allows you to automate tasks and business processes using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor. This allows you to build workflows integrating with a variety of other systems.
  • Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. This provides you with essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor.

Conclusion

And that's a brief overview of Azure Monitor. We can see how powerful a tool it can be, not just to collect and monitor your event logs and metrics, but also to take actions based on limits that you set.

You can find more detailed information in the Microsoft Documentation here, and you can also find best practice guidance for monitoring in the Azure Architecture Center here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 59: Azure Lighthouse

It's Day 59 of my 100 Days of Cloud journey, and today's post is all about Azure Lighthouse.

No, it's not that sort of lighthouse…

Azure Lighthouse enables centralized management of multiple tenants, which can be utilized by:

  • Service Providers who wish to manage their Customer tenants from their own Tenant.
  • Enterprise Organisations with multiple tenants who wish to manage these from a single tenancy.

In each of the above examples, the customer in the underlying tenant maintains control over who has access to their tenant, which resources they can access, and what levels of access they have.

Benefits

The main benefit of Azure Lighthouse is to Service Providers, as it helps them to efficiently build and deliver managed services. Benefits include:

  • Management at scale: Customer engagement and life-cycle operations to manage customer resources are easier and more scalable. Existing APIs, management tools, and workflows can be used with delegated resources, including machines hosted outside of Azure, regardless of the regions in which they’re located.
  • Greater visibility and control for customers: Customers have precise control over the scopes they delegate for management and the permissions that are allowed. They can audit service provider actions and remove access completely at any time.
  • Comprehensive and unified platform tooling: Azure Lighthouse works with existing tools and APIs, Azure managed applications, and partner programs like the Cloud Solution Provider program (CSP). This flexibility supports key service provider scenarios, including multiple licensing models such as EA, CSP and pay-as-you-go. You can integrate Azure Lighthouse into your existing workflows and applications, and track your impact on customer engagements by linking your partner ID.
  • Work more efficiently with Azure services like Azure Policy, Microsoft Sentinel, Azure Arc, and many more. Users can see what changes were made and by whom in the activity log, which is stored in the customer’s tenant and can be viewed by users in the managing tenant.
  • Azure Lighthouse is non-regional, which means you can manage tenants for multiple customers across multiple regions separately.
Image Credit: Microsoft

Visibility

  • Service Providers can manage customers’ Azure resources securely from within their own tenant, without having to switch context and control planes. Service providers can view cross-tenant information in the “My Customers” page in the Azure portal.
  • Customer subscriptions and resource groups can be delegated to specified users and roles in the managing tenant, with the ability to remove access as needed.
    The “Service Providers” page lets customers view and manage their service provider access.

Onboarding

When a customer’s subscription or resource group is onboarded to Azure Lighthouse, two resources are created: 

  • Registration definition – The registration definition contains the details of the Azure Lighthouse offer (the managing tenant ID and the authorizations that assign built-in roles to specific users, groups, and/or service principals in the managing tenant). A registration definition is created at the subscription level for each delegated subscription, or in each subscription that contains a delegated resource group.
  • Registration Assignment – The registration assignment assigns the registration definition to the onboarded subscription(s) and/or resource group(s). A registration assignment is created in each delegated scope. Each registration assignment must reference a valid registration definition at the subscription level, tying the authorizations for that service provider to the delegated scope and thus granting access.

Once this happens, Azure Lighthouse creates a logical projection of resources from one tenant onto another tenant. This lets authorized service provider users sign in to their own tenant with authorization to work in delegated customer subscriptions and resource groups. Users in the service provider’s tenant can then perform management operations on behalf of their customers, without having to sign in to each individual customer tenant.

How it works

At a high level, here’s how Azure Lighthouse works:

  1. Identify the roles that your groups, service principals, or users will need to manage the customer’s Azure resources.
  2. Specify this access and onboard the customer to Azure Lighthouse either by publishing a Managed Service offer to Azure Marketplace, or by deploying an Azure Resource Manager template. This onboarding process creates the two resources described above (registration definition and registration assignment) in the customer’s tenant.
  3. Once the customer has been onboarded, authorized users sign in to your managing tenant and perform tasks at the specified customer scope (subscription or resource group) per the access that you defined. Customers can review all actions taken, and they can remove access at any time.

Conclusion

And that's a brief overview of Azure Lighthouse. You can find more detailed information, service descriptions, and concepts in the Microsoft Documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 58: Azure Content Delivery Network

It's Day 58 of my 100 Days of Cloud journey, and today's post is a quick overview of Azure Content Delivery Network.

A content delivery network is a globally distributed network of servers that delivers cached content to users based on their location. Examples of content that can be delivered via a CDN are websites or blob storage data.

Overview

Azure CDN uses the concept of distributed servers called Point-of-Presence servers (or POPs for short). These POPs store cached content on edge servers that are located close to where users request the content from, therefore reducing latency.

The benefits of using Azure CDN to deliver web site assets include:

  • Better performance and improved user experience for end users.
  • Scaling for better handling of high loads, such as product launches or seasonal sales.
  • Content is served to users directly from edge servers so that less traffic is sent to the origin server.

Azure CDN POP Locations are worldwide, and a full list can be found here.

How it works

Image and Steps Credit – Microsoft
  1. A user (Alice) requests a file (also called an asset) by using a URL with a special domain name, such as <endpoint name>.azureedge.net. This name can be an endpoint hostname or a custom domain. The DNS routes the request to the best performing POP location, which is usually the POP that is geographically closest to the user.
  2. If no edge servers in the POP have the file in their cache, the POP requests the file from the origin server. The origin server can be an Azure Web App, Azure Cloud Service, Azure Storage account, or any publicly accessible web server.
  3. The origin server returns the file to an edge server in the POP.
  4. An edge server in the POP caches the file and returns the file to the original requestor (Alice). The file remains cached on the edge server in the POP until the time-to-live (TTL) specified by its HTTP headers expires. If the origin server didn’t specify a TTL, the default TTL is seven days.
  5. Additional users can then request the same file by using the same URL that Alice used, and can also be directed to the same POP.
  6. If the TTL for the file hasn’t expired, the POP edge server returns the file directly from the cache. This process results in a faster, more responsive user experience.

In order to use CDN, you need to create a CDN Profile in your Azure subscription. A CDN Profile is a collection of CDN endpoints, and you can configure each endpoint to deliver specific content. You can then use the CDN profile in conjunction with your Azure App Service to deliver the app to the CDN locations in your profile.
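
As a sketch of that setup in code, here's roughly how creating a profile and endpoint might look with the azure-mgmt-cdn Python SDK; the resource group, profile/endpoint names, and origin hostname are placeholders, and this assumes a current track-2 version of the SDK.

```python
# pip install azure-mgmt-cdn azure-identity
from azure.identity import DefaultAzureCredential
from azure.mgmt.cdn import CdnManagementClient
from azure.mgmt.cdn.models import Profile, Sku, Endpoint, DeepCreatedOrigin

client = CdnManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A profile is the container for endpoints; the SKU picks the provider/tier
client.profiles.begin_create(
    "<resource-group>", "my-cdn-profile",
    Profile(location="global", sku=Sku(name="Standard_Microsoft")),
).result()

# Each endpoint fronts an origin, e.g. a storage account or web app
endpoint = client.endpoints.begin_create(
    "<resource-group>", "my-cdn-profile", "my-cdn-endpoint",
    Endpoint(
        location="global",
        origins=[DeepCreatedOrigin(
            name="storage-origin",
            host_name="<storage-account>.blob.core.windows.net",
        )],
    ),
).result()

print(endpoint.host_name)  # my-cdn-endpoint.azureedge.net
```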

However, one thing to note: if you are delivering different content types, you will need to create multiple CDN profiles. There are limits set per Azure subscription on CDN; details can be found here.

There are different pricing tiers in CDN which apply to different content types, and you can avail of CDN Network services from Akamai or Verizon as well as Microsoft. You can find full details on pricing here.

Conclusion

You can get a full overview of Azure Content Delivery Network from Microsoft Docs here. Hope you enjoyed this post, until next time!