100 Days of Cloud – Day 73: The Value of User Groups

It's Day 73 of my 100 Days of Cloud journey, and today it's a quick post about the importance of attending and being a member of Azure and Cloud User Groups.

User Groups are a great way to meet new people and network within the community, and also to learn new skills from guest speakers who are experts in their fields.

Over the last few weeks, I’ve attended some excellent User Group sessions with some awesome people in the Cloud Community, such as:

All of these User Groups and many more can be found on meetup.com, and you can follow all of the speakers above on Twitter (links above) or search for them on LinkedIn. Most of the sessions from these User Groups are also available on their YouTube channels a few days after the events.

So log on to meetup and search for a User Group or Community near you, or you can attend these awesome ones above while they are still hosted as online events!

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 72: Migrate On-Premise File Server to Azure Files or SharePoint?

It's Day 72 of my 100 Days of Cloud journey, and today's post attempts to answer a question that is now at the forefront of the majority of IT Departments across the world – we know how to migrate the rest of our Infrastructure and Applications to the Cloud, but what's the best solution for the File Server?

Traditional Cloud Migration Steps

The first step that most companies make into the Cloud is the migration to Microsoft 365 from On-Premise Exchange, because the offer of hosted email is appealing given how critical email communication is to businesses. However, although there are numerous services available in the Microsoft 365 stack (which I'll get into in more detail in a future post), most companies will only use Email and Teams following the migration.

Once Exchange is migrated, that leaves the rest of the infrastructure. We looked at Azure Migrate back on Day 18 and how it can assist with discovery, assessment and migration of on-premise workloads to Azure. Companies will make the decision to migrate their workloads from on-premise infrastructure to Azure IaaS or PaaS services based on the following factors:

  • Legacy or Unsupported Hardware that is expensive to replace.
  • Legacy or Unsupported Virtualization systems (older versions of VMware or Hyper-V).
  • Savings on Data Centre or Comms Room overheads such as power and cooling.
  • The ability to re-architect business applications at speed and scale without the need for additional hardware/software and complicated backup/recovery procedures in the event of failure.
  • Backup and Disaster Recovery costs to meet Compliance and Regulatory requirements.

Once that's done, the celebrations can begin. It's done! We've migrated to the Cloud! Party time! But wait, what's that sitting over in the rack in the corner? Covered in dust, humming and drawing power as if to mock you. You approach and see the lights flicker as the disks spin in protest at the read/write operations, struggling to perform the IOPS required by that bloody Accounts spreadsheet ….

Yes, the File Server. Except it's a long time since it was a simple file server. These days, File Servers encompass managing storage at an enterprise level with storage arrays, disk tiers and caching, redundancy and backup, not to mention the cost of file server operating system upkeep and maintenance.

So we need to migrate the File Server as well, but what are our options?

SharePoint

SharePoint empowers your Departments and Project Teams with dynamic and productive team sites from which you can access and share files, data, news, and resources. Collaborate effortlessly and securely with team members inside and outside your organization, across PCs, Macs, and mobile devices.

All organizations with an Office 365 subscription have 1TB of storage available for use in SharePoint. Any additional storage is based on the number of licensed users you have, and each user adds an additional 10GB of storage to that SharePoint storage pool. So for example, if you have 50 users, you would have a total of roughly 1.5TB of storage.
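To make that pooled-storage maths concrete, here's a tiny sketch in Python. It simply applies the figures quoted above (which should be checked against the current licensing documentation):

```python
# Rough SharePoint storage pool estimate using the figures quoted above (illustrative only).
BASE_TB = 1.0      # base tenant allowance, in TB
PER_USER_GB = 10   # additional pooled storage per licensed user, in GB

def sharepoint_pool_tb(licensed_users: int) -> float:
    """Approximate pooled SharePoint storage in TB for a given number of licensed users."""
    return BASE_TB + (licensed_users * PER_USER_GB) / 1024

print(sharepoint_pool_tb(50))  # ~1.49 TB, i.e. roughly the 1.5TB quoted above
```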

You also have the option to add on additional storage using Office 365 Extra File Storage, however this is limited to 25TB. This is only available as an option with the following plans:

  • Office 365 Enterprise E1
  • Office 365 Enterprise E2
  • Office 365 Enterprise E3
  • Office 365 Enterprise E4
  • Office 365 Enterprise E5
  • Office 365 A3 (faculty)
  • Office 365 A5 (faculty)
  • Office for the web with SharePoint Plan 1
  • Office for the web with SharePoint Plan 2
  • SharePoint Online Plan 1
  • SharePoint Online Plan 2
  • Microsoft 365 Business Basic
  • Microsoft 365 Business Standard
  • Microsoft 365 Business Premium
  • Microsoft 365 E3
  • Microsoft 365 E5
  • Microsoft 365 F1

If you move your files into SharePoint libraries, you can then use the OneDrive sync client to sync both a user's individual OneDrive files and any SharePoint Online libraries that the user needs frequent offline access to.

One important thing to remember – all licensed Office 365 users have 1TB of personal OneDrive storage available for use, but this storage does not contribute to the overall SharePoint storage pool. You can set sharing and storage limits on both OneDrive and SharePoint using the SharePoint Admin Center.

With Microsoft 365, you have a number of options to protect the data that you place into SharePoint Online and OneDrive for Business:

  • Restrict the ability to save, download, or print files on non-corporate owned devices.
  • Restrict the ability to offline sync files on non-corporate owned devices.
  • Control what users can do based on their geographic location or device class or platform.

We can also use additional features available in Azure AD Premium, Microsoft Intune, Office 365 ATP or Azure Information Protection to provide additional protections to the data stored in SharePoint.

You can find out more about SharePoint in the Microsoft documentation here.

Azure Files

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files file shares can be mounted concurrently by cloud or on-premises deployments.

  • SMB Azure file shares are accessible from Windows, Linux, and macOS clients.
  • NFS Azure file shares are accessible from Linux or macOS clients.

Additionally, SMB Azure file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used. Azure Files is closer to traditional on-premise file shares in that you can use both Active Directory and Azure AD-based authentication for access, and you can use Group Policy to map drives just as you would have done with on-premise file shares.
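As a quick illustration of working with a share programmatically, here's a minimal sketch assuming the azure-storage-file-share Python SDK and a storage account connection string; the share and file names are placeholders:

```python
# Minimal sketch: create an Azure file share and upload a file to it.
# Assumes the azure-storage-file-share package and a valid connection string.
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    share_name="department-files",   # hypothetical share name
)
share.create_share()

file_client = share.get_file_client("q1-summary.xlsx")
with open("q1-summary.xlsx", "rb") as data:
    file_client.upload_file(data)
```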

Azure Files is built on Azure Storage and has two distinct billing models:

  • The provisioned model is only available for premium file shares, which are file shares deployed in the FileStorage storage account kind.
  • The pay-as-you-go model is only available for standard file shares, which are file shares deployed in the general purpose version 2 (GPv2) storage account kind.

Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-committing to storage utilization. When you purchase reserved capacity, your reservation must specify the following dimensions:

  • Capacity: Can be either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity reservation.
  • Term: Reservations can be purchased for either a one-year or three-year term.
  • Tier: The tier of Azure Files for the capacity reservation, which can be the premium, hot, or cool tier.
  • Location: The Azure region for the capacity reservation.
  • Redundancy: The storage redundancy for the capacity reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.

Finally, you have the option of Azure File Sync, which is a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM.

You can find out more about Azure Files here, and Azure File Sync here.

Conclusion and Final Thoughts

We've now seen both options available for migrating File Servers to the Microsoft Cloud ecosystem.

From the options we've seen, in my opinion SharePoint is more suited to smaller businesses that are planning to migrate (or have already migrated) to Microsoft 365, while Azure Files is more suited to larger enterprises with multiple sites or regions and higher storage requirements.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 71: Microsoft Sentinel

It's Day 71 of my 100 Days of Cloud journey, and today's post is all about Microsoft Sentinel. This is the new name for Azure Sentinel, following on from the rebranding of a number of Microsoft Azure services at Ignite 2021.

Image Credit: Microsoft

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution. It provides intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.

SIEM and SOAR

We briefly touched on SIEM and SOAR in the previous post on Microsoft Defender for Cloud. Before we go further, let's note Gartner's definitions of SIEM and SOAR:

  • Security information and event management (SIEM) technology supports threat detection, compliance and security incident management through the collection and analysis (both near real time and historical) of security events, as well as a wide variety of other event and contextual data sources. The core capabilities are a broad scope of log event collection and management, the ability to analyze log events and other data across disparate sources, and operational capabilities (such as incident management, dashboards and reporting).
  • SOAR refers to technologies that enable organizations to collect inputs monitored by the security operations team. For example, alerts from the SIEM system and other security technologies — where incident analysis and triage can be performed by leveraging a combination of human and machine power — help define, prioritize and drive standardized incident response activities. SOAR tools allow an organization to define incident analysis and response procedures in a digital workflow format.

Overview of Sentinel Functionality

Microsoft Sentinel gives a single view of your entire estate – devices, users, applications and infrastructure – across both on-premise and multiple cloud environments. The key features are:

  • Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
  • Detect previously undetected threats, and minimize false positives using Microsoft’s analytics and unparalleled threat intelligence.
  • Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cyber security work at Microsoft.
  • Respond to incidents rapidly with built-in orchestration and automation of common tasks.

Sentinel can ingest alerts not just from Microsoft solutions such as Defender, Office 365 and Azure AD, but from a multitude of 3rd-party and multi-cloud providers such as Akamai, Amazon, Barracuda, Cisco, Fortinet, Google, Qualys and Sophos (and that's just to name a few – you can find the full list here). These are what's known as Data Sources, and the data is ingested using the wide range of built-in connectors that are available:

Image Credit: Microsoft

Once your data sources are connected, the data is monitored using Sentinel integration with Azure Monitor Workbooks, which allows you to visualize your data:

Image Credit: Microsoft

Once the data and workbooks are in place, Sentinel uses analytics and machine learning rules to map your network behaviour and to combine multiple related alerts into incidents which you can view as a group to investigate and resolve possible threats. The benefit here is that Sentinel lowers the noise that is created by multiple alerts and reduces the number of alerts that you need to react to:

Image Credit: Microsoft

Sentinel's automation and orchestration playbooks are built on Azure Logic Apps, and there is a growing gallery of built-in playbooks to choose from. These are based on standard and repeatable events and, in the same way as standard Logic Apps, are triggered by a particular action or event:

Image Credit: Microsoft

Last but not least, Sentinel has investigation tools that go deep to find the root cause and scope of a potential security threat, and hunting tools based on the MITRE ATT&CK framework which enable you to hunt for threats across your organization's data sources before an event is triggered.
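To give a rough idea of what a hunting-style query looks like programmatically, here's a minimal sketch assuming the azure-monitor-query and azure-identity Python SDKs; the workspace ID and the KQL query are placeholders:

```python
# Minimal sketch: run a KQL query against the Log Analytics workspace backing Sentinel.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<sentinel-workspace-id>",               # placeholder workspace ID
    query="SecurityIncident | where Severity == 'High' | take 10",
    timespan=timedelta(days=7),                            # look back over the last week
)

for table in response.tables:
    for row in table.rows:
        print(row)
```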

Do I need both Defender for Cloud and Sentinel?

My advice on this is yes – because they are two different products that integrate with and complement each other.

Sentinel has the ability to detect, investigate and remediate threats. In order for Sentinel to do this, it needs a stream of data from Defender for Cloud or other 3rd party solutions.

Conclusion

We’ve seen how powerful Microsoft Sentinel can be as a tool to protect your entire infrastructure across multiple providers and platforms. You can find more in-depth details on Microsoft Sentinel here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 70: Microsoft Defender for Cloud

It's Day 70 of my 100 Days of Cloud journey, and today's post is all about Azure Security Center! There's one problem though – it's not called that anymore ….

At Ignite 2021 Fall edition, Microsoft announced that the Azure Security Center and Azure Defender products were being rebranded and merged into Microsoft Defender for Cloud.

Overview

Defender for Cloud is a cloud-based tool for managing the security of your multi-vendor cloud and on-premises infrastructure. With Defender for Cloud, you can:

  • Assess: Understand your current security posture using Secure score which tells you your current security situation: the higher the score, the lower the identified risk level.
  • Secure: Harden all connected resources and services using either detailed remediation steps or an automated “Fix” button.
  • Defend: Detect and resolve threats to those resources and services; alerts can be sent by email or streamed to SIEM (Security Information and Event Management), SOAR (Security Orchestration, Automation, and Response) or IT Service Management solutions as required.
Image Credit: Microsoft

Pillars

Microsoft Defender for Cloud’s features cover the two broad pillars of cloud security:

  • Cloud security posture management

CSPM provides visibility to help you understand your current security situation, and hardening guidance to help improve your security.

Central to this is Secure Score, which continuously assesses your subscriptions and resources for security issues. It then presents the findings into a single score and provides recommended actions for improvement.

The guidance in Secure Score is provided by the Azure Security Benchmark, and you can also add other standards such as CIS, NIST or custom organization-specific requirements.

  • Cloud workload protection

Defender for Cloud offers security alerts that are powered by Microsoft Threat Intelligence. It also includes a range of advanced, intelligent protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions.

The Defender plans page of Microsoft Defender for Cloud offers the following plans for comprehensive defenses for the compute, data, and service layers of your environment:

  • Microsoft Defender for servers
  • Microsoft Defender for Storage
  • Microsoft Defender for SQL
  • Microsoft Defender for Containers
  • Microsoft Defender for App Service
  • Microsoft Defender for Key Vault
  • Microsoft Defender for Resource Manager
  • Microsoft Defender for DNS
  • Microsoft Defender for open-source relational databases
  • Microsoft Defender for Azure Cosmos DB (Preview)

Azure, Hybrid and Multi-Cloud Protection

Defender for Cloud is an Azure-native service, so many Azure services are monitored and protected without the need for agent deployment. If agent deployment is needed, Defender for Cloud can deploy the Log Analytics agent to gather data. Azure-native protections include:

  • Azure PaaS: Detect threats targeting Azure services including Azure App Service, Azure SQL, Azure Storage Account, and more data services.
  • Azure Data Services: Automatically classify your data in Azure SQL, and get assessments for potential vulnerabilities across Azure SQL and Storage services.
  • Networks: Reduce access to virtual machine ports using just-in-time VM access, hardening your network by preventing unnecessary access.

For hybrid environments, your on-premise machines are registered with Azure Arc (which we touched on back on Day 44) and can then use Defender for Cloud's advanced security features.

For other cloud providers such as AWS and GCP:

  • Defender for Cloud's CSPM features assess resources according to AWS- or GCP-specific security requirements, and these are reflected in your secure score recommendations.
  • Microsoft Defender for servers brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint amongst other features.
  • Microsoft Defender for Containers brings threat detection and advanced defenses to your Amazon EKS and Google Kubernetes Engine (GKE) clusters.

We can see in the screenshot below how the Defender for Cloud overview page in the Azure Portal gives a full view of resources across Azure and multi-cloud subscriptions, including combined Secure Score, Workload protections, Regulatory compliance, Firewall Manager and Inventory.

Image Credit: Microsoft

Conclusion

You can find more in-depth details on how Microsoft Defender for Cloud can protect your Azure, Hybrid and Multi-Cloud Workloads here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 69: Azure Logic Apps

It's Day 69 of my 100 Days of Cloud journey, and today I'm getting my head around Azure Logic Apps.

Azure Logic Apps is a cloud-based platform for creating and running automated workflows that integrate your apps, data, services, and systems.

Comparison with Azure Functions

If this sounds vaguely familiar, it should because all the way back on Day 55 we looked at Azure Functions, which by definition allows you to create serverless applications in Azure.

So they both do the same thing, right? Well, yes and no. At this stage it's important to show what the differences are between them.

Let's start with Azure Functions:

  • Lets you run event-triggered code without having to explicitly provision or manage infrastructure.
  • Azure Functions offer a “code-first” (imperative) user experience and are primarily authored in Visual Studio or another IDE (see the sketch after this list).
  • Azure Functions have about a dozen built-in binding types (mainly for other Azure services). If there isn’t an existing binding, you will need to write custom code to create new bindings.
  • With Azure Functions, you have 3 pricing options. You can opt for an App Service Plan, which gives you dedicated resources. The second option is completely serverless, with the Consumption plan based on resources consumed and number of executions. The third option is Functions Premium, which is a hybrid of both the App Service Plan and Consumption Plan.
  • As Azure Functions are code-first, the only options for deployment are Visual Studio, Azure DevOps or FTP.
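To give a feel for that code-first model, here's a minimal HTTP-triggered function in Python – a sketch only; in the classic programming model this file sits alongside a function.json that declares the HTTP trigger binding:

```python
# Minimal HTTP-triggered Azure Function (Python) – illustrative of the code-first model.
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Read an optional query-string parameter and return a simple response.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```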

Now, let's compare that with Azure Logic Apps:

  • Azure Logic Apps is a cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.
  • Logic Apps have a “Designer-First” (declarative) experience for the user by providing a visual workflow designer accessed via the Azure Portal.
  • Logic Apps have a large collection of connectors to Azure Services, SaaS applications (Microsoft and others), FTP and Enterprise Integration Pack for B2B scenarios; along with the ability to build custom connectors if one isn’t available. Examples of connectors are:
    • Azure services such as Blob Storage and Service Bus
    • Office 365 services such as Outlook, Excel, and SharePoint
    • Database servers such as SQL and Oracle
    • Enterprise systems such as SAP and IBM MQ
    • File shares such as FTP and SFTP
  • Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. However, there are different pricing tiers available, more information is available here.
  • There are many ways to manage and deploy Logic Apps. They can be created and updated directly in the Azure Portal (which will automatically create a JSON template). The JSON template can also be deployed via Azure DevOps and Visual Studio.

The following list describes just a few example tasks, business processes, and workloads that you can automate using the Azure Logic Apps service:

  • Schedule and send email notifications using Office 365 when a specific event happens, for example, a new file is uploaded.
  • Route and process customer orders across on-premises systems and cloud services.
  • Move uploaded files from an SFTP or FTP server to Azure Storage.
  • Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.

Concepts

These are the key concepts to be aware of:

  • Logic app – A logic app is the Azure resource you create when you want to develop a workflow. There are multiple logic app resource types that run in different environments.
  • Workflow – A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.
  • Trigger – A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow. For example, a trigger event might be getting an email in your inbox or detecting a new file in a storage account.
  • Action – An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

How Logic Apps work

The workflow begins with a trigger, which can have a pull or push pattern. Pull triggers are initiated when a regularly scheduled process finds new updates in the source data since its last pull, while push triggers are initiated each time new data is generated in the source itself.

Next, users define a series of actions that run either consecutively or concurrently, based on the specified trigger and schedule. Users can export the workflow to JSON and use this to create and deploy Logic Apps using tools like Visual Studio and Azure DevOps, or they can save logic apps as Azure Resource Manager templates for reuse.
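For a feel of what that exported definition looks like, here's a rough sketch rendered as a Python dict; the trigger and action below are illustrative only and follow the general shape of the workflow definition schema:

```python
# Rough sketch of an exported Logic App workflow definition (illustrative shape only).
import json

workflow = {
    "definition": {
        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "contentVersion": "1.0.0.0",
        "triggers": {
            # Pull-style trigger: fires on a schedule
            "Recurrence": {"type": "Recurrence", "recurrence": {"frequency": "Hour", "interval": 1}},
        },
        "actions": {
            # A single action that runs once the trigger fires
            "CheckStatus": {
                "type": "Http",
                "inputs": {"method": "GET", "uri": "https://example.com/status"},
                "runAfter": {},
            },
        },
        "outputs": {},
    },
}

print(json.dumps(workflow, indent=2))
```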

Connectors

Connectors are the most powerful aspect of the structure of a Logic App. Connectors are blocks of pre-built operations that communicate with 3rd-party services as steps in the workflow. Connectors can be nested within each other to provide complex solutions that meet exact use case needs. 

Azure contains a catalog of hundreds of available connectors and users can leverage these connectors to accomplish tasks without requiring any coding experience. You can find the full list of connectors here.

Use Cases

The following are the common use cases for Logic Apps:

  • Send an email alert to users based on data being updated in an on-premises database.
  • Query a database and send email notifications based on result criteria.
  • Communication with external platforms and services.
  • Data transformation or ingestion.
  • Social media connectivity using built-in API connectors.
  • Timer- or content-based routing.
  • Create business-to-business (B2B) solutions.
  • Access Azure virtual network resources.

We saw in the templates above how we can use an Event Grid resource event as a trigger; the tutorial here gives an excellent run-through of creating a Logic App based on an Event Grid resource event and using the O365 Email connector.

Conclusion

So that's an overview of Azure Logic Apps and how it compares to Azure Functions. You can find out more about Azure Logic Apps in the official documentation here.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 68: Azure Service Bus

It's Day 68 of my 100 Days of Cloud Journey, and today I'm looking at Azure Service Bus.

In the previous posts, we looked at the different services that Microsoft uses to handle events:

  • Azure Event Grid, which is an eventing backplane that enables event-driven, reactive programming. It uses the publish-subscribe model. Publishers emit events, but have no expectation about how the events are handled. Subscribers decide on which events they want to handle.
  • Azure Event Hubs, which is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. 

Events v Messages

Both of the above services are based on events, and it's important to understand the definition of an event:

  • An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
  • Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn’t have the data that triggered the event.

By contrast, a message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.

Azure Service Bus

Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics (in a namespace). Service Bus is used to decouple applications and services from each other, whether hosted natively on Azure, on-premise, or with any other cloud vendor such as AWS or GCP. Messages are sent to and kept in queues or topics until requested by consumers in a “poll” mode (i.e. only delivered when requested).

Azure Service Bus provides the following benefits:

  • Load-balancing work across competing workers
  • Safely routing and transferring data and control across service and application boundaries
  • Coordinating transactional work that requires a high degree of reliability

Concepts

  • Queues – Messages are sent to and received from queues. Queues store messages until the receiving application is available to receive and process them. Messages are kept in the queue until picked up by consumers, and are retrieved on a first-in, first-out (FIFO) basis. A queue can have one or many competing consumers, but a message is consumed only once (see the sketch after this list).
Image Credit: Microsoft
  • Topics – A queue allows processing of a message by a single consumer. In contrast, topics and subscriptions provide a one-to-many form of communication in a publish-and-subscribe pattern, which is useful for scaling to large numbers of recipients. Each published message is made available to each subscription registered with the topic. A publisher sends a message to a topic and one or more subscribers receive a copy of the message, depending on filter rules set on those subscriptions.
Image Credit: Microsoft

You can define rules on a subscription. A subscription rule has a filter to define a condition for the message to be copied into the subscription and an optional action that can modify message metadata.

  • Namespaces – A namespace is a container for all messaging components (queues and topics). Multiple queues and topics can be in a single namespace, and namespaces often serve as application containers.
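Here's the queue send/receive pattern as a minimal sketch, assuming the azure-servicebus Python SDK and an existing namespace and queue; the connection string and queue name are placeholders:

```python
# Minimal sketch: send a message to a Service Bus queue, then poll and settle it.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

client = ServiceBusClient.from_connection_string("<namespace-connection-string>")

# Publisher: put a message on the queue
with client.get_queue_sender("orders") as sender:
    sender.send_messages(ServiceBusMessage('{"orderId": 1234}'))

# Consumer: pull messages when ready (poll model) and complete them
with client.get_queue_receiver("orders", max_wait_time=5) as receiver:
    for msg in receiver:
        print(str(msg))
        receiver.complete_message(msg)  # the message is consumed only once
```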

There are also a number of advanced features available in Azure Service Bus:

  • Dead Letter Queue: this is a sub-queue to hold messages that could not be delivered or processed.
  • Consumption Mode: Azure Service Bus supports several consumption modes: pub/sub with a pull model, competing consumers, and partitioning can be achieved with the use of topics, subscriptions, and actions.
  • Duplicate Detection: Azure Service Bus supports duplicate detection natively.
  • Delivery Guarantee: Azure Service Bus supports three delivery guarantees: At-least-once, At-most-once, and Effectively once.
  • Message Ordering: Azure Service Bus can guarantee first-in-first-out using sessions.

Conclusion

That's a brief overview of Azure Service Bus. You can learn more about Azure Service Bus in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 67: Azure Event Hubs

It's Day 67 of my 100 Days of Cloud Journey, and today I'm looking at Azure Event Hubs.

In the last post, we looked at Azure Event Grid, which is a serverless offering that allows you to easily build applications with event-based architectures. Azure Event Grid contains a number of sources, and one of those is Azure Event Hub.

Whereas Azure Event Grid can take in events from sources and trigger actions based on those events, Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.

One of the key differences between the two services is that while Event Grid can plug directly into Azure services and listen for events coming from their sources, Event Hubs can listen for events coming from sources outside of Azure, and can handle millions of events coming from multiple devices.

The following are some of the scenarios where you can use Event Hubs:

  • Anomaly detection (fraud/outliers)
  • Application logging
  • Analytics pipelines, such as clickstreams
  • Live dashboards
  • Archiving data
  • Transaction processing
  • User telemetry processing
  • Device telemetry streaming

Concepts

Azure Event Hubs represents the “front door” (or Event Ingestor, to give it its correct name) for an event pipeline, and sits between event producers and event consumers. It decouples the process of producing data from the process of consuming data. You can publish events individually or in batches.
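As a minimal sketch of the producer side (assuming the azure-eventhub Python SDK; the connection string and hub name are placeholders):

```python
# Minimal sketch: publish a batch of events to an event hub.
from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<namespace-connection-string>",
    eventhub_name="telemetry",   # hypothetical event hub name
)

with producer:
    batch = producer.create_batch()          # batching improves throughput
    batch.add(EventData('{"deviceId": "sensor-01", "temp": 21.4}'))
    batch.add(EventData('{"deviceId": "sensor-02", "temp": 19.8}'))
    producer.send_batch(batch)
```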

Azure Event Hubs is built around the concepts of partitions and consumer groups. Inside an Event Hub, events are sent to partitions by specifying the partition key or partition ID. The partition count of an Event Hub cannot be changed after creation, so be mindful of this limitation.

Image Credit: Microsoft

Receivers are grouped into consumer groups. A consumer group represents a view (state, position, or offset) of an entire event hub. It can be thought of as a set of parallel applications that consume events at the same time.

Consumer groups enable receivers to each have a separate view of the event stream. They read the stream independently at their own pace and with their own offsets. Event Hubs uses a partitioned consumer pattern; events are spread across partitions to allow horizontal scale. Events can be stored in either Blob Storage or Data Lake; this is configured when the event hub is created.

Image Credit: Microsoft
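And on the consumer side, a minimal sketch using the same SDK – each consumer group reads the partitions independently at its own pace (names are placeholders, and a checkpoint store would normally be added to persist offsets):

```python
# Minimal sketch: read events from every partition using the default consumer group.
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Print the partition the event arrived on and its body.
    print(partition_context.partition_id, event.body_as_str())

consumer = EventHubConsumerClient.from_connection_string(
    conn_str="<namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="telemetry",
)

with consumer:
    # Blocks and keeps receiving until interrupted; "-1" = start of the stream.
    consumer.receive(on_event=on_event, starting_position="-1")
```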

Use Cases

Event Hubs is the component to use for real-time and/or streaming data use cases:

  • Real-time reporting
  • Capture streaming data into files for further processing and analysis – e.g. capturing data from micro-service applications or a mobile app
  • Make data available to stream-processing and analytics services – e.g. when scoring an AI algorithm
  • Telemetry streaming & processing
  • Application logging

Event Hubs is also available as a feature for Azure Stack Hub, which allows you to realize hybrid cloud scenarios. Streaming and event-based solutions are supported, for both on-premises and Azure cloud processing.

Conclusion

You can learn more about Event Hubs in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 66: Azure Event Grid

It's Day 66 of my 100 Days of Cloud Journey, and today I'm looking at Azure Event Grid, which I came across during my AZ-204 studies.

Azure Event Grid is a serverless offering that allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to send the event to.

Image Credit: Microsoft

Concepts

Azure Event Grid uses the following concepts which you will need to understand:

  • Events: An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information such as the source of the event, the time the event took place, and a unique identifier. Examples of common events would be a file being uploaded, a virtual machine being deleted, a SKU being added, and so on.
  • Publishers: A publisher is the user or organization that decides to send events to Event Grid.
  • Event Sources: An event source is where the event happens. Each event source is related to one or more event types. For example, Azure Storage is the event source for blob created events. The following Azure services support sending events to Event Grid:
    • Azure API Management
    • Azure App Configuration
    • Azure App Service
    • Azure Blob Storage
    • Azure Cache for Redis
    • Azure Communication Services
    • Azure Container Registry
    • Azure Event Hubs
    • Azure FarmBeats
    • Azure IoT Hub
    • Azure Key Vault
    • Azure Kubernetes Service (preview)
    • Azure Machine Learning
    • Azure Maps
    • Azure Media Services
    • Azure Policy
    • Azure resource groups
    • Azure Service Bus
    • Azure SignalR
    • Azure subscriptions
  • Topics: An Event Grid topic provides an endpoint where the source sends events. A topic is used for a collection of related events.
  • Event Subscriptions: A subscription tells Event Grid which events on a topic you’re interested in receiving. When creating the subscription, you provide an endpoint for handling the event.
  • Event Handlers: From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. The supported event handlers are:
    • Webhooks. Azure Automation runbooks and Logic Apps are supported via webhooks.
    • Azure functions
    • Event hubs
    • Service Bus queues and topics
    • Relay hybrid connections
    • Storage queues

Capabilities

The key features of Azure Event Grid are:

  • Simplicity – You can direct events from any of the sources listed above to any event handler or endpoint.
  • Advanced filtering – Filter on event type to ensure event handlers only receive relevant events.
  • Fan-out – Subscribe several endpoints to the same event to send copies of the event to as many places as needed.
  • Reliability – 24-hour retry ensures that events are delivered.
  • Pay-per-event – Azure Event Grid uses a pay-per-event pricing model, so you only pay for what you use. The first 100,000 operations per month are free.
  • High throughput – Build high-volume workloads on Event Grid and scale up/down or in/out as required.
  • Built-in Events – Get up and running quickly with resource-defined built-in events.
  • Custom Events – Use Event Grid to route, filter, and reliably deliver custom events in your app.

Use Cases

Azure Event Grid provides several features that vastly improve serverless, ops automation, and integration work:

  • Serverless application architectures
Image Credit: Microsoft

Event Grid connects data sources and event handlers. For example, use Event Grid to trigger a serverless function that analyzes images when added to a blob storage container.

  • Ops Automation
Image Credit: Microsoft

Event Grid allows you to speed automation and simplify policy enforcement. For example, use Event Grid to notify Azure Automation when a virtual machine or database in Azure SQL is created. Use the events to automatically check that service configurations are compliant, put metadata into operations tools, tag virtual machines, or file work items.

  • Application integration
Image Credit: Microsoft

Event Grid connects your app with other services. For example, create a custom topic to send your app’s event data to Event Grid, and take advantage of its reliable delivery, advanced routing, and direct integration with Azure. Or, you can use Event Grid with Logic Apps to process data anywhere, without writing code.
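As a minimal sketch of that custom-topic scenario (assuming the azure-eventgrid Python SDK; the topic endpoint, access key and event type are placeholders):

```python
# Minimal sketch: publish a custom event to an Event Grid custom topic.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridEvent, EventGridPublisherClient

client = EventGridPublisherClient(
    "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events",  # placeholder topic endpoint
    AzureKeyCredential("<topic-access-key>"),
)

event = EventGridEvent(
    subject="orders/1234",
    event_type="Contoso.Orders.OrderCreated",   # hypothetical custom event type
    data={"orderId": 1234, "total": 42.50},
    data_version="1.0",
)

client.send(event)  # every subscription on the topic receives a copy
```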

Conclusion

You can learn more about Event Grid in the Microsoft documentation here. Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 65: AZ-204 Exam Day!

It's Day 65 of my 100 Days of Cloud Journey, and today I sat Exam AZ-204: Developing Solutions for Microsoft Azure.

This was using up my Exam Voucher that I earned for completing the Cloud Skill Challenge from Microsoft Ignite Fall 2021.

The reason I chose this exam was because it was a bit of a challenge – coming from a pure Infrastructure background, the idea of taking a Developer exam was something I never would have thought about a year ago. I’d heard of some of the concepts that were covered in the objectives, and the other exam options were either on the Cloud Infra or Security side, so I decided to go outside of my comfort zone and take on the challenge.

The skills measured list reads like this:

  • Develop Azure compute solutions (25-30%)
  • Develop for Azure storage (15-20%)
  • Implement Azure security (20-25%)
  • Monitor, troubleshoot, and optimize Azure solutions (15-20%)
  • Connect to and consume Azure services and third-party services (15-20%)

Looks like any other Infra-based exam we’ve seen before, right? Well, think again ….

Giving an NDA-friendly review, the exam goes deep into topics such as serverless compute, application and database development, authentication and monitoring. All of this might seem very infra-focused, but the way I look at it is this – if you've done the infra side of things with exams like AZ-104, where you learn how the infrastructure works and how to put it together, AZ-204 approaches the same technologies as a far deeper dive, digging into the concepts and showing how we can manage them programmatically.

And I’m delighted to say I made it out on the other side and passed the exam!

So does this make me an expert in Azure development or turn me into a developer? Well, no, not overnight, but it's a big step in that direction. The thing with this exam is that even though I passed, I feel there is LOADS more content, demos and lab scenarios that I can look into – the exam itself only scratches the surface, but now that I've gotten into this and started to go down the rabbit hole, it's given me loads of ideas for how to use the content and technologies that I learned!

For learning paths on this one, I used the following:

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 64: Azure Cosmos DB

It's Day 64 of my 100 Days of Cloud journey, and today I'm looking at Azure Cosmos DB.

In the last post, we looked at Azure SQL and the different options we have available for hosting SQL Databases in Azure. SQL is an example of a Relational Database Management System (RDBMS), which follows a traditional model of storing data using 2-dimensional tables where data is stored in columns and rows in a pre-defined schema.

The opposite to this is non-relational databases, which use a storage model that is optimized for the specific requirements of the type of data being stored. Non-relational databases can have the following structures:

  • Document Data Stores, which stores data in JSON, XML, YAML or plain text format.
  • Columnar Data Stores, which stores data in column families which are logically related and manipulated as a unit.
  • Key/value Data Stores, which holds a data value that has a corresponding key.
  • Graph Databases, which are made up of nodes and edges to host data such as Organization Charts and Fraud detection.

All of the above options can be achieved by using Azure Cosmos DB.

Overview

Let's start with an overview – Azure Cosmos DB is a fully managed NoSQL database that provides highly available, globally distributed access to data with very low latency.

If we log on to the Azure Portal and go to create an Azure Cosmos DB, we are given the options below:

The different APIs available are:

  • Core (SQL) API: Provides the flexibility of a NoSQL document store combined with the power of SQL for querying.
  • MongoDB API: Supports the MongoDB wire protocol so that existing MongoDB clients continue to work with Azure Cosmos DB as if they are running against an actual MongoDB database.
  • Cassandra API: Supports the Cassandra wire protocol so that existing Apache drivers compliant with CQLv4 continue to work with Azure Cosmos DB as if they are running against an actual Cassandra database.
  • Gremlin API: Supports graph data with Apache TinkerPop (a graph computing framework) and the Gremlin query language.
  • Table API: Provides premium capabilities for applications written for Azure Table storage.

The key to picking an API is to select the one that best meets the needs for your database, but be warned: if you pick an API you cannot change it afterwards. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.

Once your API is selected, you get into the usual screens for creating resources in Azure:

Pricing

Now this is where we need to talk about pricing – in SQL, we are familiar with licensing using cores. This works the same way in Azure with the concept of vCores, but we also have the concept of Database Transaction Units (DTUs), which are a bundled measure of compute, storage, and I/O resources.

In Azure Cosmos DB, usage is priced based on Request Units (RUs). You can think of RUs per second as the currency for throughput. As shown in the screenshot above, there are two pricing models available:

  • Provisioned throughput mode: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. You are billed on an hourly basis for the number of RUs per second you have provisioned.
  • Serverless mode: In this mode, you don’t have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations.

We also have a 3rd option:

  • Autoscale mode: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage, without impacting the availability, latency, throughput, or performance of the workload.

Each request to Azure Cosmos DB returns the RUs it consumed, so you can decide whether to stop your requests or increase the RU limit in the Azure portal.
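To make that concrete, here's a minimal sketch using the azure-cosmos Python SDK against the Core (SQL) API; the account URL, key, database/container names and the 400 RU/s figure are all illustrative:

```python
# Minimal sketch: create a container with provisioned throughput, write an item, query it.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<account-key>")
database = client.create_database_if_not_exists("appdb")
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,   # provisioned throughput mode: billed per RU/s provisioned
)

container.upsert_item({"id": "1", "customerId": "c-42", "total": 42.50})

for item in container.query_items(
    "SELECT * FROM c WHERE c.customerId = 'c-42'",
    enable_cross_partition_query=True,
):
    print(item["id"], item["total"])
```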

Consistency Levels

The other important thing to note about Cosmos DB is Consistency Levels. Because Cosmos DB is a globally distributed database, you can set the level of consistency for replication across your global data centers. There are 5 levels to choose from:

  • Strong consistency is the strictest type of consistency available in Cosmos DB. The data is synchronously replicated to all the replicas in real time. This mode of consistency is useful for applications that cannot tolerate any data loss in case of downtime.
  • In the Bounded Staleness level, data is replicated asynchronously with a predetermined staleness window, defined either by a number of writes or a period of time. Read queries may lag behind by either a certain number of writes or by a pre-defined time period; however, the reads are guaranteed to honor the sequence of the data.
  • Session consistency is the default consistency that you get when configuring a Cosmos DB account. This level of consistency honors the client session. It ensures strong consistency for an application session with the same session token.
  • The Consistent Prefix model is similar to Bounded Staleness except for the operational or time-lag guarantee. The replicas guarantee the consistency and order of the writes; however, the data is not always current. This model ensures that the user never sees an out-of-order write.
  • Eventual consistency is the weakest consistency level of all. In this model there is no guarantee on the order of the data and no guarantee of how long the data can take to replicate. As the name suggests, the reads are consistent – eventually.

Use Cases

Any web, mobile, gaming, and IoT application that needs to handle massive amounts of data, reads, and writes at a global scale with near-real response times for a variety of data will benefit from Cosmos DB’s guaranteed high availability, high throughput, low latency, and tunable consistency. The Microsoft Docs article here describes the common use cases for Azure Cosmos DB.

Conclusion

And that's a look at the different options available in Azure Cosmos DB. Hope you enjoyed this post, until next time!