Exploring integration possibilities with Azure Monitor
As mentioned in the introduction, Azure Monitor provides a complete observability platform both inside and outside Azure. However, it may not be enough for every scenario. Every organization has unique monitoring requirements based on its infrastructure, applications, and business objectives. For instance, if your organization has invested in Splunk for security monitoring or any other third-party platform, integrating it with Azure Monitor allows you to leverage its advanced security features and ensure compliance with regulatory standards.
Furthermore, in a multi-cloud or hybrid cloud environment, achieving comprehensive visibility across all platforms can be challenging. Integrating Azure Monitor with third-party tools helps bridge this gap, providing a unified view of your entire infrastructure. This is particularly useful for organizations that use multiple cloud providers or manage on-premises and cloud resources. By centralizing data from different sources, you can gain a global view of your operations and make more informed decisions.
Ultimately, each third-party tool brings unique capabilities that complement Azure Monitor as your monitoring solution. To fully leverage those unique capabilities, it is essential to understand the various methods available for exporting data from Azure Monitor. Exporting this data ensures that the telemetry collected within Azure can be analyzed, visualized, and utilized within your preferred platforms, whether it’s through direct integration using APIs, setting up data export to external systems, or utilizing intermediary services. In the following section, we will explore these options in detail, guiding you through the steps required to export and integrate your monitoring data. We’ll explore three primary methods to export data from Azure Monitor – using the Azure Monitor REST API directly or through PowerShell/CLI, exporting logs to Azure Storage, and leveraging Event Hubs for real-time streaming.
Using Azure Monitor REST API
In Chapter 3, we explored the ingestion capabilities of the Azure Monitor REST API, which allows for the seamless collection of custom telemetry data from various sources, enabling comprehensive monitoring and observability. This API not only supports the ingestion of metrics and logs into Azure Monitor but can also be used to extract that information.
By leveraging the Azure Monitor REST API, users can programmatically access and retrieve detailed monitoring data, facilitating integration with third-party tools and custom applications. This dual capability ensures that organizations can both centralize their monitoring data within Azure and efficiently export it to enhance their observability strategies, using external solutions.
Let’s walk through examples of retrieving data for each type of telemetry that Azure Monitor supports.
Extracting Azure metrics
The Metrics REST API supports not only the retrieval of metric values but also of metric definitions and metric dimension values. Information can be retrieved from a single instance of a specific resource or from multiple resources.
The first step to extract metrics information from a resource on Azure using the Azure Monitor REST API is authentication. It ensures that only authorized users can retrieve monitoring data, thus maintaining the security and integrity of your Azure environment. The primary method of authentication is via OAuth 2.0, using Microsoft Entra ID to obtain an access token. This token must be included in the header of your API requests to authenticate and authorize access.
In a production environment, you would probably use a service principal, as described in Chapter 3 when ingesting custom data into Azure Monitor. However, here we will show a simpler scenario for demonstration purposes, using your own user’s access token. We will use the Azure CLI to obtain it, running the following command:
az account get-access-token
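If you plan to call the API with curl, it is convenient to capture just the token value in a shell variable first; a small sketch, assuming a bash-like shell (the TOKEN variable name is our own choice):

TOKEN=$(az account get-access-token --query accessToken --output tsv)

You can then pass it to subsequent requests with --header "Authorization: Bearer $TOKEN".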
The second step is to identify the resource and the metric we are interested in. The endpoint for retrieving metrics is structured as follows:
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/providers/microsoft.insights/metrics?api-version=2023-10-01&metricnames={metricNames}&timespan={timespan}
You need to replace placeholders with your subscription ID, resource group name, resource provider namespace, resource type, resource name, desired metric names, and the time span for the metrics.
As an example, let’s obtain the CPU usage of the virtual machine we used in Chapter 2 for the last 24 hours. We will use curl from the command line:
curl --location --request GET 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter2-vm/providers/microsoft.insights/metrics?api-version=2023-10-01&metricnames=Percentage%20CPU&timespan=2024-05-26T00:00:00Z/2024-05-27T00:00:00Z' --header 'Content-Type: application/json' --header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJ…'
The output will contain a few details about the request you have made, together with a time series of the values available for the specific metric and period.
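For orientation, an abridged response has roughly the following shape (the values shown here are illustrative, not real output):

{
  "cost": 0,
  "timespan": "2024-05-26T00:00:00Z/2024-05-27T00:00:00Z",
  "interval": "PT1M",
  "value": [
    {
      "name": { "value": "Percentage CPU", "localizedValue": "Percentage CPU" },
      "unit": "Percent",
      "timeseries": [
        {
          "data": [
            { "timeStamp": "2024-05-26T00:00:00Z", "average": 3.42 },
            { "timeStamp": "2024-05-26T00:01:00Z", "average": 3.57 }
          ]
        }
      ]
    }
  ]
}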
A detailed walk-through of the whole set of options for this type of request is available in the Microsoft Azure Monitor documentation, linked to in the Further reading section [1]. It contains more examples of retrieving metric definitions and dimension values, as well as examples of querying metrics for multiple resources.
However, when dealing with large-scale environments or the need to extract numerous metrics over extended periods, you might face challenges with rate limits. Azure Monitor imposes rate limits on the number of API calls you can make within a specific time frame. Exceeding these limits can result in throttling, where additional requests are temporarily blocked.
To address these challenges, Azure Monitor provides the getBatch API, which allows you to retrieve metrics for multiple resources in a single request. Batching multiple metric queries into one API call reduces the number of requests, improving efficiency and minimizing performance overhead.
The authentication process is similar to the previous step; however, both the URL and the way to request the metrics change. Instead of using a GET HTTP request, the getBatch API uses a POST HTTP request, but both share a common set of parameters and response formats.
As an example, let’s again obtain the CPU usage for the last 24 hours, this time for the Chapter 2 virtual machine together with two other VMs.
The API endpoint will have the following structure:
https://{azureRegion}.metrics.monitor.azure.com/subscriptions/{subscriptionId}/metrics:getBatch?starttime=2024-05-26T00:00:00Z&endtime=2024-05-27T00:00:00Z&interval=PT1H&metricNamespace=microsoft.compute%2Fvirtualmachines&metricnames=Percentage%20CPU&api-version=2023-10-01
The body of the POST request will contain a JSON object with the IDs of all the resources we are interested in:

{
  "resourceids": [
    "/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter2-vm",
    "/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter10a-vm",
    "/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter10b-vm"
  ]
}
We will use curl from the command line to submit our request:

curl --location --data '{"resourceids": ["/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter2-vm", "/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter10a-vm", "/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/Microsoft.Compute/virtualMachines/chapter10b-vm"]}' --request POST 'https://{azureRegion}.metrics.monitor.azure.com/subscriptions/{subscriptionId}/metrics:getBatch?starttime=2024-05-26T00:00:00Z&endtime=2024-05-27T00:00:00Z&interval=PT1H&metricNamespace=microsoft.compute%2Fvirtualmachines&metricnames=Percentage%20CPU&api-version=2023-10-01' --header 'Content-Type: application/json' --header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs […]'
In a single request, we get all the information across the three virtual machines. Let’s move on now to understand how to use the Azure Monitor REST API to extract log details.
Extracting Azure logs
Extracting Azure logs with the Azure Monitor REST API requires the same kind of authentication as shown in the previous section. However, in this case, we need to use application credentials instead of our own token: the API endpoint sits outside the default management domain, and the application must be authorized to read the data before the call can be executed.
After you have created your application, as covered in Chapter 3, you will need to go to the API permissions menu inside the properties of your application and look for Log Analytics API. After that, you should click on Delegated permissions and assign the Data.Read permission by clicking the checkbox next to it, as shown in the following screenshot.
Figure 9.1 – Configuration of the API permissions
Once the permissions are assigned, you should be able to get an access token through the Azure CLI after logging in with the application credentials instead of yours.
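For reference, this can be done with the Azure CLI as follows; a sketch assuming a service principal created as in Chapter 3, where the resource URI is the base of the Log Analytics API endpoint used next:

az login --service-principal --username {appId} --password {clientSecret} --tenant {tenantId}
az account get-access-token --resource https://api.loganalytics.azure.com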
The endpoint to query log data from a Log Analytics workspace is structured as follows:
https://api.loganalytics.azure.com/{api-version}/workspaces/{workspaceId}/query?[parameters]
You need to specify several key parameters in your API request. The api-version parameter identifies the version of the API you are using and should be set to v1 to ensure compatibility with the latest features and updates. Additionally, you must include your workspaceId, which uniquely identifies the Log Analytics workspace where the logs are stored. Finally, the [parameters] portion of the URL carries the specific data required for the query, such as the time range and the types of records you wish to retrieve.
When using this API to query data, you can use both GET and POST HTTP methods, depending on how you want to structure your request. For a GET request, the parameters are included directly in the query string. For example, let’s obtain the average latency in milliseconds for our availability test, configured for the web app deployed in Chapter 8:
curl --location --request GET 'https://api.loganalytics.azure.com/v1/workspaces/{workspaceId}/query?query=AppAvailabilityResults%20%7C%20summarize%20avg%28DurationMs%29%20by%20Location' --header 'Content-Type: application/json' --header 'Authorization: Bearer […]'
For a POST request, the body of the request must be valid JSON and must include the Content-Type: application/json header. The parameters are included as properties in the JSON body. If the timespan parameter is specified in both the query string and the JSON body, the timespan used will be the intersection of the two values. For example, to obtain the same information as with the GET call, we would submit the following request:

curl --location --request POST --data '{"query": "AppAvailabilityResults | summarize avg(DurationMs) by Location"}' 'https://api.loganalytics.azure.com/v1/workspaces/{workspaceId}/query' --header 'Content-Type: application/json' --header 'Authorization: Bearer […]'
In this case, the JSON body contains the query parameter. This approach is useful for more complex queries or when you need to include additional parameters in a structured format.
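For instance, to bound the same query to the last 24 hours without touching the KQL itself, you could add a timespan property (an ISO 8601 duration) to the JSON body; a sketch of such a body:

{ "query": "AppAvailabilityResults | summarize avg(DurationMs) by Location", "timespan": "PT24H" }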
After covering this scenario, let’s move on to the last one – using the Azure Monitor REST API to extract information from the activity logs.
Exporting activity logs
Before making requests to the Azure Monitor REST API to extract activity logs, you must authenticate using Microsoft Entra ID, as shown at the beginning of the Extracting Azure metrics section. In this case, you can use your own generated token or the one created for your application. The endpoint to query activity log data is structured as follows:
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter={filter}&$select={select}
In this URL, you replace the placeholders with your subscription ID, filter criteria, and selected properties. The $filter parameter is essential for narrowing down the set of returned logs. You can filter by time range, resource group, specific resource, or other criteria. The $select parameter allows you to specify which fields to include in the response, such as event name, operation name, status, event timestamp, correlation ID, and level, to reduce the payload size and focus on relevant data.
For example, to get all the logs after a specific date, you should use the following query with the $filter parameter:
curl --location --request GET 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp%20ge%202024-06-16T04%3A36%3A37.6407898Z' --header 'Content-Type: application/json' --header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng […]'
If you want to restrict the output to a specific resource group, you can add an extra condition to the filter:
curl --location --request GET 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp%20ge%202024-06-16T04%3A36%3A37.6407898Z%20and%20resourceGroupName%20eq%20%27packt-book%27' --header 'Content-Type: application/json' --header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng […]'
If the information returned is more than you need, you can use the $select parameter to return only the specific fields relevant to you, as shown in the following example, where only four properties are returned from the whole available set:
curl --location --request GET 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp%20ge%202024-06-16T04%3A36%3A37.6407898Z%20and%20resourceGroupName%20eq%20%27packt-book%27&$select=eventName,operationName,status,eventTimestamp' --header 'Content-Type: application/json' --header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng […]'
By using the Azure Monitor REST API and the metrics getBatch API, as shown in the previous examples, you can efficiently extract metrics, logs, and activity log information from your Azure resources, ensuring that you have the necessary data to monitor your cloud environment outside Azure Monitor.
While this approach provides a robust and flexible method to access detailed metrics and monitoring data, integrating it into your scripts can sometimes be cumbersome. The need to manage authentication tokens, construct HTTP requests, and handle the responses programmatically requires significant effort, especially when dealing with large-scale environments or frequent data extraction tasks. To simplify this process, Azure offers alternative methods using PowerShell and the Azure CLI.
Using Azure PowerShell and CLI for log extraction
Azure PowerShell and Azure CLI tools provide streamlined commands and built-in functionality to extract logs and metrics, making it easier to incorporate monitoring data into your automation scripts and daily workflows. By leveraging PowerShell or CLI, you can efficiently retrieve the necessary information without the complexity associated with REST API calls, enabling quicker and more effective integration with your existing systems.
Using Azure PowerShell
Azure PowerShell provides the Get-AzLog cmdlet (check the Further reading section for more details [2]), which enables you to query and retrieve activity logs with ease. This cmdlet simplifies the process of extracting logs by encapsulating the necessary API calls into a single, user-friendly command.
Use Connect-AzAccount to authenticate and connect to your Azure subscription, and after that, use Get-AzLog to fetch logs for a specific resource group or time range. The following example retrieves the latest 50 records from the previous week:

Connect-AzAccount
Get-AzLog -StartTime (Get-Date).AddDays(-7) -EndTime (Get-Date) -ResourceGroupName {YourResourceGroup} -MaxRecord 50
The response will provide a detailed activity log entry for each event, with information about the HTTP request, its properties, the resource affected, and the status of the request. The Get-AzLog command provides a wide set of options to customize your request, filtering by resource provider, resource group, or resource ID.
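Because the cmdlet returns a collection of objects, its output is easy to hand over to other tools; for instance, the following sketch saves the last week of activity log entries as JSON for later ingestion by a third-party platform (the output file name is arbitrary):

Get-AzLog -StartTime (Get-Date).AddDays(-7) -ResourceGroupName {YourResourceGroup} | ConvertTo-Json -Depth 5 | Out-File activity-log.json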
If you need to extract details about a metric instead of the activity logs, PowerShell provides the Get-AzMetric command [3]. The following example retrieves the Percentage CPU metric used previously:

Get-AzMetric -ResourceId "/subscriptions/{subscriptionId}/resourceGroups/packt-book/providers/microsoft.compute/virtualmachines/chapter2-vm" -TimeGrain 00:01:00 -MetricName "Percentage CPU"
Similarly, if you want to retrieve information from a log inside your Azure Monitor Log Analytics workspace, PowerShell provides the Invoke-AzOperationalInsightsQuery command to run queries against your workspace. Its name can be confusing but, as we discussed in Chapter 1, Operational Insights was the old name of the service, and the PowerShell commands have kept it.
The following example retrieves the latest entry from the AppAvailabilityResults table, which stores all the availability information from the web application deployed in Chapter 8:
Invoke-AzOperationalInsightsQuery -WorkspaceId {LogAnalyticsWorkSpaceId} -Query "AppAvailabilityResults | take 1"
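The returned object exposes the rows through its Results property, so you can capture the response and reuse it in later steps; a short sketch:

$response = Invoke-AzOperationalInsightsQuery -WorkspaceId {LogAnalyticsWorkSpaceId} -Query "AppAvailabilityResults | take 1"
$response.Results | Format-Table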
Let’s now review what the Azure CLI offers to extract information from Azure Monitor.
Using Azure CLI
Azure CLI provides the az monitor activity-log list command, which offers similar functionality to retrieve activity logs. The CLI is a cross-platform tool that can be used on Windows, macOS, and Linux, making it a versatile option for automation.
Use az login to authenticate and connect to your Azure account and, after that, az monitor activity-log list to fetch logs for a specific resource group or time range. The following example retrieves the entries from the last seven days for the specified resource group:

az login
az monitor activity-log list --resource-group {YourResourceGroup} --offset 7d
If you need to extract details about a metric instead of the activity logs, the Azure CLI provides the az monitor metrics list command. The following example retrieves the Percentage CPU metric used previously:
az monitor metrics list --resource {resourceID} --metric "Percentage CPU"
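By default, the command returns approximately the last hour of data. You can narrow the time range and aggregation with additional flags and render a compact view; a sketch reusing the time window from the earlier examples:

az monitor metrics list --resource {resourceID} --metric "Percentage CPU" --start-time 2024-05-26T00:00:00Z --end-time 2024-05-27T00:00:00Z --interval PT1H --aggregation Average --output table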
Similarly, if you want to retrieve information from a log inside your Azure Monitor Log Analytics workspace, the Azure CLI provides the az monitor log-analytics query command to run queries against your workspace. The following example retrieves the latest entry from the AppAvailabilityResults table, as shown in the PowerShell example:
az monitor log-analytics query -w {LogAnalyticsWorkSpaceId} --analytics-query "AppAvailabilityResults | take 1"
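You can also bound the query in time without changing the KQL by adding the --timespan flag with an ISO 8601 value; for example, to look only at the last day while reusing the earlier aggregation query:

az monitor log-analytics query -w {LogAnalyticsWorkSpaceId} --analytics-query "AppAvailabilityResults | summarize avg(DurationMs) by Location" --timespan P1D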
By using Azure PowerShell and CLI, you can streamline the process of extracting logs and metrics, integrating them more easily into your automation scripts and monitoring workflows. These tools eliminate the need for complex API calls, making log extraction more straightforward and accessible for users of all skill levels.
Let’s continue by exploring other alternatives to export Azure monitoring information without using the Azure Monitor APIs or their PowerShell and Azure CLI wrappers.
Exporting logs and metrics to Azure Storage or Azure Event Hubs
Exporting logs to Azure Storage is a straightforward method to archive monitoring data and make it accessible to third-party tools. This method is particularly useful for long-term retention and batch-processing scenarios. On the other hand, Azure Event Hubs provides a powerful platform for real-time data streaming, making it ideal for scenarios that require immediate processing and analysis of monitoring data.
Both export options are provided through the Diagnostic settings menu available for each resource, as shown in the following screenshot. It is possible to select not only a Log Analytics workspace as a destination for our logs but also to stream them to an Event Hub, or archive them to a storage account.
Figure 9.2 – Destination options for logs and metrics
Exporting the information to a storage account allows us to store large volumes of log data cost-effectively, provides durability through a reliable storage product with redundancy options, and improves its accessibility by third-party tools for ingestion and analysis. For example, an organization using Elastic for log analysis can periodically ingest log data from Azure Storage, enabling advanced search and analytics on the archived logs.
Streaming information to Event Hubs allows us to configure a low-latency pipeline, with near real-time data ingestion and processing that is scalable. This pipeline can handle large volumes of data with high throughput. For example, an organization using Splunk for security monitoring can stream log data to Event Hubs and set up Splunk to consume and process the data in real time, enabling immediate detection of and responses to security incidents.
In summary, to enable logs and metrics to be exported to Azure Storage, you need to do the following (a CLI sketch follows the list):
- Configure diagnostic settings: In the Azure portal, navigate to your resource and configure the diagnostic settings to specify which logs should be sent to Azure Storage.
- Select a storage account: Choose an existing storage account or create a new one to store your logs.
- Retain data: Set retention policies based on your organization’s requirements.
- Access logs: Third-party tools can access the logs by reading from the storage account, using Azure Storage SDKs or the REST API.
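The first two steps can also be scripted instead of configured in the portal; a sketch using az monitor diagnostic-settings create, where the setting name, resource ID, and storage account ID are placeholders, and the allLogs category group is available for many, though not all, resource types:

az monitor diagnostic-settings create --name {settingName} --resource {resourceID} --storage-account {storageAccountID} --logs '[{"categoryGroup":"allLogs","enabled":true}]' --metrics '[{"category":"AllMetrics","enabled":true}]'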
Alternatively, to enable the forwarding of logs and metrics to Event Hubs, you would need to do the following (again, a CLI sketch follows the list):
- Configure diagnostic settings: In the Azure portal, set up diagnostic settings for your resources to send logs and metrics to Event Hubs.
- Create an event hub: Ensure that you have an Event Hub namespace and an event hub to receive the data.
- Stream data: Configure the diagnostic settings to route data to the event hub.
- Use consumer applications: Use consumer applications to read and process the data from Event Hubs. This can be custom applications, Azure Stream Analytics jobs, or third-party platforms that support Event Hubs integration.
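The equivalent Event Hubs routing can be scripted in the same way; a sketch assuming an existing namespace, where {authorizationRuleID} is the full resource ID of a namespace authorization rule:

az monitor diagnostic-settings create --name {settingName} --resource {resourceID} --event-hub {eventHubName} --event-hub-rule {authorizationRuleID} --logs '[{"categoryGroup":"allLogs","enabled":true}]'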
In both cases, logs and metrics are exported before they are ingested into your Log Analytics workspace. In addition to this pre-ingestion export, Azure Monitor also offers a feature within Log Analytics to export data directly from selected tables after it arrives in the workspace. This capability allows you to route your monitoring data to Azure Storage for long-term retention, or to Azure Event Hubs for real-time streaming and integration with third-party applications.
The data export feature in Azure Log Analytics continuously exports data from the selected tables in your Log Analytics workspace to Azure Storage or Azure Event Hubs as new records are ingested. This is useful for archiving data, performing additional analysis, or integrating with other systems that consume monitoring data. The export uses Microsoft Azure’s internal backbone, so the information never leaves the Microsoft network.
To set up export to Azure Storage, navigate to your Log Analytics workspace in the Azure portal, and under the Settings section, select Data export. Click on Create export rule, as shown in the following screenshot.
Figure 9.3 – Creating a new export rule for storage
A new configuration wizard will appear. You will need to provide a name for your new export rule and click Next. Then, a list of all the tables inside your Log Analytics Workspace will appear, allowing you to filter the ones to be exported, as shown in the following screenshot.
Figure 9.4 – The available tables as a source for data exporting
Select the ones relevant to you, and after clicking Next, the wizard will show you the destination options. Select the destination as Storage account or Event Hub and provide all the required details. If you choose Event Hub, ensure that it has the appropriate throughput units to handle the expected data volume.
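If you prefer to script the rule rather than use the wizard, the Azure CLI provides an equivalent command; a sketch assuming a hypothetical rule name and a storage account destination:

az monitor log-analytics workspace data-export create --resource-group {resourceGroupName} --workspace-name {workspaceName} --name {exportRuleName} --tables AppAvailabilityResults --destination {storageAccountID}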
Data export pricing model
Azure Log Analytics data export is not free. An extra export fee on top of the baseline price for Azure Storage or Event Hubs will be added based on the number of GB exported. Exported data is measured by the number of bytes in the exported JSON formatted data. As an equivalence, 1 GB equals 10^9 bytes.
As discussed in this section, Azure Monitor offers exporting capabilities that allow seamless integration with third-party tools, enhancing the monitoring, logging, and alerting functionalities beyond Azure’s native capabilities. Additionally, many external solutions have developed native integrations in collaboration with Microsoft, providing even more streamlined and effective ways to unify monitoring data. In the next section, we will explore how these external solutions can be integrated with the monitoring data available inside Azure.