title | keywords | author | ms.author | ms.date | ms.topic | ms.prod | ms.technology | ms.devlang | ms.service |
---|---|---|---|---|---|---|---|---|---|
Azure Monitor Query client library for Python | Azure, python, SDK, API, azure-monitor-query, monitor | maggiepint | magpint | 11/10/2021 | reference | azure | azure | python | monitor |
The Azure Monitor Query client library is used to execute read-only queries against Azure Monitor's two data platforms:
- Logs - Collects and organizes log and performance data from monitored resources. Data from different sources, such as platform logs from Azure services, log and performance data from virtual machine agents, and usage and performance data from apps, can be consolidated into a single Azure Log Analytics workspace. The various data types can be analyzed together using the Kusto Query Language.
- Metrics - Collects numeric data from monitored resources into a time series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time. Metrics are lightweight and capable of supporting near real-time scenarios, making them particularly useful for alerting and fast detection of issues.
Azure SDK Python packages' support for Python 2.7 ends on 01 January 2022. For more information and questions, refer to Azure/azure-sdk-for-python#20691.
- Python 2.7, or 3.6 or later
- An Azure subscription
- To query Logs, you need an Azure Log Analytics workspace.
- To query Metrics, you need an Azure resource of any kind (Storage Account, Key Vault, Cosmos DB, etc.).
Install the Azure Monitor Query client library for Python with pip:
pip install azure-monitor-query
An authenticated client is required to query Logs or Metrics. The library includes both synchronous and asynchronous forms of the clients. To authenticate, create an instance of a token credential. Use that instance when creating a LogsQueryClient or MetricsQueryClient. The following examples use DefaultAzureCredential from the azure-identity package.
Consider the following example, which creates synchronous clients for both Logs and Metrics querying:
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, MetricsQueryClient
credential = DefaultAzureCredential()
logs_client = LogsQueryClient(credential)
metrics_client = MetricsQueryClient(credential)
The asynchronous forms of the query client APIs are found in the .aio-suffixed namespace. For example:
from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient, MetricsQueryClient
credential = DefaultAzureCredential()
async_logs_client = LogsQueryClient(credential)
async_metrics_client = MetricsQueryClient(credential)
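As a minimal sketch of using the asynchronous client (assuming the LOG_WORKSPACE_ID environment variable is set, as in the synchronous examples below), the query methods are awaited and the clients can be used as asynchronous context managers:

import asyncio
import os
from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient

async def run():
    credential = DefaultAzureCredential()
    # Use the async client as an async context manager so its transport is closed cleanly.
    async with LogsQueryClient(credential) as client:
        response = await client.query_workspace(
            os.environ['LOG_WORKSPACE_ID'],
            "AppRequests | take 5",
            timespan=None
        )
        print(response.tables)
    await credential.close()

asyncio.run(run())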
For examples of Logs and Metrics queries, see the Examples section.
The Log Analytics service applies throttling when the request rate is too high. Limits, such as the maximum number of rows returned, are also applied to Kusto queries. For more information, see Rate and query limits.
If you're executing a batch logs query, a throttled request returns a LogsQueryError object. That object's code value is ThrottledError.
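For example, here's a minimal sketch that surfaces throttled items in a batch response. It assumes client is a LogsQueryClient, requests is a list of LogsBatchQuery objects (see the batch query example below), and that the code value surfaces as the string "ThrottledError":

from azure.monitor.query import LogsQueryStatus

results = client.query_batch(requests)
for res in results:
    # A throttled request comes back as a LogsQueryError whose code is ThrottledError.
    if res.status == LogsQueryStatus.FAILURE and res.code == "ThrottledError":
        print("Throttled: {}".format(res.message))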
Each set of metric values is a time series with the following characteristics:
- The time the value was collected
- The resource associated with the value
- A namespace that acts like a category for the metric
- A metric name
- The value itself
- Some metrics may have multiple dimensions as described in multi-dimensional metrics. Custom metrics can have up to 10 dimensions.
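As a hedged sketch of how dimensions surface in a query: the filter keyword argument of query_resource can split a metric by a dimension, and each resulting time series carries its dimension values in metadata_values. The MatchedEventCount metric and the EventSubscriptionName dimension below are illustrative assumptions; substitute a metric and dimension that your resource actually emits.

import os
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)

# METRICS_RESOURCE_URI, the metric name, and the dimension name are assumptions for illustration.
response = client.query_resource(
    os.environ['METRICS_RESOURCE_URI'],
    metric_names=["MatchedEventCount"],
    timespan=timedelta(days=1),
    filter="EventSubscriptionName eq '*'"  # return one time series per dimension value
)
for metric in response.metrics:
    for time_series_element in metric.timeseries:
        # metadata_values holds the dimension values that identify this series
        print(time_series_element.metadata_values)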
This example shows how to execute a logs query. To handle the response and view it in tabular form, the pandas library is used. See the samples if you choose not to use pandas.
The timespan parameter specifies the time duration for which to query the data. This value can be one of the following (all three forms are sketched after the example below):
- a timedelta
- a timedelta and a start datetime
- a start datetime/end datetime

For example:
import os
import pandas as pd
from datetime import datetime, timezone
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

query = """AppRequests | take 5"""

start_time = datetime(2021, 7, 2, tzinfo=timezone.utc)
end_time = datetime(2021, 7, 4, tzinfo=timezone.utc)

try:
    response = client.query_workspace(
        workspace_id=os.environ['LOG_WORKSPACE_ID'],
        query=query,
        timespan=(start_time, end_time)
    )
    if response.status == LogsQueryStatus.PARTIAL:
        # The query succeeded only partially; inspect the error and the partial data.
        error = response.partial_error
        data = response.partial_data
        print(error.message)
    elif response.status == LogsQueryStatus.SUCCESS:
        data = response.tables
    for table in data:
        df = pd.DataFrame(data=table.rows, columns=table.columns)
        print(df)
except HttpResponseError as err:
    print("something fatal happened")
    print(err)
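The following sketch reuses the client, query, start_time, and end_time from the example above to show the three accepted timespan forms:

from datetime import timedelta

# 1. A timedelta: queries the most recent period of that length.
client.query_workspace(
    workspace_id=os.environ['LOG_WORKSPACE_ID'],
    query=query,
    timespan=timedelta(days=2)
)
# 2. A start datetime and a timedelta: the duration is measured from start_time.
client.query_workspace(
    workspace_id=os.environ['LOG_WORKSPACE_ID'],
    query=query,
    timespan=(start_time, timedelta(days=2))
)
# 3. A start datetime and an end datetime.
client.query_workspace(
    workspace_id=os.environ['LOG_WORKSPACE_ID'],
    query=query,
    timespan=(start_time, end_time)
)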
The query_workspace API returns either a LogsQueryResult or a LogsQueryPartialResult object. The query_batch API returns a list that may contain LogsQueryResult, LogsQueryPartialResult, and LogsQueryError objects. Here's a hierarchy of the response:
LogsQueryResult
|---statistics
|---visualization
|---tables (list of `LogsTable` objects)
    |---name
    |---rows
    |---columns
    |---column_types

LogsQueryPartialResult
|---statistics
|---visualization
|---partial_error (a `LogsQueryError` object)
    |---code
    |---message
    |---status
|---partial_data (list of `LogsTable` objects)
    |---name
    |---rows
    |---columns
    |---column_types
As a convenience, a LogsQueryResult iterates over its tables directly. For example, to handle a logs query response with tables and display it using pandas:
response = client.query_workspace(...)
for table in response:
    df = pd.DataFrame(table.rows, columns=table.columns)
A full sample can be found here.
In a similar fashion, to handle a batch logs query response:
for result in response:
    if result.status == LogsQueryStatus.SUCCESS:
        for table in result:
            df = pd.DataFrame(table.rows, columns=table.columns)
            print(df)
A full sample can be found here.
The following example demonstrates sending multiple queries at the same time using the batch query API. The queries can either be represented as a list of LogsBatchQuery objects or a dictionary. This example uses the former approach.
import os
from datetime import timedelta, datetime, timezone
import pandas as pd
from azure.monitor.query import LogsQueryClient, LogsBatchQuery, LogsQueryStatus
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
requests = [
    LogsBatchQuery(
        query="AzureActivity | summarize count()",
        timespan=timedelta(hours=1),
        workspace_id=os.environ['LOG_WORKSPACE_ID']
    ),
    LogsBatchQuery(
        query="""bad query""",
        timespan=timedelta(days=1),
        workspace_id=os.environ['LOG_WORKSPACE_ID']
    ),
    LogsBatchQuery(
        query="""let Weight = 92233720368547758;
        range x from 1 to 3 step 1
        | summarize percentilesw(x, Weight * 100, 50)""",
        workspace_id=os.environ['LOG_WORKSPACE_ID'],
        timespan=(datetime(2021, 6, 2, tzinfo=timezone.utc), datetime(2021, 6, 5, tzinfo=timezone.utc)),  # (start, end)
        include_statistics=True
    ),
]
results = client.query_batch(requests)
for res in results:
    if res.status == LogsQueryStatus.FAILURE:
        # this will be a LogsQueryError
        print(res.message)
    elif res.status == LogsQueryStatus.PARTIAL:
        # this will be a LogsQueryPartialResult
        print(res.partial_error.message)
        for table in res.partial_data:
            df = pd.DataFrame(table.rows, columns=table.columns)
            print(df)
    elif res.status == LogsQueryStatus.SUCCESS:
        # this will be a LogsQueryResult
        table = res.tables[0]
        df = pd.DataFrame(table.rows, columns=table.columns)
        print(df)
The following example shows how to set a server timeout in seconds. A gateway timeout is raised if the query takes longer than the specified timeout. The default is 180 seconds, and the timeout can be set to up to 10 minutes (600 seconds).
import os
from azure.monitor.query import LogsQueryClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
response = client.query_workspace(
    os.environ['LOG_WORKSPACE_ID'],
    "range x from 1 to 10000000000 step 1 | count",
    timespan=None,
    server_timeout=1,
)
The same logs query can be executed across multiple Log Analytics workspaces. In addition to the Kusto query, the following parameters are required:
- workspace_id - The first (primary) workspace ID.
- additional_workspaces - A list of workspaces, excluding the workspace provided in the workspace_id parameter. The parameter's list items may consist of the following identifier formats:
  - Qualified workspace names
  - Workspace IDs
  - Azure resource IDs
For example, the following query executes in three workspaces:
client.query_workspace(
    <workspace_id>,
    query,
    additional_workspaces=['<workspace 2>', '<workspace 3>']
)
A full sample can be found here.
The following example gets metrics for an Event Grid subscription. The resource URI is that of an Event Grid topic.
The resource URI must be that of the resource for which metrics are being queried. It's normally of the format /subscriptions/<id>/resourceGroups/<rg-name>/providers/<source>/topics/<resource-name>.
To find the resource URI:
- Navigate to your resource's page in the Azure portal.
- From the Overview blade, select the JSON View link.
- In the resulting JSON, copy the value of the id property.
NOTE: The metrics are returned in the same order as the metric_names sent.
import os
from datetime import timedelta, datetime
from azure.monitor.query import MetricsQueryClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)
start_time = datetime(2021, 5, 25)
duration = timedelta(days=1)
metrics_uri = os.environ['METRICS_RESOURCE_URI']
response = client.query_resource(
    metrics_uri,
    metric_names=["PublishSuccessCount"],
    timespan=(start_time, duration)
)

for metric in response.metrics:
    print(metric.name)
    for time_series_element in metric.timeseries:
        for metric_value in time_series_element.data:
            print(metric_value.time_stamp)
The metrics query API returns a MetricsQueryResult object. The MetricsQueryResult object contains properties such as a list of Metric-typed objects, granularity, namespace, and timespan. The list of Metric objects can be accessed through the metrics property. Each Metric object in this list contains a list of TimeSeriesElement objects. Each TimeSeriesElement object contains data and metadata_values properties. In visual form, the object hierarchy of the response resembles the following structure:
MetricsQueryResult
|---granularity
|---timespan
|---cost
|---namespace
|---resource_region
|---metrics (list of `Metric` objects)
    |---id
    |---type
    |---name
    |---unit
    |---timeseries (list of `TimeSeriesElement` objects)
        |---metadata_values
        |---data (list of data points represented by `MetricValue` objects)
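For example, the following code queries the MatchedEventCount metric with a COUNT aggregation and walks this hierarchy, printing the count of matched events at each timestamp: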
import os
from azure.monitor.query import MetricsQueryClient, MetricAggregationType
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)
metrics_uri = os.environ['METRICS_RESOURCE_URI']
response = client.query_resource(
    metrics_uri,
    metric_names=["MatchedEventCount"],
    aggregations=[MetricAggregationType.COUNT]
)

for metric in response.metrics:
    print(metric.name)
    for time_series_element in metric.timeseries:
        for metric_value in time_series_element.data:
            if metric_value.count != 0:
                print(
                    "There are {} matched events at {}".format(
                        metric_value.count,
                        metric_value.time_stamp
                    )
                )
Enable the azure.monitor.query logger to collect traces from the library.
The Monitor Query client library raises exceptions defined in Azure Core.
This library uses the standard logging library for logging. Basic information about HTTP sessions, such as URLs and headers, is logged at the INFO level.
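As a minimal sketch, the library's logger can be routed to stdout, and detailed DEBUG-level output can be enabled with the azure-core logging_enable keyword argument:

import logging
import sys
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Route the azure.monitor.query logger to stdout at DEBUG level.
logger = logging.getLogger("azure.monitor.query")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))

# logging_enable can also be passed on a single operation instead of the client.
client = LogsQueryClient(DefaultAzureCredential(), logging_enable=True)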
Optional keyword arguments can be passed in at the client and per-operation level. The azure-core reference documentation describes available configurations for retries, logging, transport protocols, and more.
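For instance, here's a sketch that tunes the azure-core retry policy and enables verbose logging for a single call; retry_total and logging_enable are azure-core keyword arguments, and the workspace ID and query are placeholders:

import os

response = client.query_workspace(   # client: a LogsQueryClient, e.g. from the sketch above
    os.environ['LOG_WORKSPACE_ID'],
    "AppRequests | take 5",
    timespan=None,
    retry_total=5,         # azure-core retry setting, applied to this call only
    logging_enable=True    # verbose logging for this call only
)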
To learn more about Azure Monitor, see the Azure Monitor service documentation.
The following code samples show common scenarios with the Azure Monitor Query client library.
- Send a single query with LogsQueryClient and handle the response as a table (async sample)
- Send a single query with LogsQueryClient and handle the response in key-value form
- Send a single query with LogsQueryClient without pandas
- Send a single query with LogsQueryClient across multiple workspaces
- Send multiple queries with LogsQueryClient
- Send a single query with LogsQueryClient using server timeout
- Send a query using MetricsQueryClient (async sample)
- Get a list of metric namespaces (async sample)
- Get a list of metric definitions (async sample)
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.