Commit d06cd7d

Update docs metadata
1 parent 56f70e9 commit d06cd7d

File tree

2 files changed: +19 −19 lines changed

docs-ref-services/latest/ai-evaluation-readme.md

Lines changed: 11 additions & 14 deletions
````diff
@@ -1,12 +1,12 @@
 ---
 title: Azure AI Evaluation client library for Python
 keywords: Azure, python, SDK, API, azure-ai-evaluation, evaluation
-ms.date: 02/26/2025
+ms.date: 04/01/2025
 ms.topic: reference
 ms.devlang: python
 ms.service: evaluation
 ---
-# Azure AI Evaluation client library for Python - version 1.3.0
+# Azure AI Evaluation client library for Python - version 1.4.0
 
 
 Use Azure AI Evaluation SDK to assess the performance of your generative AI applications. Generative AI application generations are quantitatively measured with mathematical based metrics, AI-assisted quality and safety metrics. Metrics are defined as `evaluators`. Built-in or custom evaluators can provide comprehensive insights into the application's capabilities and limitations.
@@ -32,7 +32,7 @@ Azure AI SDK provides following to evaluate Generative AI Applications:
 ### Prerequisites
 
 - Python 3.9 or later is required to use this package.
-- [Optional] You must have [Azure AI Project][ai_project] or [Azure Open AI][azure_openai] to use AI-assisted evaluators
+- [Optional] You must have [Azure AI Foundry Project][ai_project] or [Azure Open AI][azure_openai] to use AI-assisted evaluators
 
 ### Install the package
 
@@ -41,10 +41,6 @@ Install the Azure AI Evaluation SDK for Python with [pip][pip_link]:
 ```bash
 pip install azure-ai-evaluation
 ```
-If you want to track results in [AI Studio][ai_studio], install `remote` extra:
-```python
-pip install azure-ai-evaluation[remote]
-```
 
 ## Key concepts
 
@@ -153,9 +149,9 @@ result = evaluate(
         }
     }
 }
-# Optionally provide your AI Studio project information to track your evaluation results in your Azure AI Studio project
+# Optionally provide your AI Foundry project information to track your evaluation results in your Azure AI Foundry project
 azure_ai_project = azure_ai_project,
-# Optionally provide an output path to dump a json of metric summary, row level data and metric and studio URL
+# Optionally provide an output path to dump a json of metric summary, row level data and metric and AI Foundry URL
 output_path="./evaluation_results.json"
 )
 ```
@@ -315,18 +311,18 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
 
 <!-- LINKS -->
 
-[source_code]: https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-evaluation_1.3.0/sdk/evaluation/azure-ai-evaluation
+[source_code]: https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-evaluation_1.4.0/sdk/evaluation/azure-ai-evaluation
 [evaluation_pypi]: https://pypi.org/project/azure-ai-evaluation/
 [evaluation_ref_docs]: https://learn.microsoft.com/python/api/azure-ai-evaluation/azure.ai.evaluation?view=azure-python-preview
 [evaluation_samples]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios
 [product_documentation]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk
 [python_logging]: https://docs.python.org/3/library/logging.html
 [sdk_logging_docs]: /azure/developer/python/azure-sdk-logging
-[azure_core_readme]: https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-evaluation_1.3.0/sdk/core/azure-core/README.md
+[azure_core_readme]: https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-evaluation_1.4.0/sdk/core/azure-core/README.md
 [pip_link]: https://pypi.org/project/pip/
 [azure_core_ref_docs]: https://aka.ms/azsdk-python-core-policies
-[azure_core]: https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-evaluation_1.3.0/sdk/core/azure-core/README.md
-[azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-evaluation_1.3.0/sdk/identity/azure-identity
+[azure_core]: https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-evaluation_1.4.0/sdk/core/azure-core/README.md
+[azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-evaluation_1.4.0/sdk/identity/azure-identity
 [cla]: https://cla.microsoft.com
 [code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
 [coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
@@ -336,7 +332,7 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
 [evaluators]: https://learn.microsoft.com/python/api/azure-ai-evaluation/azure.ai.evaluation?view=azure-python-preview
 [evaluate_api]: https://learn.microsoft.com/python/api/azure-ai-evaluation/azure.ai.evaluation?view=azure-python-preview#azure-ai-evaluation-evaluate
 [evaluate_app]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Supported_Evaluation_Targets/Evaluate_App_Endpoint
-[evaluation_tsg]: https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-evaluation_1.3.0/sdk/evaluation/azure-ai-evaluation/TROUBLESHOOTING.md
+[evaluation_tsg]: https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-evaluation_1.4.0/sdk/evaluation/azure-ai-evaluation/TROUBLESHOOTING.md
 [ai_studio]: https://learn.microsoft.com/azure/ai-studio/what-is-ai-studio
 [ai_project]: https://learn.microsoft.com/azure/ai-studio/how-to/create-projects?tabs=ai-studio
 [azure_openai]: https://learn.microsoft.com/azure/ai-services/openai/
@@ -352,3 +348,4 @@ This project has adopted the [Microsoft Open Source Code of Conduct][code_of_con
 [adversarial_simulation]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Simulators/Simulate_Adversarial_Data
 [simulate_with_conversation_starter]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Simulators/Simulate_Context-Relevant_Data/Simulate_From_Conversation_Starter
 [adversarial_jailbreak]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/simulator-interaction-data#simulating-jailbreak-attacks
+
````
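The readme snippet edited above passes `output_path="./evaluation_results.json"` to `evaluate(...)` to dump a metric summary, row-level data, and the AI Foundry URL. A minimal sketch of consuming such a dump, assuming a top-level `metrics` / `rows` / `studio_url` layout — that schema is an assumption for illustration, not something this diff confirms:

```python
import json
from pathlib import Path


def summarize_results(path: str) -> dict:
    """Load an evaluation_results.json dump and pull out headline numbers.

    Assumes top-level "metrics" (aggregate scores), "rows" (per-input
    results), and "studio_url" keys -- a hypothetical schema.
    """
    data = json.loads(Path(path).read_text())
    return {
        "num_rows": len(data.get("rows", [])),
        "metrics": data.get("metrics", {}),
        "url": data.get("studio_url"),
    }


# Hypothetical sample payload, written out only to exercise the helper.
sample = {
    "metrics": {"relevance.gpt_relevance": 4.2},
    "rows": [{"inputs.query": "q1"}, {"inputs.query": "q2"}],
    "studio_url": "https://ai.azure.com/example",
}
Path("evaluation_results.json").write_text(json.dumps(sample))

summary = summarize_results("evaluation_results.json")
print(summary["num_rows"])  # 2
```

If the real file uses different key names, only the three `data.get(...)` lookups need to change.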

metadata/latest/azure-ai-evaluation.json

Lines changed: 8 additions & 5 deletions
```diff
@@ -1,6 +1,6 @@
 {
   "Name": "azure-ai-evaluation",
-  "Version": "1.3.0",
+  "Version": "1.4.0",
   "DevVersion": null,
   "DirectoryPath": "sdk/evaluation/azure-ai-evaluation",
   "ServiceDirectory": "evaluation",
@@ -10,22 +10,25 @@
   "SdkType": "client",
   "IsNewSdk": true,
   "ArtifactName": "azure-ai-evaluation",
-  "ReleaseStatus": "2025-02-28",
+  "ReleaseStatus": "2025-03-27",
   "IncludedForValidation": false,
   "AdditionalValidationPackages": [
     ""
   ],
   "ArtifactDetails": {
+    "safeName": "azureaievaluation",
     "name": "azure-ai-evaluation",
-    "safeName": "azureaievaluation"
+    "triggeringPaths": [
+      "/sdk/evaluation/ci.yml"
+    ]
   },
   "CIParameters": {
     "CIMatrixConfigs": [
       {
+        "GenerateVMJobs": true,
         "Selection": "sparse",
-        "Path": "sdk/evaluation/platform-matrix.json",
         "Name": "ai_ci_matrix",
-        "GenerateVMJobs": true
+        "Path": "sdk/evaluation/platform-matrix.json"
       }
     ]
   },
```
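Release tooling that consumes this metadata file typically just parses the JSON and branches on the bumped fields. A small sketch over an in-memory copy of the fields shown in the diff — the `is_released` helper and the `"Unreleased"` sentinel are assumptions for illustration, not part of this commit:

```python
import json

# Fields from the diff above, reproduced inline so the sketch is
# self-contained (round-tripped through json to mimic reading a file).
metadata = json.loads(json.dumps({
    "Name": "azure-ai-evaluation",
    "Version": "1.4.0",
    "ReleaseStatus": "2025-03-27",
    "ArtifactDetails": {
        "safeName": "azureaievaluation",
        "name": "azure-ai-evaluation",
        "triggeringPaths": ["/sdk/evaluation/ci.yml"],
    },
}))


def is_released(meta: dict) -> bool:
    """Treat ReleaseStatus as shipped when it holds a date string.

    Assumes the field is either a YYYY-MM-DD date or an "Unreleased"
    sentinel -- a hypothetical convention, not stated in this diff.
    """
    status = meta.get("ReleaseStatus")
    return bool(status) and status != "Unreleased"


print(metadata["Version"], is_released(metadata))  # 1.4.0 True
```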
