# Azure AI Evaluation client library for Python - version 1.4.0
Use the Azure AI Evaluation SDK to assess the performance of your generative AI applications. Application generations are quantitatively measured with mathematical metrics as well as AI-assisted quality and safety metrics. Metrics are defined as `evaluators`; built-in or custom evaluators can provide comprehensive insight into an application's capabilities and limitations.
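As a sketch of what "custom evaluators" means in practice, an evaluator can be any callable that takes generation fields and returns a dict mapping metric names to scores. The `AnswerLengthEvaluator` class and `answer_length` key below are illustrative only, not part of the SDK:

```python
# A minimal custom evaluator: any callable accepting keyword arguments
# (such as the model response) and returning a dict of metric scores.
# AnswerLengthEvaluator is a hypothetical example, not a built-in.
class AnswerLengthEvaluator:
    def __call__(self, *, response: str, **kwargs) -> dict:
        # Score the response by word count, a simple mathematical metric.
        return {"answer_length": len(response.split())}

evaluator = AnswerLengthEvaluator()
print(evaluator(response="The capital of France is Paris."))
# → {'answer_length': 6}
```

An object like this can be passed alongside built-in evaluators wherever an evaluator is expected, since the framework only requires it to be callable.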
### Prerequisites
- Python 3.9 or later is required to use this package.
- [Optional] You must have an [Azure AI Foundry Project][ai_project] or [Azure OpenAI][azure_openai] resource to use AI-assisted evaluators
### Install the package

Install the Azure AI Evaluation SDK for Python with [pip][pip_link]:

```bash
pip install azure-ai-evaluation
```
## Key concepts
```python
result = evaluate(
    # ... data, evaluators, and evaluator_config elided ...
    # Optionally provide your Azure AI Foundry project information to track
    # your evaluation results in your Azure AI Foundry project
    azure_ai_project=azure_ai_project,
    # Optionally provide an output path to dump a JSON of the metric summary,
    # row-level data, and the AI Foundry URL
    output_path="./evaluation_results.json"
)
```
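When `output_path` is set, the dumped file is ordinary JSON, so it can be inspected with the standard library alone. The result layout below (`metrics` and `rows` keys, and the `relevance.gpt_relevance` metric name) is an assumption for illustration, not a guaranteed schema:

```python
import json

# Sketch: simulate the file evaluate(..., output_path=...) would write,
# then read it back. The key names here are assumed for illustration.
sample = {"metrics": {"relevance.gpt_relevance": 4.0}, "rows": []}
with open("./evaluation_results.json", "w") as f:
    json.dump(sample, f)

with open("./evaluation_results.json") as f:
    results = json.load(f)

# Aggregate scores live under the (assumed) "metrics" key.
print(results["metrics"])
# → {'relevance.gpt_relevance': 4.0}
```

Reading the file back this way is useful for post-processing results in notebooks or CI pipelines without re-running the evaluation.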