---
title: 'Quickstart: Use the OpenAI Service to make your AI Assistant with the REST API'
titleSuffix: Azure OpenAI Service
description: Walkthrough on how to get started with Azure OpenAI AI Assistants API with the REST API.
manager: nitinme
author: aahill
ms.author: aahi
ms.service: azure-ai-openai
ms.topic: include
ms.date: 05/20/2024
---

## Prerequisites

## Set up

### Retrieve key and endpoint

To successfully make a call against Azure OpenAI, you'll need the following:

| Variable name | Value |
|---|---|
| `ENDPOINT` | The service endpoint can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the endpoint via the **Deployments** page in Azure AI Foundry portal. An example endpoint is: `https://docs-test-001.openai.azure.com/`. |
| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`. |
| `DEPLOYMENT-NAME` | This value corresponds to the custom name you chose for your deployment when you deployed a model. This value can be found under **Resource Management** > **Deployments** in the Azure portal or via the **Deployments** page in Azure AI Foundry portal. |

Go to your resource in the Azure portal. The Endpoint and Keys can be found in the Resource Management section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either KEY1 or KEY2. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.

:::image type="content" source="../media/quickstarts/endpoint.png" alt-text="Screenshot of the overview blade for an Azure OpenAI resource in the Azure portal with the endpoint & access keys location circled in red." lightbox="../media/quickstarts/endpoint.png":::

[!INCLUDE environment-variables]
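
The curl commands in this quickstart read your key from the `AZURE_OPENAI_API_KEY` environment variable. If you prefer to set it manually, a minimal sketch for a bash shell follows; the endpoint variable is optional here, because the commands below reference your resource name directly.

    # Minimal sketch (bash): set the key that the curl commands below read from the environment.
    export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE"

    # Optional: keep your endpoint handy as well. The commands in this quickstart use the
    # YOUR_RESOURCE_NAME placeholder directly, so this variable is only for convenience.
    export AZURE_OPENAI_ENDPOINT="https://YOUR_RESOURCE_NAME.openai.azure.com"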

## REST API

### Create an assistant

> [!NOTE]
> With Azure OpenAI, the `model` parameter requires the model deployment name. If your model deployment name is different from the underlying model name, adjust your code to `"model": "{your-custom-model-deployment-name}"`.

    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-05-01-preview \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "instructions": "You are an AI assistant that can write code to help answer math questions.",
        "name": "Math Assist",
        "tools": [{"type": "code_interpreter"}],
        "model": "gpt-4-1106-preview"
      }'
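
The response to this call includes an `id` for the new assistant, which the later steps show as the placeholder `asst_abc123`. As a convenience, you can capture it in a shell variable; the sketch below assumes the `jq` utility is installed, which the quickstart itself doesn't require.

    # Optional sketch (assumes jq is installed): create the assistant and save its ID for later steps.
    ASSISTANT_ID=$(curl -s "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-05-01-preview" \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"instructions": "You are an AI assistant that can write code to help answer math questions.", "name": "Math Assist", "tools": [{"type": "code_interpreter"}], "model": "gpt-4-1106-preview"}' \
      | jq -r '.id')
    echo "Created assistant: $ASSISTANT_ID"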

#### Tools

An individual assistant can access up to 128 tools including code interpreter, as well as any custom tools you create via functions.
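
For illustration only, the sketch below shows how an assistant declaration might combine code interpreter with a custom function tool. The `get_stock_price` function, its parameters, and the assistant name are hypothetical placeholders, not part of this quickstart.

    # Illustrative sketch: one code_interpreter tool plus a hypothetical custom function tool.
    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-05-01-preview \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "instructions": "You are an assistant that can run code and call the provided functions.",
        "name": "Example Assist",
        "tools": [
          {"type": "code_interpreter"},
          {
            "type": "function",
            "function": {
              "name": "get_stock_price",
              "description": "Gets the current price for a stock ticker symbol.",
              "parameters": {
                "type": "object",
                "properties": {
                  "symbol": {"type": "string", "description": "The ticker symbol, for example MSFT."}
                },
                "required": ["symbol"]
              }
            }
          }
        ],
        "model": "gpt-4-1106-preview"
      }'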

### Create a thread

    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024-05-01-preview \
      -H "Content-Type: application/json" \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -d ''
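
The response includes the thread's `id` (shown in the following steps as the placeholder `thread_abc123`). As with the assistant, you can capture it in a shell variable; this sketch again assumes `jq` is installed.

    # Optional sketch (assumes jq is installed): create the thread and save its ID.
    THREAD_ID=$(curl -s "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024-05-01-preview" \
      -H "Content-Type: application/json" \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -d '' | jq -r '.id')
    echo "Created thread: $THREAD_ID"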

### Add a user question to the thread

    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/messages?api-version=2024-05-01-preview \
      -H "Content-Type: application/json" \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -d '{
          "role": "user",
          "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?"
        }'

### Run the thread

    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs?api-version=2024-05-01-preview \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "assistant_id": "asst_abc123"
      }'
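
The response includes the run's `id` (`run_abc123` in the next step) along with its current `status`. A hedged sketch of starting the run and saving that ID, assuming `jq` and the `ASSISTANT_ID`/`THREAD_ID` variables captured in the earlier sketches:

    # Optional sketch: start the run with the IDs captured earlier and save the run ID.
    RUN_ID=$(curl -s "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/$THREAD_ID/runs?api-version=2024-05-01-preview" \
      -H "api-key: $AZURE_OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d "{\"assistant_id\": \"$ASSISTANT_ID\"}" | jq -r '.id')
    echo "Started run: $RUN_ID"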

### Retrieve the status of the run

    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs/run_abc123?api-version=2024-05-01-preview \
      -H "api-key: $AZURE_OPENAI_API_KEY"

### Assistant response

    curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/messages?api-version=2024-05-01-preview \
      -H "Content-Type: application/json" \
      -H "api-key: $AZURE_OPENAI_API_KEY"

### Understanding your results

In this example, we create an assistant with code interpreter enabled. When we ask the assistant a math question, it translates the question into Python code and executes that code in a sandboxed environment to determine the answer. The code the model creates and tests to arrive at an answer is:

    from sympy import symbols, Eq, solve  
      
    # Define the variable  
    x = symbols('x')  
      
    # Define the equation  
    equation = Eq(3*x + 11, 14)  
      
    # Solve the equation  
    solution = solve(equation, x)  
    solution  

Code interpreter gives the model the capability to respond to more complex queries by converting questions into code and running that code iteratively in the Python sandbox until it reaches a solution. However, you still need to validate the response to confirm that the model correctly translated your question into a valid representation in code.

## Clean up resources

If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
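
If you prefer to do this from the command line, a hedged sketch using the Azure CLI follows; `my-openai-rg` is a placeholder for your own resource group name.

    # Deletes the resource group and every resource it contains. Replace the placeholder name.
    az group delete --name my-openai-rg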

## See also