The azure-openai provider supports the Azure OpenAI /chat/completions endpoint.
If you're using the new Microsoft Foundry portal, use the
microsoft-foundry provider instead.

Quick Start
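A minimal client definition might look like the following; the client name, resource name, deployment name, and environment-variable name are all placeholders:

```baml
client<llm> AzureGPT {
  provider azure-openai
  options {
    // Placeholder values: substitute your own resource and deployment names
    resource_name "my-resource"
    deployment_id "gpt-4o"
    api_version "2024-02-01"
    // Reads the key from the AZURE_OPENAI_API_KEY environment variable
    api_key env.AZURE_OPENAI_API_KEY
  }
}
```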
Authentication
Set your Azure OpenAI API key as an environment variable and reference it from your client's options.

Configuration Options
BAML-Specific Options
These options modify the API request sent to Azure OpenAI.

api_key
Your Azure OpenAI API key. Injected via the api-key header.

resource_name
Your Azure OpenAI resource name. Used to construct the base URL, which takes the form https://{resource_name}.openai.azure.com/openai/deployments/{deployment_id}.
deployment_id
Your Azure OpenAI deployment ID. This is the name you gave your model deployment in the Azure portal.
api_version
The Azure OpenAI API version. Passed as the api-version query parameter. Common versions: 2024-02-01, 2023-12-01-preview.
base_url
Alternative to resource_name and deployment_id. Specify the full base URL directly.

headers
Additional headers to send with requests.
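Putting these options together, a client that specifies the endpoint directly with base_url instead of resource_name and deployment_id might be sketched as follows; the URL, header name, and header value are placeholders:

```baml
client<llm> AzureDirect {
  provider azure-openai
  options {
    // Full endpoint specified directly, bypassing resource_name/deployment_id
    base_url "https://my-resource.openai.azure.com/openai/deployments/my-deployment"
    api_version "2024-02-01"
    api_key env.AZURE_OPENAI_API_KEY
    headers {
      // Illustrative custom header; replace with any headers your setup requires
      X-Request-Source "baml-example"
    }
  }
}
```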
Model Parameters

These parameters are passed directly to the Azure OpenAI API. Common parameters:

- temperature - Controls randomness (0-2)
- max_tokens - Maximum tokens to generate
- top_p - Nucleus sampling parameter
- frequency_penalty - Reduces repetition (-2.0 to 2.0)
- presence_penalty - Encourages new topics (-2.0 to 2.0)
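As a sketch, the parameters above sit alongside the BAML-specific options in the same options block; the parameter values here are illustrative only:

```baml
client<llm> AzureCreative {
  provider azure-openai
  options {
    resource_name "my-resource"
    deployment_id "gpt-4o"
    api_version "2024-02-01"
    api_key env.AZURE_OPENAI_API_KEY
    // Model parameters, forwarded as-is to the Azure OpenAI API
    temperature 0.7
    max_tokens 512
  }
}
```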
Finding Your Configuration

In the Azure Portal:

- Go to your Azure OpenAI resource
- Navigate to "Keys and Endpoint"
- Find:
  - Resource name: In the endpoint URL (e.g., https://YOUR-RESOURCE.openai.azure.com/)
  - API key: Under "Key 1" or "Key 2"
- Navigate to "Model deployments" to find your deployment_id
Features
- Streaming: Supported for real-time response generation
- Multimodal: Support depends on your deployed model
- Azure Integration: Works with Azure AD authentication and private endpoints
- Regional Deployment: Deploy in your preferred Azure region
Do Not Set

- messages: BAML automatically constructs this from your prompt.
- stream: BAML automatically sets this based on how you call the client in your code.