Azure OpenAI Provider
Azure OpenAI Service provides REST API access to OpenAI's language models, including GPT-4, GPT-4o, and GPT-3.5-Turbo, with enterprise-grade security, compliance, and regional availability. The Microsoft Agent Framework provides multiple ways to work with Azure OpenAI depending on your use case.

Provider Options

Azure OpenAI integration offers three main approaches:

- Responses Client: direct chat completions with streaming and tools
- Chat Client: chat-based interactions with conversation management
- Azure AI Project: persistent agents with Azure AI Foundry projects
Installation
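Install the framework package and the Azure identity library. The package names below are assumptions; verify them against the framework's install guide:

```shell
# Package names are assumptions -- confirm against the official install docs.
pip install agent-framework azure-identity
```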
Authentication
Azure OpenAI supports multiple authentication methods:

Azure CLI Credential (Recommended for Development)
Managed Identity (Recommended for Production)
API Key (Not Recommended)
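The recommendations above map onto azure-identity credential types: AzureCliCredential for local development and ManagedIdentityCredential in production. A minimal sketch of switching between them; the AGENT_ENV variable is hypothetical, used only to illustrate the decision:

```python
import os

def credential_kind(env=None):
    """Pick a credential strategy: managed identity in production,
    the Azure CLI identity everywhere else. AGENT_ENV is a hypothetical
    variable name used only to illustrate the decision."""
    env = env or os.environ.get("AGENT_ENV", "development")
    return "managed_identity" if env == "production" else "azure_cli"

def make_credential():
    # Lazy imports keep this module importable without azure-identity installed.
    if credential_kind() == "managed_identity":
        from azure.identity import ManagedIdentityCredential
        return ManagedIdentityCredential()
    from azure.identity import AzureCliCredential
    return AzureCliCredential()
```

Either credential object can then be passed wherever a client accepts a `credential` argument.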
Azure OpenAI Responses Client
The Responses Client provides direct access to Azure OpenAI chat completions with support for streaming, function calling, and structured outputs.

Basic Usage
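A minimal sketch, with assumptions: the module path `agent_framework.azure`, the `AzureOpenAIResponsesClient` class, its constructor parameters, and the `get_response` method are taken from the framework's naming conventions and may differ in your installed version:

```python
import asyncio

async def main() -> None:
    # Assumed module path and class name -- verify against your installed version.
    from agent_framework.azure import AzureOpenAIResponsesClient
    from azure.identity import AzureCliCredential

    client = AzureOpenAIResponsesClient(
        endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        deployment_name="gpt-4o-mini",
        credential=AzureCliCredential(),
    )
    response = await client.get_response("Say hello in one short sentence.")
    print(response.text)

if __name__ == "__main__":
    asyncio.run(main())
```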
Configuration
- Environment Variables
- Explicit Configuration
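Both styles can be combined in a small settings helper. The environment variable names below (AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT_NAME, AZURE_OPENAI_API_VERSION) follow common Azure OpenAI conventions but are assumptions; confirm the exact names your client reads, and pin the API version your resource supports:

```python
import os
from dataclasses import dataclass

@dataclass
class AzureOpenAISettings:
    endpoint: str
    deployment: str
    api_version: str = "2024-10-21"  # illustrative default, not authoritative

    @classmethod
    def from_env(cls) -> "AzureOpenAISettings":
        """Read settings from conventionally named environment variables."""
        return cls(
            endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
            api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2024-10-21"),
        )
```

Failing fast with a KeyError on a missing required variable is usually preferable to a confusing authentication error later.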
Streaming Responses
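The consumption pattern is the same regardless of client: iterate chunks as they arrive, render them immediately, and accumulate the text. A stub async generator stands in for the client's stream below; with a real client you would iterate its streaming call (the method name varies by client) the same way:

```python
import asyncio
from typing import AsyncIterator

async def stub_stream() -> AsyncIterator[str]:
    """Stand-in for a streaming response; yields text chunks."""
    for chunk in ["Streaming ", "keeps ", "latency ", "low."]:
        yield chunk

async def consume(stream: AsyncIterator[str]) -> str:
    """Render each chunk as it arrives and return the full text."""
    parts: list[str] = []
    async for chunk in stream:
        print(chunk, end="", flush=True)  # incremental rendering
        parts.append(chunk)
    print()
    return "".join(parts)

if __name__ == "__main__":
    asyncio.run(consume(stub_stream()))
```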
Function Calling
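In the Agent Framework, a tool is typically a plain annotated Python function; the framework derives the JSON schema the model sees from the signature and docstring. Passing `tools=[get_weather]` to the client, shown in the comment, is an assumed wiring; the stub function itself is ordinary Python:

```python
from typing import Annotated

def get_weather(
    city: Annotated[str, "Name of the city to look up"],
) -> str:
    """Return a short weather report for a city (stubbed for illustration)."""
    # In production this would call a real weather API.
    return f"It is sunny in {city} today."

# Hedged usage -- the exact parameter name may differ in your client:
# response = await client.get_response("What's the weather in Paris?", tools=[get_weather])
```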
Azure OpenAI Chat Client
The Chat Client provides chat-based interactions with conversation management.

Azure AI Project Agent Provider

For persistent agents managed through Azure AI Foundry projects, use the AzureAIProjectAgentProvider.
Setup
- Create an Azure AI Foundry project at ai.azure.com
- Get your project endpoint from the project settings
- Deploy a model to your project
Basic Usage
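A hedged sketch using the AzureAIProjectAgentProvider named above. The constructor parameters and the `create_agent`/`run` method names are assumptions; consult the framework's API reference for your version:

```python
import asyncio

async def main() -> None:
    # Assumed module path and class name -- verify against your installed version.
    from agent_framework.azure import AzureAIProjectAgentProvider
    from azure.identity import AzureCliCredential

    provider = AzureAIProjectAgentProvider(
        project_endpoint="https://<your-project-endpoint>",  # placeholder
        model_deployment_name="gpt-4o-mini",
        credential=AzureCliCredential(),
    )
    agent = await provider.create_agent(
        name="support-agent",
        instructions="You answer questions about our product briefly.",
    )
    print(await agent.run("What can you help me with?"))

if __name__ == "__main__":
    asyncio.run(main())
```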
Configuration
- Environment Variables
- Explicit Configuration
Working with Existing Agents
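Because project agents are persistent, you can reattach to one created earlier instead of creating a new agent each run. The sketch below is hypothetical: `get_agent` and its `agent_id` parameter are assumed names, so look up the retrieval call your provider actually exposes:

```python
import asyncio

async def reuse_agent(agent_id: str) -> None:
    # Assumed module path; get_agent and agent_id are hypothetical names.
    from agent_framework.azure import AzureAIProjectAgentProvider
    from azure.identity import AzureCliCredential

    provider = AzureAIProjectAgentProvider(credential=AzureCliCredential())
    agent = await provider.get_agent(agent_id=agent_id)  # hypothetical method
    print(await agent.run("Continue where we left off."))
```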
Available Models
Azure OpenAI supports various model families:

| Model Family | Models | Best For |
|---|---|---|
| GPT-4o | gpt-4o, gpt-4o-mini | Latest multimodal model, vision, function calling |
| GPT-4 | gpt-4, gpt-4-32k | Complex reasoning, high-quality outputs |
| GPT-3.5 | gpt-35-turbo | Fast, cost-effective chat |
| o1/o3 | o1-preview, o1-mini, o3-mini | Advanced reasoning tasks |
Model availability varies by region. Check the Azure OpenAI model availability page for details.
Advanced Features
Code Interpreter
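A hedged sketch of attaching a hosted code-interpreter tool so the model can execute Python server-side. `HostedCodeInterpreterTool` and the `tools=` wiring are assumptions about the framework's API; verify them before use:

```python
import asyncio

async def main() -> None:
    # Assumed names -- verify against your installed framework version.
    from agent_framework import HostedCodeInterpreterTool
    from agent_framework.azure import AzureOpenAIResponsesClient
    from azure.identity import AzureCliCredential

    client = AzureOpenAIResponsesClient(credential=AzureCliCredential())
    agent = client.create_agent(
        instructions="Use code to answer math questions precisely.",
        tools=HostedCodeInterpreterTool(),
    )
    print(await agent.run("What is the 20th Fibonacci number?"))

if __name__ == "__main__":
    asyncio.run(main())
```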
Azure OpenAI Assistants support a code interpreter tool for server-side Python code execution.

File Search

Enable file search for retrieval-augmented generation (RAG) over your own documents.

Structured Outputs
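Structured outputs constrain the model to emit JSON matching a schema, which you then parse into a typed object. The `response_format` wiring is client-specific and not shown; the parsing step below is plain Python, with WeatherReport as an illustrative schema:

```python
import json
from dataclasses import dataclass

@dataclass
class WeatherReport:
    """Illustrative schema the model is asked to fill."""
    city: str
    temperature_c: float
    conditions: str

def parse_report(raw: str) -> WeatherReport:
    """Parse the model's JSON text into a typed object."""
    return WeatherReport(**json.loads(raw))

# Example raw model output:
report = parse_report('{"city": "Paris", "temperature_c": 21.5, "conditions": "sunny"}')
print(report.city)
```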
Best Practices
Use Managed Identities in Production
Always use Azure managed identities for authentication in production environments. This eliminates the need to store API keys and provides better security through Azure AD.
Choose the Right Client Type
- Use Responses Client for simple chat completions and streaming
- Use Chat Client when you need conversation management
- Use Azure AI Project Provider for persistent agents with managed state
Enable Retry Logic
Azure OpenAI has rate limits. Implement retry logic with exponential backoff for production applications.
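A self-contained sketch of exponential backoff with jitter, the standard response to 429 rate-limit errors. The backoff arithmetic is generic; the broad `except Exception` is a placeholder you should narrow to your client's rate-limit exception type:

```python
import random
import time

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Deterministic part of the schedule: base * 2^n, capped."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

def call_with_retries(call, attempts: int = 5, base: float = 1.0):
    """Invoke call(); on failure, sleep through the backoff schedule."""
    for n, delay in enumerate(backoff_delays(attempts, base=base)):
        try:
            return call()
        except Exception:  # narrow this to your client's rate-limit error type
            if n == attempts - 1:
                raise  # schedule exhausted
            time.sleep(delay + random.uniform(0, delay / 2))  # add jitter
```

Jitter spreads retries from concurrent workers so they do not hammer the endpoint in lockstep.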
Monitor Costs
Different models have different pricing. Use GPT-4o-mini for development and testing, and reserve GPT-4o for production workloads that require maximum quality.
Regional Deployment
Deploy models in regions close to your users for better latency. Azure OpenAI is available in multiple regions worldwide.
Troubleshooting
Authentication Errors
If you see authentication errors:
- Run az login to authenticate with Azure CLI
- Ensure your account has access to the Azure OpenAI resource
- Check that the correct subscription is selected: az account show
- Verify RBAC roles: you need "Cognitive Services OpenAI User" or higher
Rate Limit Errors
Azure OpenAI has rate limits based on your deployment:
- Check your quota in the Azure Portal
- Implement exponential backoff retry logic
- Consider requesting a quota increase
- Use multiple deployments to distribute load
Model Not Found
If you get model not found errors:
- Verify the deployment name matches exactly
- Check that the model is deployed in your resource
- Ensure you’re using the correct endpoint
- Verify the API version is compatible with the model
Next Steps
Function Tools
Learn how to add function calling to your agents
Sessions & Memory
Manage multi-turn conversations and sessions
Workflows
Build complex multi-agent workflows
Deploy to Azure
Deploy your agents to Azure Functions