AzureProvider is a unified wrapper for three distinct Azure AI deployment types:
- Azure OpenAI — GPT-4o and other OpenAI models deployed in your Azure subscription
- Azure AI Foundry (Anthropic) — Claude models via Azure AI Foundry’s managed service
- Azure AI Inference (MaaS) — Llama, Mistral, Phi, and other models via Azure’s model-as-a-service API
AzureProvider interface
Logicore detects the deployment type automatically from the endpoint URL and model name, or you can set it explicitly.
Installation
Install the Python dependencies for your deployment type:
- Azure OpenAI (GPT models)
- Azure AI Foundry (Claude models)
- Azure AI Inference (MaaS)
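The original tab contents did not survive extraction. The commands below are an assumption based on the standard Azure SDKs for each deployment type (the anthropic requirement is confirmed in Troubleshooting; openai and azure-ai-inference are the usual clients); check your Logicore release notes for the exact packages it expects:

```shell
# Azure OpenAI (GPT models)
pip install openai

# Azure AI Foundry (Claude models)
pip install anthropic

# Azure AI Inference (MaaS)
pip install azure-ai-inference
```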
Constructor parameters
model_name
The Azure deployment name: the name you chose when you deployed the model, not necessarily the raw model ID. For example, a deployment of gpt-4o might be named "my-gpt4o-deployment". This value is passed as model in every API call.

api_key
Your Azure API key. If omitted, the provider reads AZURE_API_KEY, then AZURE_OPENAI_API_KEY, from the environment. Raises ValueError if neither is set.

endpoint
Your Azure resource endpoint URL, e.g. "https://your-resource.openai.azure.com". If omitted, the provider reads AZURE_ENDPOINT, then AZURE_OPENAI_ENDPOINT. Raises ValueError if not found.

API version
The Azure API version string. If omitted, a sensible default is applied per deployment type:
- OpenAI: "2024-10-21"
- Anthropic: "2023-06-01"
- Inference (MaaS): "2024-05-01-preview"

model_type
Explicit deployment type: "openai", "anthropic", or "inference". When omitted, the provider auto-detects based on the endpoint URL and deployment name using these heuristics, in order:
- Endpoint contains openai.azure.com or name contains gpt → "openai"
- Endpoint or name contains anthropic or claude → "anthropic"
- Endpoint ends in /v1 (non-Azure-OpenAI) or contains inference → "inference"
- Default fallback → "openai"

Extra keyword arguments
Stored internally; not currently forwarded to the SDK client.
Basic usage
- Azure OpenAI
- Azure AI Foundry (Anthropic)
- Azure AI Inference (MaaS)
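The tab contents above did not survive extraction. A minimal sketch of what usage might look like, assuming AzureProvider is importable from a logicore.providers module and exposes an async chat method (both are assumptions; adjust to your installed version):

```python
import asyncio
# Hypothetical import path and method name; check your Logicore version.
from logicore.providers import AzureProvider

async def main():
    provider = AzureProvider(
        model_name="my-gpt4o-deployment",  # the Azure *deployment* name
        endpoint="https://your-resource.openai.azure.com",
        model_type="openai",               # or "anthropic" / "inference"
    )
    reply = await provider.chat([{"role": "user", "content": "Hello!"}])
    print(reply)

asyncio.run(main())
```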
Streaming
AzureProvider supports streaming for all deployment types. The Anthropic backend uses a thread-based queue to bridge the synchronous Anthropic stream API with asyncio.
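The original queue code was lost from this page, but the general pattern, a worker thread feeding an asyncio.Queue via call_soon_threadsafe, can be sketched generically (illustrative, not the library's implementation):

```python
import asyncio
import threading

def sync_stream():
    # Stand-in for a blocking SDK stream (e.g. Anthropic's sync event iterator).
    yield from ["Hel", "lo", "!"]

async def bridge(sync_iter):
    """Re-emit items from a synchronous iterator as an async generator,
    running the blocking iteration in a worker thread."""
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()
    done = object()  # sentinel marking end of stream

    def worker():
        for item in sync_iter:
            loop.call_soon_threadsafe(queue.put_nowait, item)
        loop.call_soon_threadsafe(queue.put_nowait, done)

    threading.Thread(target=worker, daemon=True).start()
    while (item := await queue.get()) is not done:
        yield item

async def collect():
    return [chunk async for chunk in bridge(sync_stream())]
```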
Tool calling
Tool calling is fully supported for model_type="openai" and model_type="inference" deployments. For model_type="anthropic", the Anthropic tool-use format is used automatically, but the current implementation returns the text response only (tool results from Anthropic calls are extracted from content text blocks).

Vision / multimodal
Vision is available on Azure OpenAI deployments that use GPT-4o family models. Supported image_url values:
- Local file path
- https:// image URL
- data:image/...;base64,... inline data
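For the URL form, the message payload follows the OpenAI chat-completions vision format, which Azure OpenAI mirrors; a sketch (field names per the OpenAI API):

```python
# Multimodal user message in OpenAI chat-completions format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                # Per the list above, a local file path or a
                # data:image/...;base64,... URI is also accepted.
                "image_url": {"url": "https://example.com/photo.png"},
            },
        ],
    }
]
```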
Model type auto-detection
If model_type is omitted, AzureProvider applies the following heuristics in order:
- Endpoint contains openai.azure.com or name contains gpt → "openai"
- Endpoint or name contains anthropic or claude → "anthropic"
- Endpoint ends in /v1 (non-Azure-OpenAI) or contains inference → "inference"
- Default fallback → "openai"
Set model_type explicitly to avoid ambiguity, especially for Foundry endpoints that may match multiple patterns.
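The heuristics can be expressed as a small standalone function (an illustration of the documented order, not the library's code):

```python
def detect_model_type(endpoint: str, model_name: str) -> str:
    """Apply the documented auto-detection heuristics in order."""
    ep, name = endpoint.lower(), model_name.lower()
    if "openai.azure.com" in ep or "gpt" in name:
        return "openai"
    if any(k in ep or k in name for k in ("anthropic", "claude")):
        return "anthropic"
    if ep.endswith("/v1") or "inference" in ep:
        return "inference"
    return "openai"  # default fallback
```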
Troubleshooting
ValueError: Azure API key is required
Neither api_key, AZURE_API_KEY, nor AZURE_OPENAI_API_KEY is set. Retrieve the key from your Azure resource under Keys and Endpoint in the Azure Portal.

ValueError: Azure Endpoint is required
Neither endpoint, AZURE_ENDPOINT, nor AZURE_OPENAI_ENDPOINT is set. Copy the endpoint URL from your resource overview page in the Azure Portal.

404 DeploymentNotFound
model_name must match the deployment name you created, not the underlying model name. In the Azure OpenAI Studio, navigate to Deployments and copy the exact deployment name.

ImportError for anthropic / AnthropicFoundry not found
The anthropic package is required for Anthropic-type deployments. Install it with pip install anthropic, then verify your endpoint matches the format provided in the Azure AI Foundry portal.

AuthenticationError: 401 Unauthorized
The API key was rejected by the resource. Re-copy the key from Keys and Endpoint in the Azure Portal and confirm it belongs to the resource your endpoint points at.
Wrong model_type selected (auto-detect mismatch)
If the provider initializes the wrong client, set model_type explicitly: model_type="openai", model_type="anthropic", or model_type="inference". This bypasses heuristic detection entirely.