AzureProvider is a unified wrapper for three distinct Azure AI deployment types:
  • Azure OpenAI — GPT-4o and other OpenAI models deployed in your Azure subscription
  • Azure AI Foundry (Anthropic) — Claude models via Azure AI Foundry’s managed service
  • Azure AI Inference (MaaS) — Llama, Mistral, Phi, and other models via Azure’s model-as-a-service API
All three backends share the same AzureProvider interface. Logicore detects the deployment type automatically from the endpoint URL and model name, or you can set it explicitly.

Installation

1. Install Python dependencies

pip install logicore openai
2. Set environment variables

export AZURE_API_KEY=your_key_here
export AZURE_ENDPOINT=https://your-resource.openai.azure.com
On Windows:
set AZURE_API_KEY=your_key_here
set AZURE_ENDPOINT=https://your-resource.openai.azure.com
The provider also accepts AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT as fallback variable names.

Constructor parameters

from logicore.providers.azure_provider import AzureProvider

provider = AzureProvider(
    model_name="gpt-4o-mini",
    endpoint="https://your-resource.openai.azure.com",
    api_key="...",
    model_type="openai"
)
model_name (string, required)
The Azure deployment name — the name you chose when you deployed the model, not necessarily the raw model ID. For example, a deployment of gpt-4o might be named "my-gpt4o-deployment". This value is passed as model in every API call.

api_key (string)
Your Azure API key. If omitted, the provider reads AZURE_API_KEY, then AZURE_OPENAI_API_KEY, from the environment. Raises ValueError if neither is set.

endpoint (string)
Your Azure resource endpoint URL, e.g. "https://your-resource.openai.azure.com". If omitted, the provider reads AZURE_ENDPOINT, then AZURE_OPENAI_ENDPOINT. Raises ValueError if not found.
api_version (string)
The Azure API version string. If omitted, a sensible default is applied per deployment type:
  • OpenAI: "2024-10-21"
  • Anthropic: "2023-06-01"
  • Inference (MaaS): "2024-05-01-preview"
model_type (string)
Explicit deployment type: "openai", "anthropic", or "inference". When omitted, the provider auto-detects based on the endpoint URL and deployment name using these heuristics:
  • Endpoint contains openai.azure.com or name contains gpt → "openai"
  • Endpoint or name contains anthropic or claude → "anthropic"
  • Endpoint ends in /v1 (non-Azure-OpenAI) or contains inference → "inference"
  • Default fallback → "openai"
**kwargs (any)
Extra keyword arguments stored internally. Not currently forwarded to the SDK client.

Basic usage

import asyncio
from logicore.agents.agent import Agent
from logicore.providers.azure_provider import AzureProvider

async def main():
    provider = AzureProvider(
        model_name="gpt-4o-mini",
        endpoint="https://your-resource.openai.azure.com",
        api_key="your_key_here",
        model_type="openai"
    )

    agent = Agent(
        llm=provider,
        role="Enterprise Assistant",
        system_message="Respond with concise, production-safe guidance."
    )

    result = await agent.chat("List 5 Azure governance best practices.")
    print(result)

asyncio.run(main())

Streaming

AzureProvider supports streaming for all deployment types. The Anthropic backend uses a thread-based queue to bridge the synchronous Anthropic stream API with asyncio:
import asyncio
from logicore.providers.azure_provider import AzureProvider

async def main():
    provider = AzureProvider(
        model_name="gpt-4o-mini",
        endpoint="https://your-resource.openai.azure.com",
        model_type="openai"
    )

    def on_token(token: str):
        print(token, end="", flush=True)

    result = await provider.chat_stream(
        messages=[
            {"role": "user", "content": "What is zero-trust networking?"}
        ],
        on_token=on_token
    )

    print()
    print("Tool calls:", result.tool_calls)

asyncio.run(main())
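The thread-based bridging technique mentioned above — pumping a blocking synchronous stream into an asyncio consumer through a queue — looks roughly like this in general form (a simplified sketch, not logicore's actual implementation):

```python
import asyncio
import queue
import threading

_SENTINEL = object()  # marks end-of-stream in the queue

def _pump(sync_iter, q: queue.Queue) -> None:
    """Runs in a worker thread: push each item, then a sentinel."""
    for item in sync_iter:
        q.put(item)
    q.put(_SENTINEL)

async def bridge_stream(sync_iter):
    """Yield items from a blocking iterator without blocking the event loop."""
    q: queue.Queue = queue.Queue()
    threading.Thread(target=_pump, args=(sync_iter, q), daemon=True).start()
    loop = asyncio.get_running_loop()
    while True:
        # q.get blocks, so run it in the default executor
        item = await loop.run_in_executor(None, q.get)
        if item is _SENTINEL:
            break
        yield item
```

The same pattern works for any synchronous token stream that must feed an async on_token callback.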

Tool calling

import asyncio
from logicore.agents.agent import Agent
from logicore.providers.azure_provider import AzureProvider

def check_compliance(resource_name: str, policy: str) -> str:
    """Check whether an Azure resource complies with a named policy."""
    return f"{resource_name} complies with {policy}: True"

async def main():
    agent = Agent(
        llm=AzureProvider(
            model_name="gpt-4o-mini",
            endpoint="https://your-resource.openai.azure.com",
            model_type="openai"
        ),
        tools=[check_compliance]
    )

    result = await agent.chat("Does storage-account-prod comply with ISO-27001?")
    print(result)

asyncio.run(main())
Tool calling is fully supported for model_type="openai" and model_type="inference" deployments. For model_type="anthropic", the Anthropic tool-use format is applied automatically, but the current implementation returns only the text response: the result is assembled from the text content blocks of the Anthropic reply.
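Extracting text from content blocks, as described above, can be sketched generically (the field names follow the Anthropic Messages content-block shape; this helper is illustrative, not logicore's code):

```python
def extract_text(content_blocks: list[dict]) -> str:
    """Concatenate the text of all 'text'-type blocks, skipping tool_use blocks."""
    return "".join(
        block.get("text", "")
        for block in content_blocks
        if block.get("type") == "text"
    )
```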

Vision / multimodal

Vision is available on Azure OpenAI deployments that use GPT-4o family models:
import asyncio
from logicore.agents.agent import Agent
from logicore.providers.azure_provider import AzureProvider

async def main():
    agent = Agent(
        llm=AzureProvider(
            model_name="gpt-4o-mini",
            endpoint="https://your-resource.openai.azure.com",
            model_type="openai"
        )
    )

    message = [
        {"type": "text", "text": "Extract key details from this architecture diagram."},
        {"type": "image_url", "image_url": "/path/to/diagram.png"}
    ]

    result = await agent.chat(message)
    print(result)

asyncio.run(main())
Supported image_url values:
  • Local file path
  • https:// image URL
  • data:image/...;base64,... inline data
Vision support for Anthropic deployments requires Claude 3+ models. MaaS inference vision support depends on the specific model deployed.
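If you need the inline-data form explicitly (for example, to avoid relying on local-path resolution), a local image can be converted with a small generic helper (standard library only, not part of logicore):

```python
import base64
import mimetypes

def to_data_url(path: str) -> str:
    """Encode a local image file as an inline data: URL."""
    mime = mimetypes.guess_type(path)[0] or "image/png"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

The returned string can then be passed directly as the image_url value.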

Model type auto-detection

If model_type is omitted, AzureProvider applies the following heuristics in order:
1. Endpoint ends in /v1 AND is not openai.azure.com  →  "inference"
2. "anthropic" in endpoint OR "claude" in model_name  →  "anthropic"
3. "openai.azure.com" in endpoint OR "gpt" in model_name  →  "openai"
4. "/models" or "inference" in endpoint  →  "inference"
5. Default  →  "openai"
Set model_type explicitly to avoid ambiguity, especially for Foundry endpoints that may match multiple patterns.
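The ordered rules above can be sketched as a standalone function (a simplified illustration of the heuristics, not the library's actual implementation):

```python
def detect_model_type(endpoint: str, model_name: str) -> str:
    """Apply the documented auto-detection heuristics in order; first match wins."""
    ep = endpoint.lower()
    name = model_name.lower()
    if ep.rstrip("/").endswith("/v1") and "openai.azure.com" not in ep:
        return "inference"
    if "anthropic" in ep or "claude" in name:
        return "anthropic"
    if "openai.azure.com" in ep or "gpt" in name:
        return "openai"
    if "/models" in ep or "inference" in ep:
        return "inference"
    return "openai"
```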

Troubleshooting

Missing API key
Neither api_key, AZURE_API_KEY, nor AZURE_OPENAI_API_KEY is set. Retrieve the key from your Azure resource under Keys and Endpoint in the Azure Portal.

Missing endpoint
Neither endpoint, AZURE_ENDPOINT, nor AZURE_OPENAI_ENDPOINT is set. Copy the endpoint URL from the resource overview page in the Azure Portal.

Deployment not found
model_name must match the deployment name you created, not the underlying model name. In Azure OpenAI Studio, navigate to Deployments and copy the exact deployment name.

Missing anthropic package
The anthropic package is required for Anthropic-type deployments. Install it with:
pip install anthropic
Then verify your endpoint matches the format shown in the Azure AI Foundry portal.

Authentication failures
The key may belong to a different Azure resource, or the resource's access control settings may require an Azure AD token instead of a key. Verify that key-based authentication is enabled on the resource. Note that extra **kwargs are stored internally and not forwarded to the SDK client, so azure_ad_token_provider cannot currently be supplied that way.

Wrong backend detected
If the provider initializes the wrong client, set model_type explicitly: model_type="openai", model_type="anthropic", or model_type="inference". This bypasses heuristic detection entirely.
