LLM Providers

Microsoft Agent Framework supports multiple LLM providers, giving you the flexibility to choose the best model for your use case. Whether you’re using Azure OpenAI, OpenAI, Anthropic, AWS Bedrock, or local models with Ollama, the framework provides a consistent API across all providers.

Supported Providers

Azure OpenAI

Enterprise-grade OpenAI models hosted on Azure

OpenAI

Direct integration with OpenAI’s API

Anthropic

Claude models via Anthropic’s API

Other Providers

AWS Bedrock, Ollama, GitHub Copilot, and more

Choosing a Provider

When selecting a provider, consider:
  • Enterprise requirements: Azure OpenAI offers enterprise-grade SLAs, compliance, and regional deployment
  • Model capabilities: Different providers offer different models with varying capabilities (reasoning, vision, function calling)
  • Cost: Pricing varies significantly between providers and models
  • Latency: Geographic proximity and infrastructure affect response times
  • Privacy: Data handling, retention, and residency policies differ between providers
  • Local development: Ollama enables local development without API costs

Common Patterns

All providers follow a consistent pattern in the framework:

Chat Clients

Most providers offer a chat client for direct chat completion:
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential

client = AzureOpenAIChatClient(credential=AzureCliCredential())
agent = client.as_agent(
    instructions="You are a helpful assistant.",
    tools=[...]
)

Assistant/Agent Providers

Some providers offer persistent agents with managed state:
from agent_framework.azure import AzureAIProjectAgentProvider
from azure.identity.aio import AzureCliCredential

async with (
    AzureCliCredential() as credential,
    AzureAIProjectAgentProvider(credential=credential) as provider
):
    agent = await provider.create_agent(
        name="MyAgent",
        instructions="You are a helpful assistant.",
        tools=[...]
    )

Authentication

Authentication varies by provider:
| Provider | Python | .NET |
| --- | --- | --- |
| Azure OpenAI | `AzureCliCredential()`, API keys | `DefaultAzureCredential`, API keys |
| OpenAI | `OPENAI_API_KEY` env var | API key in constructor |
| Anthropic | `ANTHROPIC_API_KEY` env var | API key in constructor |
| AWS Bedrock | AWS credentials (access key/secret) | AWS credentials |
| Ollama | No authentication (local) | No authentication (local) |

For production deployments, always use managed identities or secure credential management rather than API keys in code.

Configuration

Providers can be configured via:
  1. Environment variables - Most providers read from environment variables by default
  2. Constructor parameters - Explicit configuration overrides environment variables
  3. Settings classes - Pydantic settings for structured configuration (Python)

Environment Variables

Common environment variables:
# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o
AZURE_OPENAI_API_KEY=your-key  # Optional, use managed identity instead

# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_CHAT_MODEL_ID=gpt-4o

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL_ID=claude-sonnet-4-20250514

# AWS Bedrock
BEDROCK_CHAT_MODEL_ID=anthropic.claude-3-sonnet-20240229-v1:0
BEDROCK_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret

# Ollama
OLLAMA_ENDPOINT=http://localhost:11434
OLLAMA_MODEL_ID=llama3.2
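As a quick sanity check at startup, you can verify that the variables a provider needs are actually set. The variable names below come from the list above; `check_provider_config` itself is illustrative, not part of the framework:

```python
import os

# Required environment variables per provider (subset, from the list above)
REQUIRED_VARS = {
    "azure_openai": ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_DEPLOYMENT_NAME"],
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "ollama": [],  # local, no credentials needed
}

def check_provider_config(provider: str) -> list[str]:
    """Return the names of any required environment variables that are unset."""
    return [v for v in REQUIRED_VARS[provider] if not os.environ.get(v)]

missing = check_provider_config("openai")
if missing:
    print("Missing configuration:", ", ".join(missing))
```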

Streaming Support

All providers support streaming responses:
async for chunk in agent.run(query, stream=True):
    if chunk.text:
        print(chunk.text, end="", flush=True)

Function Calling

Most providers support function calling (tool use):
from agent_framework import tool
from typing import Annotated

@tool(approval_mode="always_require")  # Require human approval
def get_weather(location: Annotated[str, "City name"]) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

agent = client.as_agent(
    instructions="You are a weather assistant.",
    tools=[get_weather]  # Pass tools to agent
)
Not all models support function calling. Check the provider documentation for model capabilities.
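Conceptually, a tool decorator reads the function's signature and docstring to build the schema the model sees. The stdlib-only sketch below shows that extraction for the `get_weather` example; `describe_tool` is a simplification, not the framework's actual implementation:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints

def get_weather(location: Annotated[str, "City name"]) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

def describe_tool(fn) -> dict:
    """Build a JSON-schema-like description from a function's signature,
    roughly what a tool decorator does before tools are sent to the model."""
    hints = get_type_hints(fn, include_extras=True)
    params = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated:
            base, desc = get_args(hint)[0], get_args(hint)[1]
        else:
            base, desc = hint, ""
        params[name] = {"type": base.__name__, "description": desc}
    return {"name": fn.__name__, "description": inspect.getdoc(fn), "parameters": params}

print(describe_tool(get_weather))
```

The `Annotated` metadata (here `"City name"`) becomes the parameter description the model uses to decide how to call the tool.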

Model Capabilities

| Feature | Azure OpenAI | OpenAI | Anthropic | Bedrock | Ollama |
| --- | --- | --- | --- | --- | --- |
| Function Calling | ✅ | ✅ | ✅ | ✅ | ⚠️ Limited |
| Vision | ✅ | ✅ | ✅ | ✅ | ⚠️ Some models |
| Streaming | ✅ | ✅ | ✅ | ✅ | ✅ |
| Reasoning | ✅ o1/o3 | ✅ o1/o3 | ✅ Extended thinking | ⚠️ Model dependent | ❌ |
| Code Interpreter | ✅ Assistants | ✅ Assistants | ❌ | ❌ | ❌ |
| File Search | ✅ Assistants | ✅ Assistants | ❌ | ❌ | ❌ |

Next Steps

Explore the provider-specific documentation for detailed setup instructions, authentication patterns, and advanced features:

Azure OpenAI

Enterprise-grade models with Azure integration

OpenAI

Latest GPT models directly from OpenAI

Anthropic

Claude models with advanced capabilities

Other Providers

Bedrock, Ollama, GitHub Copilot, and more
