Agent API Reference
Core agent classes for building AI agents with the Microsoft Agent Framework.
SupportsAgentRun
Protocol defining the interface that all agents must implement.
from agent_framework import SupportsAgentRun
Protocol Attributes
id: Unique identifier for the agent.
name: Human-readable name of the agent.
description: Description of the agent’s purpose and capabilities.
Protocol Methods
run()
Get a response from the agent.
await agent.run(
    messages="Hello, how are you?",
    stream=False,
    session=None,
    **kwargs
)
messages: The message(s) to send to the agent. Can be a string, a Message, a list of Messages, or None.
stream: Whether to stream the response. When True, returns a ResponseStream.
session: The conversation session associated with the message(s).
**kwargs: Additional keyword arguments passed to the agent.
Returns
Awaitable[AgentResponse] | ResponseStream[AgentResponseUpdate, AgentResponse]
When stream=False: an AgentResponse with the final result. When stream=True: a ResponseStream of AgentResponseUpdate items, with get_final_response() for the final AgentResponse.
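The dual return shape can be sketched without the framework. Everything below (FakeResponse, FakeResponseStream, the standalone run function) is a hypothetical stand-in for illustration, not the real agent_framework API:

```python
import asyncio

# Stand-ins illustrating the non-streaming vs. streaming return shapes.
class FakeResponse:
    def __init__(self, text):
        self.text = text

class FakeResponseStream:
    """Async-iterable of text updates, plus get_final_response()."""
    def __init__(self, updates):
        self._updates = updates

    def __aiter__(self):
        self._iter = iter(self._updates)
        return self

    async def __anext__(self):
        try:
            return next(self._iter)
        except StopIteration:
            raise StopAsyncIteration

    async def get_final_response(self):
        # Final response assembled from all updates.
        return FakeResponse("".join(self._updates))

def run(message, *, stream=False):
    if stream:
        # stream=True: return the stream object directly (no await).
        return FakeResponseStream(["Hel", "lo"])
    # stream=False: return an awaitable that resolves to the response.
    async def _respond():
        return FakeResponse("Hello")
    return _respond()

async def main():
    # Non-streaming: await the result for the final response.
    response = await run("hi")
    print(response.text)

    # Streaming: iterate updates, then fetch the final response.
    stream = run("hi", stream=True)
    async for update in stream:
        print(update, end="")
    final = await stream.get_final_response()
    print()
    print(final.text)

asyncio.run(main())
```

Note that in the streaming case the caller does not await run() itself; it iterates the returned stream and awaits get_final_response() at the end.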
create_session()
Create a new conversation session.
session = agent.create_session()
**kwargs: Additional keyword arguments for session creation.
Returns: A new AgentSession instance.
get_session()
Get or create a session for a service-managed session ID.
session = agent.get_session(service_session_id="session-123")
service_session_id: The service-managed session ID.
**kwargs: Additional keyword arguments.
Returns: An AgentSession instance with service_session_id set.
Example: Custom Agent Implementation
from agent_framework import SupportsAgentRun, AgentResponse, AgentSession

class CustomAgent:
    def __init__(self):
        self.id = "custom-agent-001"
        self.name = "Custom Agent"
        self.description = "A fully custom agent implementation"

    async def run(self, messages=None, *, stream=False, session=None, **kwargs):
        if stream:
            async def _stream():
                from agent_framework import AgentResponseUpdate
                yield AgentResponseUpdate()
            return _stream()
        else:
            return AgentResponse(messages=[], response_id="custom-response")

    def create_session(self, **kwargs):
        return AgentSession(**kwargs)

    def get_session(self, *, service_session_id, **kwargs):
        return AgentSession(service_session_id=service_session_id, **kwargs)

# Verify protocol compatibility
assert isinstance(CustomAgent(), SupportsAgentRun)
BaseAgent
Base class for all Agent Framework agents without middleware or telemetry layers.
from agent_framework import BaseAgent
Constructor
class SimpleAgent(BaseAgent):
    async def run(self, messages=None, *, stream=False, session=None, **kwargs):
        # Implementation required
        pass

agent = SimpleAgent(
    id="agent-123",
    name="my-agent",
    description="A simple agent",
    context_providers=[history_provider],
    middleware=[logging_middleware]
)
id: The unique identifier of the agent. If None, a UUID will be generated.
name: The name of the agent.
description: The description of the agent.
context_providers (Sequence[BaseContextProvider] | None): Context providers to include during agent invocation.
middleware (Sequence[MiddlewareTypes] | None): List of middleware to intercept agent operations.
additional_properties (MutableMapping[str, Any] | None): Additional properties set on the agent.
**kwargs: Additional keyword arguments merged into additional_properties.
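The id-generation and kwargs-merging behavior described above can be sketched in plain Python. SketchAgent is a hypothetical illustration of the documented behavior, not the real BaseAgent implementation:

```python
import uuid

# Hypothetical sketch: unknown keyword arguments are merged into
# additional_properties, and a UUID id is generated when none is given.
class SketchAgent:
    def __init__(self, *, id=None, name=None, description=None,
                 additional_properties=None, **kwargs):
        self.id = id or str(uuid.uuid4())
        self.name = name
        self.description = description
        # Explicit additional_properties and loose kwargs end up merged.
        self.additional_properties = {**(additional_properties or {}), **kwargs}

agent = SketchAgent(name="my-agent", team="research", owner="alice")
print(agent.id)                     # a generated UUID string
print(agent.additional_properties)  # {'team': 'research', 'owner': 'alice'}
```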
Methods
as_tool()
Convert the agent to a FunctionTool that can be used by other agents.
tool = agent.as_tool(
    name="research_agent",
    description="Performs research tasks",
    propagate_session=True
)
name: The name for the tool. If None, uses the agent’s name.
description: The description for the tool. If None, uses the agent’s description.
arg_name: The name of the function argument.
arg_description: The description for the function argument.
stream_callback (Callable[[AgentResponseUpdate], None | Awaitable[None]] | None): Optional callback for streaming responses.
propagate_session: If True, forwards the parent agent’s session to the sub-agent’s run() call.
Returns: A FunctionTool that wraps this agent.
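Conceptually, wrapping an agent as a tool produces a named, described function that forwards one argument to the agent's run() call. The sketch below is a hypothetical illustration (EchoAgent, as_tool_sketch, and the arg_name handling are all stand-ins, not the real FunctionTool):

```python
import asyncio
from types import SimpleNamespace

# A trivial agent whose run() just echoes its input.
class EchoAgent:
    name = "echo"
    description = "Echoes its input"

    async def run(self, messages, *, session=None, **kwargs):
        return SimpleNamespace(text=f"echo: {messages}")

def as_tool_sketch(agent, *, name=None, description=None, arg_name="task"):
    """Wrap an agent as an async callable, falling back to the agent's
    own name/description when none are given."""
    async def tool_fn(**kwargs):
        response = await agent.run(kwargs[arg_name])
        return response.text

    tool_fn.__name__ = name or agent.name
    tool_fn.__doc__ = description or agent.description
    return tool_fn

tool = as_tool_sketch(EchoAgent(), description="Performs echo tasks")
print(asyncio.run(tool(task="hello")))  # echo: hello
```

This mirrors the fallback behavior documented above: the tool's name and description default to the agent's own when not supplied.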
Agent
The main agent class for building AI agents with chat clients, tools, and instructions.
from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient
Constructor
client = OpenAIChatClient(model_id="gpt-4")
agent = Agent(
    client=client,
    instructions="You are a helpful assistant.",
    name="assistant",
    description="A helpful assistant",
    tools=[get_weather],
    default_options={
        "temperature": 0.7,
        "max_tokens": 500
    }
)
client (SupportsChatGetResponse, required): The chat client to use for the agent.
instructions: Instructions for the agent. These are sent to the chat client as a system message.
id: The unique identifier for the agent. Will be created automatically if not provided.
name: The name of the agent.
description: A brief description of the agent’s purpose.
tools (ToolTypes | Callable | Sequence[ToolTypes | Callable] | None): The tools available to the agent. Can be FunctionTool, callable functions, or tool definitions.
default_options: Default chat options including temperature, max_tokens, model_id, etc. Can be overridden at runtime.
context_providers (Sequence[BaseContextProvider] | None): Context providers to include during agent invocation.
middleware (Sequence[MiddlewareTypes] | None): List of middleware to intercept agent and function invocations.
**kwargs: Additional keyword arguments stored as additional_properties.
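The middleware pattern named above can be illustrated generically: each middleware wraps the next handler and can act before and after the call. The signature below (request, next_handler) is a common convention used here for illustration only, not agent_framework's actual MiddlewareTypes contract:

```python
import asyncio

# Example middleware: logs before and after the wrapped call.
async def timing_middleware(request, next_handler):
    print(f"-> {request}")
    result = await next_handler(request)
    print(f"<- {result}")
    return result

# The innermost handler, standing in for the actual agent invocation.
async def final_handler(request):
    return request.upper()

async def invoke(request, middlewares, handler):
    """Chain middlewares so middlewares[0] is outermost."""
    async def call(i, req):
        if i == len(middlewares):
            return await handler(req)
        return await middlewares[i](req, lambda r: call(i + 1, r))
    return await call(0, request)

print(asyncio.run(invoke("hello", [timing_middleware], final_handler)))
```

Because each middleware receives the next handler as an argument, it can short-circuit, retry, or transform both the request and the result.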
run()
Run the agent with given messages and options.
# Non-streaming
response = await agent.run(
    "What's the weather in Seattle?",
    session=session,
    options={"temperature": 0.5}
)

# Streaming
stream = agent.run(
    "Tell me a story",
    stream=True
)
async for update in stream:
    print(update.text, end="")
final = await stream.get_final_response()
messages: The messages to process. Can be a string, a Message, or a list of Messages.
stream: Whether to stream the response.
session: The session to use for the agent. If None, the run will be stateless.
tools (ToolTypes | Callable | Sequence[ToolTypes | Callable] | None): Tools for this specific run (merged with default tools).
options: Runtime chat options including temperature, max_tokens, response_format, etc.
**kwargs: Additional keyword arguments passed to functions that are called.
Returns
Awaitable[AgentResponse] | ResponseStream[AgentResponseUpdate, AgentResponse]
When stream=False: an Awaitable[AgentResponse] containing the agent’s response. When stream=True: a ResponseStream of AgentResponseUpdate items, with get_final_response() for the final AgentResponse.
Example: Basic Usage
from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient

client = OpenAIChatClient(model_id="gpt-4")
agent = Agent(
    client=client,
    name="assistant",
    description="A helpful assistant"
)
response = await agent.run("Hello, how are you?")
print(response.text)
Example: Tools and Streaming
from agent_framework import Agent, tool

@tool
def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"The weather in {location} is sunny."

agent = Agent(
    client=client,
    name="weather-agent",
    instructions="You are a weather assistant.",
    tools=[get_weather],
    default_options={
        "temperature": 0.7,
        "max_tokens": 500
    }
)

# Use streaming
stream = agent.run("What's the weather in Paris?", stream=True)
async for update in stream:
    print(update.text, end="")
final = await stream.get_final_response()
Example: Typed Options for IDE Autocomplete
from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient, OpenAIChatOptions

client = OpenAIChatClient(model_id="gpt-4o")
agent: Agent[OpenAIChatOptions] = Agent(
    client=client,
    name="reasoning-agent",
    instructions="You are a reasoning assistant.",
    default_options={
        "temperature": 0.7,
        "max_tokens": 500,
        "reasoning_effort": "high"  # OpenAI-specific, IDE autocompletes!
    }
)

# Runtime options also get autocomplete
response = await agent.run(
    "What is 25 * 47?",
    options={
        "temperature": 0.0,
        "logprobs": True  # IDE autocompletes OpenAI-specific options
    }
)
Context Manager Support
# Agent supports async context manager for proper cleanup
async with agent:
    response = await agent.run("Hello")
# MCP tools and clients are automatically cleaned up
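The cleanup guarantee above follows the standard async context manager protocol. The self-contained sketch below (SketchAgent is hypothetical, not the real Agent) shows why resources are released even if the body raises:

```python
import asyncio

# Sketch of the async context manager pattern: resources acquired by
# the agent (e.g. tool client connections) are released in __aexit__.
class SketchAgent:
    def __init__(self):
        self.closed = False

    async def __aenter__(self):
        # Acquire resources here; return the agent for `as` binding.
        return self

    async def run(self, message):
        return f"reply to {message!r}"

    async def __aexit__(self, exc_type, exc, tb):
        # Cleanup runs whether the body completed or raised.
        self.closed = True
        return False  # do not suppress exceptions

async def main():
    async with SketchAgent() as agent:
        print(await agent.run("Hello"))
    print("closed:", agent.closed)

asyncio.run(main())
```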