The OpenAI Agents SDK is OpenAI's official framework for building autonomous agents. It offers an async-first architecture, native tool support, and built-in Model Context Protocol (MCP) integration for access to external tools.
The Agent class defines an autonomous agent with instructions, tools, and optional model configuration.
```python
from agents import Agent, function_tool

@function_tool
def send_email(to: str, subject: str, body: str):
    """Send an email to a recipient."""
    print(f"Sending email to {to}")
    return {"status": "success"}

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant",
    tools=[send_email],
)
```
```python
from agents import Agent, function_tool

@function_tool
def calculate_sum(a: int, b: int) -> int:
    """Add two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        Sum of a and b
    """
    return a + b

agent = Agent(
    name="Calculator",
    tools=[calculate_sum],
    instructions="Help users with math",
)
```
```python
import os

import resend
from agents import function_tool

resend.api_key = os.getenv("RESEND_API_KEY")

@function_tool
def send_email(to: str, subject: str, body: str):
    """Send an email using Resend API.

    Args:
        to: Recipient email address
        subject: Subject line
        body: HTML content of the email

    Returns:
        Dictionary with status and message ID
    """
    params = {
        "from": "[email protected]",
        "to": [to],
        "subject": subject,
        "html": body,
    }
    try:
        response = resend.Emails.send(params)
        return {"status": "success", "message_id": response.get("id")}
    except Exception as e:
        return {"status": "error", "message": str(e)}
```
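The try/except pattern above generalizes to any tool that calls an external service: return structured success/error data instead of raising, so the agent can reason about failures. A minimal self-contained sketch of that pattern (the `safe_tool_call` wrapper and `flaky_lookup` helper are hypothetical, not part of the SDK):

```python
def safe_tool_call(fn, *args, **kwargs):
    """Wrap a tool body so failures become structured results, not exceptions."""
    try:
        return {"status": "success", "result": fn(*args, **kwargs)}
    except Exception as e:
        return {"status": "error", "message": str(e)}

# Hypothetical lookup that fails on unknown keys
def flaky_lookup(key: str) -> str:
    table = {"alice": "alice@example.com"}
    if key not in table:
        raise KeyError(f"no entry for {key}")
    return table[key]

print(safe_tool_call(flaky_lookup, "alice"))  # status: success
print(safe_tool_call(flaky_lookup, "bob"))    # status: error
```

Because the error comes back as data, the model can retry with different arguments or report the failure to the user instead of crashing the run.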
Run multiple agent queries with proper parameter handling:
```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def run_query(agent: Agent, message: str):
    print("\n" + "-" * 40)
    print(f"Query: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(f"Result: {result.final_output}")

async def main():
    async with MCPServerStdio(...) as server:
        agent = Agent(
            name="GitHub Assistant",
            mcp_servers=[server],
            instructions="""
            Numeric parameters such as per_page must be passed as
            numbers, without quotes.
            """,
        )

        # Run multiple queries
        queries = [
            "List the most recent issue",
            "Show the latest commit",
            "Get repository stats",
        ]
        for query in queries:
            await run_query(agent, query)

if __name__ == "__main__":
    asyncio.run(main())
```
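The loop above awaits each query in turn; when the queries are independent, `asyncio.gather` can run them concurrently. A self-contained sketch with a stubbed runner (`fake_run` is a stand-in for `Runner.run`, not the SDK API):

```python
import asyncio

async def fake_run(query: str) -> str:
    """Stand-in for Runner.run: pretend each query does a little I/O."""
    await asyncio.sleep(0.01)
    return f"answer to: {query}"

async def main() -> list[str]:
    queries = ["List the most recent issue", "Show the latest commit"]
    # Launch all queries at once instead of awaiting them one by one;
    # gather preserves the input order in its results
    return await asyncio.gather(*(fake_run(q) for q in queries))

results = asyncio.run(main())
print(results)
```

Keep the sequential loop when later queries depend on earlier answers; switch to `gather` only for independent requests.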
```
# .env file
OPENAI_API_KEY=your_openai_api_key

# Or for alternative providers
NEBIUS_API_KEY=your_nebius_api_key

# For MCP servers
GITHUB_PERSONAL_ACCESS_TOKEN=your_github_token
```
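Many projects use python-dotenv's `load_dotenv()` to populate `os.environ` from this file at startup. A self-contained sketch of what that loading amounts to, using placeholder values and no external dependency:

```python
import os

# Simulate what python-dotenv's load_dotenv() does: read KEY=value lines
# and put them into the process environment (values here are placeholders)
env_file = (
    "OPENAI_API_KEY=your_openai_api_key\n"
    "GITHUB_PERSONAL_ACCESS_TOKEN=your_github_token\n"
)
for line in env_file.splitlines():
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        os.environ[key] = value

print(os.getenv("GITHUB_PERSONAL_ACCESS_TOKEN"))  # your_github_token
```

Once the environment is populated, the SDK and your tools read keys with `os.getenv`, as in the Resend example above.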
```python
from agents import set_tracing_disabled

set_tracing_disabled(disabled=True)
```
Disabling tracing reduces noise in production logs.
Clear Tool Docstrings
The agent reads docstrings to understand tool usage:
```python
@function_tool
def search(query: str, max_results: int = 10) -> list:
    """
    Search for information.

    Args:
        query: Search query string
        max_results: Maximum number of results (default 10)

    Returns:
        List of search results
    """
    ...
```
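Under the hood, `function_tool` turns the signature and docstring into a JSON-schema-style tool definition the model can read. A rough self-contained sketch of that idea (this is an illustration, not the SDK's actual implementation; `tool_schema` and `PY_TO_JSON` are hypothetical):

```python
import inspect

# Map Python annotations to JSON Schema type names
PY_TO_JSON = {str: "string", int: "integer", float: "number",
              bool: "boolean", list: "array", dict: "object"}

def tool_schema(fn):
    """Build a minimal JSON-schema-style description from hints and docstring."""
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is not inspect.Parameter.empty:
            props[name]["default"] = param.default
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip().splitlines()[0],
        "parameters": {"type": "object", "properties": props},
    }

def search(query: str, max_results: int = 10) -> list:
    """Search for information."""
    ...

print(tool_schema(search))
```

This is why clear docstrings and accurate annotations matter: they are the only description of the tool the model ever sees.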
Handle MCP Lifecycle
Use async context managers for MCP servers:
```python
async with MCPServerStdio(...) as server:
    # Server is connected
    agent = Agent(name="Assistant", mcp_servers=[server])
    result = await Runner.run(agent, "query")
# Server automatically disconnected
```
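The connect/disconnect guarantee comes from Python's async context manager protocol: `__aexit__` runs even if the body raises. A self-contained sketch with a hypothetical `FakeServer` standing in for `MCPServerStdio`:

```python
import asyncio

class FakeServer:
    """Stand-in for an MCP server: just tracks connection state."""
    def __init__(self):
        self.connected = False

    async def __aenter__(self):
        self.connected = True       # connect on entry
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.connected = False      # always disconnect, even on error
        return False                # do not swallow exceptions

async def main():
    async with FakeServer() as server:
        inside = server.connected   # usable inside the block
    return inside, server.connected

inside, after = asyncio.run(main())
print(inside, after)  # True False
```

Skipping the context manager (or calling a connect method manually without a matching cleanup) risks leaking the server subprocess when a query raises mid-run.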
Type Hints for Parameters
Include clear type hints in tool parameters:
```python
@function_tool
def process_data(
    data: dict,     # Clear type
    count: int,     # Not string!
    enabled: bool,
) -> str:
    ...
```
Note: Numeric parameters should be typed as int or float, not strings.
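One reason the types matter: if a model passes `"5"` where the tool expects `5`, arithmetic silently misbehaves. Declared hints make it possible to coerce incoming arguments before calling the tool. A self-contained sketch (`coerce_args` and `list_items` are hypothetical helpers, not SDK API):

```python
from typing import get_type_hints

def coerce_args(fn, raw: dict) -> dict:
    """Coerce string-ish values from the model into the declared Python types."""
    hints = get_type_hints(fn)
    out = {}
    for name, value in raw.items():
        target = hints.get(name)
        if target is bool and isinstance(value, str):
            out[name] = value.lower() in ("true", "1", "yes")
        elif target in (int, float) and isinstance(value, str):
            out[name] = target(value)   # "30" -> 30
        else:
            out[name] = value
    return out

def list_items(per_page: int, verbose: bool) -> str:
    return f"{per_page} items, verbose={verbose}"

args = coerce_args(list_items, {"per_page": "30", "verbose": "true"})
print(list_items(**args))  # 30 items, verbose=True
```

Even with a safety net like this, precise hints plus instructions like the `per_page` note above keep the model producing correctly typed arguments in the first place.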