Let’s create a simple assistant agent that responds to a task.
1
Create a Python file
Create a new file called hello_agent.py:
hello_agent.py
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create a model client
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create an assistant agent
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
    )

    # Run the agent with a task
    result = await agent.run(task="Say 'Hello World!'")
    print(result)

    # Clean up
    await model_client.close()


# Run the async function
asyncio.run(main())
```
Now let’s make it more interesting by giving the agent access to a tool. We’ll create a weather agent that can look up weather information.
1
Define a tool function
Create a new file called weather_agent.py:
weather_agent.py
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Define a tool as a simple Python function
async def get_weather(city: str) -> str:
    """Get the current weather for a given city.

    Args:
        city: The name of the city to get weather for

    Returns:
        A string describing the weather
    """
    # In a real application, you'd call a weather API here
    return f"The weather in {city} is 73 degrees and sunny."


async def main() -> None:
    # Create a model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key="sk-..."  # Optional if OPENAI_API_KEY is set
    )

    # Create an agent with the weather tool
    agent = AssistantAgent(
        name="weather_agent",
        model_client=model_client,
        tools=[get_weather],  # Pass the function directly
        system_message="You are a helpful weather assistant.",
        reflect_on_tool_use=True,  # Agent reflects on tool results
    )

    # Run with streaming output to the console
    await Console(
        agent.run_stream(task="What's the weather in New York?")
    )

    # Clean up
    await model_client.close()


asyncio.run(main())
```
2
Run the weather agent
Execute the script:
```shell
python weather_agent.py
```
You’ll see the agent:
Receive your question
Call the get_weather tool with city="New York"
Get the tool result
Formulate a natural language response
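Conceptually, that request–tool–response loop can be sketched in plain Python. The stubbed `fake_model` below stands in for the LLM's decisions (it is illustrative, not AutoGen's actual implementation, and the tool is written as a synchronous function for simplicity):

```python
# Simplified sketch of the tool-call loop an assistant agent performs.
# "fake_model" is a stub; in AutoGen a real LLM decides when to call tools.

def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"The weather in {city} is 73 degrees and sunny."

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    # Turn 1: no tool result in the history yet, so "decide" to call the tool.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool_call": {"name": "get_weather", "arguments": {"city": "New York"}}}
    # Turn 2: turn the tool result into a natural-language answer.
    return {"content": tool_msgs[-1]["content"].replace("The weather", "The current weather")}

def run(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:
            # Execute the requested tool and feed the result back to the model.
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]

print(run("What's the weather in New York?"))
# → The current weather in New York is 73 degrees and sunny.
```

The real agent does the same dance, except the model genuinely chooses which tool to call and how to phrase the final reply.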
Output:
```
---------- user ----------
What's the weather in New York?
---------- weather_agent ----------
[FunctionCall(id='call_...', arguments='{"city":"New York"}', name='get_weather')]
---------- weather_agent ----------
[FunctionExecutionResult(content='The weather in New York is 73 degrees and sunny.', call_id='call_...')]
---------- weather_agent ----------
The current weather in New York is 73 degrees and sunny.
```
AutoGen automatically:
Converts your Python function into a tool schema
Passes it to the LLM via function calling
Executes the function when the model requests it
Returns results back to the model for final response generation
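The schema-conversion step can be illustrated with nothing but the standard library. The sketch below mirrors the idea of deriving an OpenAI-style function-calling schema from a function's signature and docstring; it is a simplified illustration, not AutoGen's actual conversion code:

```python
import inspect
import typing

# Map Python annotations to JSON-schema types (simplified; real converters
# also handle optionals, lists, enums, and per-parameter descriptions).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_tool_schema(func) -> dict:
    """Derive an OpenAI-style function-calling schema from a Python function."""
    sig = inspect.signature(func)
    hints = typing.get_type_hints(func)
    properties = {}
    required = []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        # Parameters without defaults are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

async def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"The weather in {city} is 73 degrees and sunny."

schema = function_to_tool_schema(get_weather)
print(schema["name"])                  # → get_weather
print(schema["parameters"]["required"])  # → ['city']
```

This is why type hints and docstrings matter when you write tools: they become the schema the model sees when deciding whether and how to call your function.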