
Quickstart

This guide will get you up and running with AutoGen in under 5 minutes. You’ll create a simple agent that can use tools to answer questions.

Prerequisites

Before you begin, ensure you have:
  • Python 3.10 or later installed
  • An OpenAI API key (or another supported LLM provider)
Don’t have an OpenAI API key? Get one at platform.openai.com. You can also use other providers like Azure OpenAI, Anthropic, or local models.

Installation

Step 1: Install AutoGen packages

Install both AgentChat and the OpenAI extension:
pip install -U "autogen-agentchat" "autogen-ext[openai]"
This installs:
  • autogen-agentchat: High-level API for building agents
  • autogen-ext[openai]: OpenAI model client extension
Step 2: Set your API key

Export your OpenAI API key as an environment variable:
export OPENAI_API_KEY="sk-..."
Step 3: Verify installation

Check that packages are installed correctly:
python -c "import autogen_agentchat; import autogen_ext.models.openai; print('AutoGen installed successfully!')"

Hello World: Your First Agent

Let’s create a simple assistant agent that responds to a task.
Step 1: Create a Python file

Create a new file called hello_agent.py:
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    # Create a model client
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    
    # Create an assistant agent
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client
    )
    
    # Run the agent with a task
    result = await agent.run(task="Say 'Hello World!'")
    print(result)
    
    # Clean up
    await model_client.close()

# Run the async function
asyncio.run(main())
Step 2: Run your agent

Execute the script:
python hello_agent.py
You should see output similar to:
TaskResult(messages=[TextMessage(source='assistant', content='Hello World!')], stop_reason='Text message returned')
This example uses OpenAI’s gpt-4o model. You can change it to gpt-4o-mini for faster, cheaper responses.

Adding Tools to Your Agent

Now let’s make it more interesting by giving the agent access to a tool. We’ll create a weather agent that can look up weather information.
Step 1: Define a tool function

Create a new file called weather_agent.py:
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Define a tool as a simple Python function
async def get_weather(city: str) -> str:
    """Get the current weather for a given city.
    
    Args:
        city: The name of the city to get weather for
        
    Returns:
        A string describing the weather
    """
    # In a real application, you'd call a weather API here
    return f"The weather in {city} is 73 degrees and sunny."

async def main() -> None:
    # Create a model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key="sk-..."  # Optional if OPENAI_API_KEY is set
    )
    
    # Create an agent with the weather tool
    agent = AssistantAgent(
        name="weather_agent",
        model_client=model_client,
        tools=[get_weather],  # Pass the function directly
        system_message="You are a helpful weather assistant.",
        reflect_on_tool_use=True,  # Agent reflects on tool results
    )
    
    # Run with streaming output to the console
    await Console(
        agent.run_stream(task="What's the weather in New York?")
    )
    
    # Clean up
    await model_client.close()

asyncio.run(main())
Step 2: Run the weather agent

Execute the script:
python weather_agent.py
You’ll see the agent:
  1. Receive your question
  2. Call the get_weather tool with city="New York"
  3. Get the tool result
  4. Formulate a natural language response
Output:
---------- user ----------
What's the weather in New York?
---------- weather_agent ----------
[FunctionCall(id='call_...', arguments='{"city":"New York"}', name='get_weather')]
---------- weather_agent ----------
[FunctionExecutionResult(content='The weather in New York is 73 degrees and sunny.', call_id='call_...')]
---------- weather_agent ----------
The current weather in New York is 73 degrees and sunny.
AutoGen automatically:
  • Converts your Python function into a tool schema
  • Passes it to the LLM via function calling
  • Executes the function when the model requests it
  • Returns results back to the model for final response generation
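Conceptually, the first step—turning a typed Python function into a tool schema—can be sketched in plain Python. This is a simplified illustration of the idea, not AutoGen's actual implementation:

```python
import inspect
import typing

def build_tool_schema(func) -> dict:
    """Build a minimal JSON-schema-style tool description from a function's
    signature and docstring (simplified sketch, not AutoGen's real logic)."""
    sig = inspect.signature(func)
    hints = typing.get_type_hints(func)
    # Map a few common Python types to JSON schema type names
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {}
    for name in sig.parameters:
        py_type = hints.get(name, str)
        properties[name] = {"type": type_map.get(py_type, "string")}
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip().split("\n")[0],
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": [n for n, p in sig.parameters.items()
                         if p.default is inspect.Parameter.empty],
        },
    }

async def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"The weather in {city} is 73 degrees and sunny."

schema = build_tool_schema(get_weather)
print(schema["name"])                       # get_weather
print(schema["parameters"]["properties"])   # {'city': {'type': 'string'}}
```

The docstring and type hints you write on the tool function are what the model sees, so descriptive names and docstrings directly improve tool-calling accuracy.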

Understanding the Code

Let’s break down the key components:

Model Client

model_client = OpenAIChatCompletionClient(model="gpt-4o")
The model client connects to your LLM provider. AutoGen supports many providers through extensions:
  • OpenAIChatCompletionClient for OpenAI
  • AzureOpenAIChatCompletionClient for Azure OpenAI
  • AnthropicChatCompletionClient for Anthropic Claude (requires the autogen-ext[anthropic] extra)
  • And more…

Assistant Agent

agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You are a helpful weather assistant.",
    reflect_on_tool_use=True,
)
The AssistantAgent is a pre-configured agent type that:
  • Uses an LLM to process messages
  • Can call tools when needed
  • Reflects on tool results when reflect_on_tool_use=True
  • Follows instructions in the system_message

Running the Agent

result = await agent.run(task="Your task here")
The run() method executes the agent and returns a TaskResult with:
  • All messages exchanged (result.messages)
  • The final stop reason (result.stop_reason)
  • Per-message token usage (each message’s models_usage)
For streaming output, use run_stream() with the Console helper:
await Console(agent.run_stream(task="Your task"))

Next Steps

Congratulations! You’ve created your first AutoGen agent. Here’s what to explore next:

Core Concepts

Understand agents, teams, tools, and the architecture

Multi-Agent Teams

Create teams of agents that collaborate on complex tasks

Advanced Tools

Integrate MCP servers, code execution, and custom tools

Example Gallery

Browse complete examples and use cases

Common Next Tasks

Use a Different Model Provider

Switch to Azure OpenAI:
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="your-deployment",
    model="gpt-4o",
    api_version="2024-02-15-preview",
    azure_endpoint="https://your-resource.openai.azure.com",
)
Or use Anthropic Claude (requires pip install "autogen-ext[anthropic]"):
from autogen_ext.models.anthropic import AnthropicChatCompletionClient

model_client = AnthropicChatCompletionClient(
    model="claude-3-5-sonnet-20241022",
    api_key="your-anthropic-key",
)

Add Multiple Tools

Agents can use multiple tools:
async def calculate(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Warning: eval() is unsafe on untrusted (model-supplied) input;
    # restrict or sandbox it before using this in production.
    return eval(expression)

async def search(query: str) -> str:
    """Search the web for information."""
    # Call your search API
    return f"Search results for: {query}"

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[get_weather, calculate, search],  # Multiple tools
)
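Because eval() on model-supplied input is a security risk, a safer calculate tool would parse the expression itself. Here is a minimal sketch restricted to basic arithmetic using the standard-library ast module (illustrative only, not part of AutoGen):

```python
import ast
import operator

# Whitelist of operators the restricted evaluator will accept
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_calculate("2 + 3 * 4"))  # 14
```

Anything outside the whitelisted node types (names, calls, attribute access) raises ValueError, so expressions like `__import__('os')` are rejected instead of executed.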

Create a Multi-Agent Team

Combine multiple agents:
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination

# Create specialized agents
writer = AssistantAgent("writer", model_client, system_message="You write content.")
reviewer = AssistantAgent("reviewer", model_client, system_message="You review content.")

# Create a team
team = RoundRobinGroupChat(
    participants=[writer, reviewer],
    termination_condition=TextMentionTermination("APPROVED"),
)

# Run the team
result = await team.run(task="Write a short poem about AI")
Teams are covered in detail in the AgentChat documentation.

Troubleshooting

Import errors: make sure you installed both packages:
pip install -U "autogen-agentchat" "autogen-ext[openai]"
If using a virtual environment, ensure it’s activated.
Authentication errors: verify your API key is set correctly:
echo $OPENAI_API_KEY  # Linux/Mac
echo %OPENAI_API_KEY%  # Windows CMD
Or pass it explicitly:
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    api_key="sk-...",
)
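A small sanity check before constructing the client can turn a confusing authentication failure into a clear error message. check_api_key here is a hypothetical helper, not part of AutoGen:

```python
import os

def check_api_key(env: dict, name: str = "OPENAI_API_KEY") -> str:
    """Return the API key from an environment mapping, or fail loudly.

    Hypothetical helper for illustration; call it with os.environ.
    """
    key = env.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it or pass api_key= explicitly."
        )
    return key

# Usage: check_api_key(os.environ) before building the model client
print(check_api_key({"OPENAI_API_KEY": "sk-test"}))  # sk-test
```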
Async errors: AutoGen uses async/await throughout. If running in a Jupyter notebook, you can use await directly:
result = await agent.run(task="...")
In a Python script, wrap in asyncio.run():
asyncio.run(main())
If you hit rate limits:
  • Use a smaller/cheaper model like gpt-4o-mini
  • Add delays between requests
  • Check your OpenAI usage limits
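One way to add delays is a retry-with-backoff wrapper around the run call. This is a generic asyncio sketch; run_with_backoff is a hypothetical helper, and in practice you should narrow the except clause to your provider's rate-limit exception:

```python
import asyncio
import random

async def run_with_backoff(coro_factory, max_retries: int = 5):
    """Retry an async call with exponential backoff plus jitter.

    coro_factory is a zero-argument callable returning a fresh coroutine,
    e.g. lambda: agent.run(task="...").
    """
    for attempt in range(max_retries):
        try:
            return await coro_factory()
        except Exception:  # narrow this to your provider's rate-limit error
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with a little jitter to avoid thundering herd
            delay = 0.1 * (2 ** attempt) + random.random() * 0.1
            await asyncio.sleep(delay)

# Demo with a flaky stand-in for agent.run that fails twice, then succeeds:
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(asyncio.run(run_with_backoff(flaky)))  # ok
```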

Getting Help

If you run into issues, search existing issues and discussions in the AutoGen GitHub repository (github.com/microsoft/autogen) or open a new one.
Ready to learn more? Continue to Core Concepts to understand how AutoGen works under the hood.
