This guide helps AutoGen developers move to the Microsoft Agent Framework (AF) with minimal guesswork. The frameworks share similar multi-agent primitives but differ in default behaviors, tool handling, and orchestration patterns.
All migration samples are available in the autogen-migration samples directory.

Key Differences at a Glance

Concept | AutoGen | Agent Framework
--- | --- | ---
Agent Creation | AssistantAgent(model_client=...) | client.as_agent(...)
Tool Definition | FunctionTool wrapper classes | @tool decorator with auto-schema
Default Behavior | Single-turn (max_tool_iterations=1) | Multi-turn (continues automatically)
Session State | Implicit via conversation context | Explicit via agent.create_session()
Round Robin | RoundRobinGroupChat | SequentialBuilder
Group Chat | SelectorGroupChat | GroupChatBuilder
Swarm Pattern | Swarm with handoffs | HandoffBuilder
Magentic One | MagenticOneGroupChat | MagenticBuilder

API Mapping Reference

Basic Agent Creation

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4.1-mini")
agent = AssistantAgent(
    name="assistant",
    model_client=client,
    system_message="You are a helpful assistant. Answer in one sentence.",
)

# Run agent (AutoGen maintains state internally)
result = await agent.run(task="What is the capital of France?")
print(result.messages[-1].to_text())
Critical Default Behavior Difference: AutoGen’s AssistantAgent is single-turn by default (max_tool_iterations=1), while AF’s Agent is multi-turn and continues tool execution automatically until completion.
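This difference can be made concrete with a toy loop. The sketch below is illustrative Python only, not framework code; it models how many tool rounds each default allows before the agent stops:

```python
# Toy model, NOT framework code: shows how the two defaults diverge.
def run_tool_loop(pending_tool_calls: int, max_tool_iterations=None) -> int:
    """Count tool iterations executed before the agent stops.

    max_tool_iterations=1 models AutoGen's single-turn default;
    None models AF's continue-until-done default.
    """
    executed = 0
    while executed < pending_tool_calls:
        if max_tool_iterations is not None and executed >= max_tool_iterations:
            break
        executed += 1
    return executed
```

With three pending tool calls, the AutoGen-style default executes one round and returns control to the caller, while the AF-style default runs all three: a migrated agent may suddenly do more work per run call than you expect.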

Tools and Functions

from autogen_core.tools import FunctionTool

def get_weather(location: str) -> str:
    """Get the weather for a location.

    Args:
        location: The city name or location.

    Returns:
        A weather description.
    """
    return f"The weather in {location} is sunny and 72°F."

# Wrap function in FunctionTool
weather_tool = FunctionTool(
    func=get_weather,
    description="Get weather information for a location",
)

agent = AssistantAgent(
    name="assistant",
    model_client=client,
    tools=[weather_tool],
    system_message="Use available tools to answer questions.",
)
AutoGen uses FunctionTool wrappers; AF uses @tool decorators with automatic schema inference from Python type hints and docstrings.
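To see what "automatic schema inference" buys you, here is a simplified sketch of the idea: deriving a tool schema from type hints and the docstring. This is conceptual Python, not AF's actual @tool implementation, which is more thorough:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python hints to JSON-schema-style types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def infer_schema(func):
    """Derive a tool schema from a function's hints and docstring (sketch)."""
    hints = get_type_hints(func)
    params = {
        name: {"type": PY_TO_JSON.get(hint, "string")}
        for name, hint in hints.items()
        if name != "return"
    }
    return {
        "name": func.__name__,
        "description": (inspect.getdoc(func) or "").split("\n")[0],
        "parameters": params,
    }

def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"The weather in {location} is sunny and 72°F."

schema = infer_schema(get_weather)
```

Because the schema comes from the function itself, the decorator approach removes the duplicated description string that FunctionTool requires.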

Thread Management

# AutoGen maintains conversation state automatically
result1 = await agent.run(task="What's the weather in Seattle?")
result2 = await agent.run(task="How about Portland?")

# History persists across calls
print(f"Total messages: {len(result2.messages)}")
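The migration cost here is structural: state moves out of the agent and into a session object you pass around. The toy comparison below is not framework code (the class and method names are illustrative, chosen to mirror the create_session() naming above):

```python
# Toy comparison, NOT framework code: implicit vs. explicit conversation state.
class ImplicitAgent:
    """AutoGen-style: history lives inside the agent."""
    def __init__(self):
        self._history = []

    def run(self, task: str) -> list:
        self._history.append(task)
        return list(self._history)

class Session:
    """AF-style: history lives in a session the caller owns."""
    def __init__(self):
        self.messages = []

class ExplicitAgent:
    def create_session(self) -> Session:
        return Session()

    def run(self, task: str, session: Session) -> list:
        session.messages.append(task)
        return list(session.messages)
```

The explicit form is more typing, but it makes multi-conversation scenarios trivial: pass a fresh session and you get a clean slate without constructing a new agent.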

Streaming Responses

# AutoGen streaming via run_stream
async for message in agent.run_stream(task="Tell me about AI agents"):
    if hasattr(message, "content"):
        print(message.content, end="")

Orchestration Pattern Migrations

Round Robin Group Chat → Sequential Builder

AutoGen’s RoundRobinGroupChat executes agents in order. AF provides SequentialBuilder for the same pattern.
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination

researcher = AssistantAgent(
    name="researcher",
    model_client=client,
    system_message="Provide facts and data about the topic.",
)

writer = AssistantAgent(
    name="writer",
    model_client=client,
    system_message="Turn research into engaging content.",
)

editor = AssistantAgent(
    name="editor",
    model_client=client,
    system_message="Review and finalize. End with APPROVED if satisfied.",
)

team = RoundRobinGroupChat(
    participants=[researcher, writer, editor],
    termination_condition=TextMentionTermination("APPROVED"),
)

result = await team.run(task="Create a summary about electric vehicles")
For AutoGen patterns with multiple rounds, use AF’s WorkflowBuilder with cyclic edges:
from agent_framework import WorkflowBuilder, executor, AgentExecutorRequest

@executor
async def check_approval(response, context):
    last_message = response.full_conversation[-1]
    if "APPROVED" in last_message.text:
        await context.yield_output("Content approved.")
    else:
        # Loop back to researcher for another round
        await context.send_message(
            AgentExecutorRequest(
                messages=response.full_conversation,
                should_respond=True
            )
        )

workflow = (
    WorkflowBuilder(start_executor=researcher)
    .add_edge(researcher, writer)
    .add_edge(writer, editor)
    .add_edge(editor, check_approval)
    .add_edge(check_approval, researcher)  # Cycle for multiple rounds
    .build()
)
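The control flow of that cyclic workflow reduces to a simple loop: run the pipeline, check the editor's output, and either finish or go around again. A pure-Python simulation of just that loop (no framework calls; verdicts stand in for editor outputs):

```python
# Pure-Python simulation of the cyclic review loop: researcher -> writer ->
# editor, repeating until the editor's message contains "APPROVED".
def run_review_loop(editor_verdicts: list[str], max_rounds: int = 10) -> int:
    """Return how many full rounds ran before approval (or max_rounds)."""
    for round_no, verdict in enumerate(editor_verdicts[:max_rounds], start=1):
        # In the real workflow, the researcher and writer executors run here.
        if "APPROVED" in verdict:
            return round_no
    return max_rounds
```

The check_approval executor above plays exactly this role: its send_message branch is the "go around again" arm, and yield_output is the exit.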

Selector Group Chat → Group Chat Builder

AutoGen’s SelectorGroupChat uses an LLM to choose the next speaker. AF’s GroupChatBuilder provides the same capability.
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.conditions import MaxMessageTermination

researcher = AssistantAgent(name="researcher", ...)
analyst = AssistantAgent(name="analyst", ...)
writer = AssistantAgent(name="writer", ...)

team = SelectorGroupChat(
    participants=[researcher, analyst, writer],
    model_client=client,  # LLM selects next speaker
    termination_condition=MaxMessageTermination(10),
)

result = await team.run(task="Analyze market trends")
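Conceptually, a selector is just a function from conversation state to the next speaker's name. Both frameworks delegate that choice to an LLM; the keyword heuristic below is only an illustrative stand-in to make the contract concrete (agent names match the example above):

```python
# Illustrative stand-in for LLM speaker selection, NOT framework code.
# Real selector group chats ask a model to choose; a heuristic plays that
# role here to show the selector's input/output contract.
AGENTS = ["researcher", "analyst", "writer"]

def select_next_speaker(last_message: str) -> str:
    """Map the latest message to the next speaker's name."""
    text = last_message.lower()
    if "data" in text or "find" in text:
        return "researcher"
    if "interpret" in text or "trend" in text:
        return "analyst"
    return "writer"
```

Whatever chooses the speaker, the contract is the same: given the conversation so far, return one participant's name, and the framework routes the next turn accordingly.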

Swarm Pattern → Handoff Builder

AutoGen’s Swarm enables agent handoffs. AF’s HandoffBuilder provides structured handoff coordination.
from autogen_agentchat.teams import Swarm
from autogen_agentchat.conditions import HandoffTermination, TextMentionTermination

triage_agent = AssistantAgent(
    name="triage",
    model_client=client,
    system_message=(
        "Analyze requests and hand off to specialists. "
        "Use TERMINATE when resolved."
    ),
    handoffs=["billing_agent", "technical_support", "user"],
)

billing_agent = AssistantAgent(
    name="billing_agent",
    model_client=client,
    system_message="Handle payment questions. Handoff to triage when done.",
    handoffs=["triage", "user"],
)

tech_support = AssistantAgent(
    name="technical_support",
    model_client=client,
    system_message="Handle technical issues. Handoff to triage when done.",
    handoffs=["triage", "user"],
)

termination = HandoffTermination(target="user") | TextMentionTermination("TERMINATE")
team = Swarm(
    participants=[triage_agent, billing_agent, tech_support],
    termination_condition=termination,
)

result = await team.run(task="I was charged twice")
AF’s HandoffBuilder supports human-in-the-loop patterns via HandoffAgentUserRequest for interactive workflows.
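Under either framework, the triage step is a routing decision constrained to the agent's declared handoff targets. The toy router below is not framework code; the LLM makes this choice in practice, and a keyword heuristic stands in for it here (targets match the triage agent's handoffs list above):

```python
# Toy router, NOT framework code: models the triage handoff decision that
# Swarm / HandoffBuilder delegate to the LLM.
HANDOFF_TARGETS = ["billing_agent", "technical_support", "user"]

def triage(request: str) -> str:
    """Pick a handoff target from the triage agent's allowed list."""
    text = request.lower()
    if any(word in text for word in ("charge", "refund", "invoice")):
        target = "billing_agent"
    elif any(word in text for word in ("error", "crash", "bug")):
        target = "technical_support"
    else:
        target = "user"  # escalate back to the human
    assert target in HANDOFF_TARGETS
    return target
```

The key invariant to preserve when migrating is the allow-list: an agent may only hand off to targets it has declared, which is what keeps swarm-style flows debuggable.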

Magentic One → Magentic Builder

AutoGen’s MagenticOneGroupChat provides orchestrated multi-agent workflows. AF has a direct equivalent.
from autogen_agentchat.teams import MagenticOneGroupChat

orchestrator = AssistantAgent(name="orchestrator", ...)
web_surfer = AssistantAgent(name="web_surfer", ...)
file_handler = AssistantAgent(name="file_handler", ...)
coder = AssistantAgent(name="coder", ...)

team = MagenticOneGroupChat(
    participants=[web_surfer, file_handler, coder],
    model_client=client,
    max_turns=30,
)

result = await team.run(task="Research and summarize AI safety papers")

Agent as Tool Pattern

Both frameworks support using agents as tools for hierarchical agent patterns.
from autogen_core.tools import FunctionTool

async def research_assistant(query: str) -> str:
    """Research assistant that performs web searches."""
    agent = AssistantAgent(
        name="research_assistant",
        model_client=client,
        tools=[web_search_tool],
    )
    result = await agent.run(task=query)
    return result.messages[-1].content

# Wrap agent as tool
research_tool = FunctionTool(research_assistant, description="Research assistant")

# Use in parent agent
orchestrator = AssistantAgent(
    name="orchestrator",
    model_client=client,
    tools=[research_tool],
)

Breaking Changes and Deprecations

Important Breaking Changes
  1. Default Behavior: AutoGen agents are single-turn by default; AF agents are multi-turn. Adjust max_tool_iterations expectations.
  2. Thread Management: AutoGen maintains conversation state implicitly; AF requires explicit create_session().
  3. Tool Wrappers: AutoGen’s FunctionTool must be replaced with AF’s @tool decorator.
  4. Message Types: AutoGen uses ChatMessage/TextMessage; AF uses Message with .text, .role, and .author_name.
  5. Termination Conditions: AutoGen’s TerminationCondition classes must be converted to AF’s lambda-based or custom termination logic.
  6. Handoff Messages: AutoGen’s HandoffMessage must be replaced with AF’s HandoffAgentUserRequest for human-in-the-loop patterns.
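Breaking change 5 amounts to replacing condition objects with plain predicates over the message list. A hedged sketch of one conversion follows; the Message stand-in mirrors the .text/.role fields named in item 4, and exact AF signatures may differ:

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Minimal stand-in for AF's Message (.text, .role), for illustration."""
    text: str
    role: str = "assistant"

# AutoGen: TextMentionTermination("APPROVED") | MaxMessageTermination(10)
# AF-style predicate equivalent:
def should_terminate(messages: list[Message]) -> bool:
    return len(messages) >= 10 or "APPROVED" in messages[-1].text
```

Composed conditions (the | operator in AutoGen) become ordinary boolean logic inside one predicate, which is easier to unit-test in isolation.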

Migration Checklist

  1. Update agent construction: Replace AssistantAgent(model_client=...) with client.as_agent(...)
  2. Convert tools: Transform FunctionTool wrappers into @tool decorated functions
  3. Add explicit sessions: Create sessions with agent.create_session() for stateful conversations
  4. Adjust default behavior expectations: Account for AF's multi-turn default (vs. AutoGen's single-turn default)
  5. Update orchestration patterns:
     • RoundRobinGroupChat → SequentialBuilder
     • SelectorGroupChat → GroupChatBuilder
     • Swarm → HandoffBuilder
     • MagenticOneGroupChat → MagenticBuilder
  6. Convert termination conditions: Replace AutoGen termination classes with AF's lambda-based conditions
  7. Test streaming behavior: Update streaming logic to handle AF's AgentResponseUpdate events

Common Migration Patterns

Converting System Messages

agent = AssistantAgent(
    name="assistant",
    model_client=client,
    system_message="You are a helpful assistant.",
)

Converting Descriptions

agent = AssistantAgent(
    name="researcher",
    description="Collects background information",
    system_message="Gather concise facts.",
    model_client=client,
)

Converting Streaming

from autogen_agentchat.ui import Console

result = await Console(team.run_stream(task="Summarize the report"))

Additional Resources

Migration Samples

Runnable side-by-side code comparisons for all patterns

Agent Framework Docs

Complete documentation for the Agent Framework

AutoGen Docs

Official AutoGen documentation

Community Support

Get help from the community on Discord

Migration Tips

Run both implementations side-by-side during migration. Each sample script runs AutoGen first, then AF, so you can compare outputs in a single execution.
  • Start with single-agent patterns before multi-agent orchestrations
  • Test default behavior early — AF’s multi-turn default can produce different results than AutoGen’s single-turn default
  • Use explicit sessions liberally during development; optimize session management after achieving parity
  • For group chats, AF’s GroupChatBuilder simplifies speaker selection — no need to manually track turn order
  • Handoff patterns are more structured in AF; use HandoffBuilder for clearer coordination logic
  • Tool approval is explicit in AF via approval_mode; prefer "always_require" in production so tool actions are never executed without review

Installation Commands

pip install autogen-agentchat "autogen-ext[openai]"
pip install --pre agent-framework
The --pre flag is needed during the Agent Framework preview period. Both SDKs can coexist in the same environment for gradual migration.
