Overview

Qwen-Agent provides a flexible Agent base class that you can extend to create custom agents with specialized behaviors. All agents inherit from the Agent class and implement their own _run method to define their workflow.

Understanding the Agent Base Class

The Agent class (qwen_agent/agent.py:31) provides the foundation for all agents:
from abc import ABC

class Agent(ABC):
    def __init__(self,
                 function_list=None,
                 llm=None,
                 system_message=None,
                 name=None,
                 description=None,
                 **kwargs):
        # Initialization

Key Components

  • function_list: List of tools the agent can use
  • llm: Language model configuration or instance
  • system_message: System prompt for the agent
  • name: Agent identifier (required for multi-agent systems)
  • description: Agent description (used by routers and orchestrators)
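The relationship between these components can be sketched with a minimal stand-in. This is plain Python with no qwen_agent dependency; `MiniAgent` and `EchoAgent` are illustrative names that mirror the real class's constructor and the `run`/`_run` split, not the library's actual implementation:

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterator, List

Message = Dict[str, str]  # stand-in for qwen_agent.llm.schema.Message

class MiniAgent(ABC):
    """Illustrative skeleton mirroring Agent's constructor parameters."""

    def __init__(self, function_list=None, llm=None, system_message=None,
                 name=None, description=None):
        self.function_list = function_list or []
        self.llm = llm
        self.system_message = system_message
        self.name = name
        self.description = description

    def run(self, messages: List[Message]) -> Iterator[List[Message]]:
        # The public entry point prepends the system message,
        # then delegates to the subclass-defined workflow.
        if self.system_message:
            messages = [{'role': 'system', 'content': self.system_message}] + messages
        return self._run(messages)

    @abstractmethod
    def _run(self, messages: List[Message]) -> Iterator[List[Message]]:
        ...

class EchoAgent(MiniAgent):
    def _run(self, messages: List[Message]) -> Iterator[List[Message]]:
        # Echo the last user message back as a single streamed batch.
        yield [{'role': 'assistant', 'content': messages[-1]['content']}]

agent = EchoAgent(system_message='You are helpful.', name='Echo')
for batch in agent.run([{'role': 'user', 'content': 'Hello!'}]):
    print(batch)
```

The key takeaway: `run()` handles the plumbing, while subclasses only implement `_run()`.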

Creating a Basic Agent

Step 1: Import Required Classes
from qwen_agent import Agent
from qwen_agent.llm.schema import Message
from typing import Iterator, List
Step 2: Define Your Agent Class

Inherit from Agent and implement the _run method:
class MyCustomAgent(Agent):
    
    def _run(self, messages: List[Message], lang: str = 'en', **kwargs) -> Iterator[List[Message]]:
        """
        The core workflow of your agent.
        
        Args:
            messages: Conversation history
            lang: Language ('en' or 'zh')
            **kwargs: Additional parameters
            
        Yields:
            List[Message]: Agent responses
        """
        # Your custom logic here
        return self._call_llm(messages)
Step 3: Initialize and Use
# Configure the agent
llm_cfg = {'model': 'qwen-max'}
agent = MyCustomAgent(
    llm=llm_cfg,
    system_message="You are a helpful assistant.",
    name="MyAgent"
)

# Run the agent
messages = [{'role': 'user', 'content': 'Hello!'}]
for response in agent.run(messages):
    print(response)

Example: Creating a Specialized Agent

Here’s a complete example of a custom agent that adds domain-specific behavior:
from qwen_agent import Agent
from qwen_agent.llm.schema import Message
from typing import Iterator, List
import datetime

class DataAnalystAgent(Agent):
    """An agent specialized for data analysis tasks."""
    
    def __init__(self, llm, name="DataAnalyst", **kwargs):
        system_message = (
            "You are an expert data analyst. "
            "Always provide step-by-step analysis and "
            "explain your reasoning clearly."
        )
        super().__init__(
            llm=llm,
            system_message=system_message,
            name=name,
            description="Expert in data analysis and statistics",
            function_list=['code_interpreter'],
            **kwargs
        )
    
    def _run(self, messages: List[Message], lang: str = 'en', **kwargs) -> Iterator[List[Message]]:
        # Add timestamp to context
        enhanced_messages = messages.copy()
        enhanced_messages.insert(0, Message(
            role='system',
            content=f"Current date: {datetime.datetime.now().strftime('%Y-%m-%d')}"
        ))
        
        # Call LLM with enhanced context
        return self._call_llm(enhanced_messages)

# Usage
llm_cfg = {'model': 'qwen-max'}
agent = DataAnalystAgent(llm=llm_cfg)

messages = [
    {'role': 'user', 'content': 'Analyze this data: [1, 2, 3, 4, 5]'}
]

for response in agent.run(messages):
    print(response)

Using Built-in Agents

Qwen-Agent provides several pre-built agents you can use directly:
from qwen_agent.agents import BasicAgent

# Simplest agent - just LLM with no tools
agent = BasicAgent(llm={'model': 'qwen-max'})

Agent Methods Reference

Core Methods

run(messages, **kwargs)

Main method to run the agent (returns streaming response):
for response in agent.run(messages):
    print(response)

run_nonstream(messages, **kwargs)

Get complete response without streaming:
response = agent.run_nonstream(messages)
print(response)
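Conceptually, the non-streaming call can be thought of as draining the stream and keeping only the final batch, since each yield in Qwen-Agent's streaming protocol supersedes the previous partial output. A sketch of that idea (plain Python, not the library's actual implementation):

```python
from typing import Iterator, List

def last_batch(stream: Iterator[List[dict]]) -> List[dict]:
    """Drain a streaming response and keep only the final batch,
    which contains the complete reply."""
    final: List[dict] = []
    for batch in stream:
        final = batch  # each yield supersedes the previous partial batch
    return final

def fake_stream() -> Iterator[List[dict]]:
    # Simulates incremental output: each batch is the full reply so far.
    yield [{'role': 'assistant', 'content': 'Hel'}]
    yield [{'role': 'assistant', 'content': 'Hello!'}]

print(last_batch(fake_stream()))
```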

_call_llm(messages, functions=None, stream=True)

Call the LLM from within your agent (qwen_agent/agent.py:150):
def _run(self, messages, **kwargs):
    return self._call_llm(messages, stream=True)

_call_tool(tool_name, tool_args, **kwargs)

Execute a tool from within your agent (qwen_agent/agent.py:178):
result = self._call_tool('code_interpreter', '{"code": "print(1+1)"}')
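Note that the tool arguments arrive as a JSON-encoded string, not a dict. The dispatch logic can be sketched as follows; `TOOLS` and `call_tool` are hypothetical stand-ins for the agent's internal `function_map` and `_call_tool`:

```python
import json
from typing import Callable, Dict

# Hypothetical registry mapping tool names to callables; the real Agent
# keeps a similar mapping in self.function_map.
TOOLS: Dict[str, Callable[..., str]] = {
    'echo': lambda text='': text,
}

def call_tool(tool_name: str, tool_args: str) -> str:
    """Parse the JSON-encoded arguments and dispatch to the named tool."""
    if tool_name not in TOOLS:
        raise KeyError(f'Unknown tool: {tool_name}')
    kwargs = json.loads(tool_args)  # tool args arrive as a JSON string
    return TOOLS[tool_name](**kwargs)

print(call_tool('echo', '{"text": "hi"}'))
```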

Best Practices

Agent Design Tips
  • Keep the _run method focused on workflow logic
  • Use system_message for agent personality and instructions
  • Set meaningful name and description for multi-agent scenarios
  • Leverage _call_llm and _call_tool for common operations
Common Pitfalls
  • Don’t call run() directly from _run() - this creates infinite recursion
  • Always yield/return an Iterator of Message lists from _run()
  • The messages parameter is already preprocessed - don’t modify the original
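The third pitfall can be made concrete: copy the history before prepending context, so the caller's list is untouched. A plain-dict sketch (the date string is illustrative):

```python
import copy
from typing import List

def enhance(messages: List[dict]) -> List[dict]:
    """Prepend context without mutating the caller's list."""
    enhanced = copy.deepcopy(messages)  # defensive copy
    enhanced.insert(0, {'role': 'system', 'content': 'Current date: 2024-01-01'})
    return enhanced

original = [{'role': 'user', 'content': 'Hi'}]
result = enhance(original)
assert len(original) == 1  # the original history is untouched
```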

Advanced: Agent with Custom Tool Handling

from qwen_agent import Agent
from qwen_agent.llm.schema import Message
from typing import Iterator, List

class SmartAgent(Agent):
    """Agent with custom tool execution logic."""
    
    def _run(self, messages: List[Message], lang: str = 'en', **kwargs) -> Iterator[List[Message]]:
        # Work on a copy so the caller's history is never mutated (see Common Pitfalls)
        messages = list(messages)
        # Get the LLM response, advertising the agent's registered tools
        for response in self._call_llm(
            messages,
            functions=[tool.function for tool in self.function_map.values()]
        ):
            for msg in response:
                # Detect if tool call is needed
                use_tool, tool_name, tool_args, text = self._detect_tool(msg)
                
                if use_tool:
                    # Execute the tool
                    tool_result = self._call_tool(tool_name, tool_args)
                    
                    # Add tool result to messages
                    messages.append(msg)
                    messages.append(Message(
                        role='function',
                        content=tool_result,
                        name=tool_name
                    ))
                    
                    # Continue conversation with tool result
                    yield from self._run(messages, lang=lang, **kwargs)
                    return
            
            yield response
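The recursive pattern above works, but an explicit loop bounds the number of tool turns and makes the control flow easier to follow. A sketch of the same logic; `llm` and `tools` are hypothetical callables standing in for `_call_llm` and `_call_tool`, and the message shapes are simplified dicts:

```python
import copy
from typing import Callable, Dict, Iterator, List

def tool_loop(messages: List[dict],
              llm: Callable[[List[dict]], dict],
              tools: Dict[str, Callable[[str], str]],
              max_turns: int = 5) -> Iterator[List[dict]]:
    """Iterative variant of the recursive pattern: call the LLM, execute
    any requested tool, and repeat until a plain text reply arrives or
    the turn budget runs out."""
    history = copy.deepcopy(messages)  # never mutate the caller's list
    for _ in range(max_turns):  # explicit bound instead of unbounded recursion
        reply = llm(history)
        yield [reply]
        tool_call = reply.get('function_call')
        if not tool_call:
            return  # plain text reply: we are done
        result = tools[tool_call['name']](tool_call['arguments'])
        history.append(reply)
        history.append({'role': 'function',
                        'name': tool_call['name'],
                        'content': result})
```

Bounding the loop with `max_turns` also guards against a model that keeps requesting tools indefinitely.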
