Overview
`Agent` is the abstract base class for all agents in Qwen-Agent. An agent receives messages and produces responses using an LLM, tools, or both. Different agent subclasses implement distinct workflows for processing messages and generating responses.
Class Signature
```python
from qwen_agent import Agent

class Agent(ABC):
    def __init__(
        self,
        function_list: Optional[List[Union[str, Dict, BaseTool]]] = None,
        llm: Optional[Union[dict, BaseChatModel]] = None,
        system_message: Optional[str] = DEFAULT_SYSTEM_MESSAGE,
        name: Optional[str] = None,
        description: Optional[str] = None,
        **kwargs
    )
```
Constructor Parameters
function_list (`List[Union[str, Dict, BaseTool]]`, optional)
List of tools available to the agent. Each entry can be:
- a tool name string (e.g., `'code_interpreter'`)
- a tool configuration dictionary (e.g., `{'name': 'code_interpreter', 'timeout': 10}`)
- a tool object (e.g., `CodeInterpreter()`)
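The three tool-spec forms above all resolve to a named tool plus optional configuration. The following standalone sketch (illustrative only, not Qwen-Agent's actual code; `BaseTool` here is a stand-in stub) shows how such specs might be normalized:

```python
from typing import Tuple, Union

class BaseTool:
    """Stand-in stub for qwen_agent.tools.BaseTool (illustrative only)."""
    name = 'code_interpreter'

def normalize_tool(spec: Union[str, dict, BaseTool]) -> Tuple[str, dict]:
    """Reduce any of the three accepted tool-spec forms to (name, config)."""
    if isinstance(spec, str):           # tool name string
        return spec, {}
    if isinstance(spec, dict):          # config dict containing a 'name' key
        cfg = dict(spec)
        return cfg.pop('name'), cfg
    return spec.name, {}                # already-instantiated tool object

print(normalize_tool('code_interpreter'))
print(normalize_tool({'name': 'code_interpreter', 'timeout': 10}))
print(normalize_tool(BaseTool()))
```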
llm (`Union[dict, BaseChatModel]`, optional)
LLM configuration dictionary or LLM model object. For a configuration dictionary, set:

```python
{
    'model': 'qwen-max',
    'api_key': 'your-api-key',
    'model_server': 'dashscope'  # or another model server
}
```
system_message (`str`, default: `DEFAULT_SYSTEM_MESSAGE`)
System message for the LLM chat that defines the agent's behavior and role.
name (`str`, optional)
Name of the agent, used in multi-agent scenarios.

description (`str`, optional)
Description of the agent's capabilities, used for multi-agent routing.
Methods
run
```python
def run(
    self,
    messages: List[Union[Dict, Message]],
    **kwargs
) -> Iterator[List[Message]]
```
Generates responses based on received messages (streaming).
messages (`List[Union[Dict, Message]]`, required)
List of conversation messages.

Returns: a generator yielding lists of response messages.
run_nonstream
```python
def run_nonstream(
    self,
    messages: List[Union[Dict, Message]],
    **kwargs
) -> List[Message]
```
Same as `run()`, but returns the complete response directly instead of streaming it.
messages (`List[Union[Dict, Message]]`, required)
List of conversation messages.

Returns: the complete list of response messages.
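The relationship between the two methods can be sketched as follows. This is an illustrative stand-alone model of the assumed behavior, not the library's implementation: the streaming method yields progressively longer responses, and the non-streaming variant simply drains the stream and returns the final yield.

```python
from typing import Iterator, List

def run(prompt: str) -> Iterator[List[str]]:
    """Toy streaming generator: each yield is the full response so far."""
    partial = ''
    for token in ['Hel', 'lo', '!']:
        partial += token
        yield [partial]

def run_nonstream(prompt: str) -> List[str]:
    """Drain the stream and keep only the final, complete response."""
    last: List[str] = []
    for last in run(prompt):
        pass
    return last

print(run_nonstream('hi'))  # ['Hello!']
```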
_run (Abstract)
```python
@abstractmethod
def _run(
    self,
    messages: List[Message],
    lang: str = 'en',
    **kwargs
) -> Iterator[List[Message]]
```
Implements the agent's specific workflow. Every agent subclass must implement this method.

messages (`List[Message]`, required)
Processed conversation messages.

lang (`str`, default: `'en'`)
Language for prompts (`'en'` or `'zh'`).
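The division of labor between the public `run()` and the abstract `_run()` can be modeled with a minimal stand-alone sketch (illustrative only; the real base class additionally converts dicts to `Message` objects, injects the system message, and more):

```python
from abc import ABC, abstractmethod
from typing import Iterator, List

class Agent(ABC):
    """Toy model of the base class: run() delegates to the subclass's _run()."""

    def run(self, messages: List[dict], **kwargs) -> Iterator[List[dict]]:
        # Real base class preprocesses messages here before delegating.
        yield from self._run(messages, **kwargs)

    @abstractmethod
    def _run(self, messages: List[dict], lang: str = 'en', **kwargs) -> Iterator[List[dict]]:
        raise NotImplementedError

class EchoAgent(Agent):
    """Trivial workflow: echo the last user message back."""

    def _run(self, messages, lang='en', **kwargs):
        yield [{'role': 'assistant', 'content': messages[-1]['content']}]

for rsp in EchoAgent().run([{'role': 'user', 'content': 'hi'}]):
    print(rsp[-1]['content'])  # hi
```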
Protected Methods
_call_llm
```python
def _call_llm(
    self,
    messages: List[Message],
    functions: Optional[List[Dict]] = None,
    stream: bool = True,
    extra_generate_cfg: Optional[dict] = None,
) -> Iterator[List[Message]]
```
Calls the LLM with the current messages and available functions.
_call_tool
```python
def _call_tool(
    self,
    tool_name: str,
    tool_args: Union[str, dict] = '{}',
    **kwargs
) -> Union[str, List[ContentItem]]
```
Executes a tool with the specified arguments.

tool_name (`str`, required)
Name of the tool to execute.

tool_args (`Union[str, dict]`, default: `'{}'`)
Tool parameters as a JSON string or dictionary.

Returns: `Union[str, List[ContentItem]]` — the tool execution result.
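Since `tool_args` may arrive either as a JSON string or as a dict, a dispatcher typically normalizes it before invoking the tool. A minimal sketch of that normalization step (an assumption about the behavior, not Qwen-Agent's actual code):

```python
import json
from typing import Union

def parse_tool_args(tool_args: Union[str, dict] = '{}') -> dict:
    """Normalize tool arguments: accept a JSON string or a ready dict."""
    if isinstance(tool_args, str):
        return json.loads(tool_args)  # JSON string -> dict
    return tool_args                  # already a dict

print(parse_tool_args('{"timeout": 10}'))  # {'timeout': 10}
print(parse_tool_args({'timeout': 10}))    # {'timeout': 10}
print(parse_tool_args())                   # {}
```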
Usage Example
```python
from qwen_agent import Agent
from qwen_agent.llm.schema import Message

# Create a basic agent subclass
class BasicAgent(Agent):
    def _run(self, messages, lang='en', **kwargs):
        return self._call_llm(messages)

# Initialize the agent
agent = BasicAgent(
    llm={'model': 'qwen-max', 'api_key': 'your-api-key'},
    system_message='You are a helpful assistant.'
)

# Use the agent
messages = [Message(role='user', content='Hello!')]
for response in agent.run(messages):
    print(response[-1].content)
```
See Also