
Overview

ReActChat is an agent that uses the ReAct (Reasoning and Acting) format for tool calling. Unlike the standard function calling format, ReAct makes the agent’s reasoning process explicit through “Thought”, “Action”, and “Observation” steps.
The ReAct format is particularly useful when you need to inspect the agent's reasoning process, or when working with models that benefit from explicit reasoning traces.

Key Features

  • Explicit Reasoning: see the agent's thought process before each action
  • Step-by-Step Execution: clear action-observation loops for debugging
  • Tool Integration: use any tool with the ReAct format
  • Interpretable: easy to understand and debug agent behavior

ReAct Format

The ReAct format follows this structure:
Question: [User's question]
Thought: [Agent's reasoning about what to do]
Action: [Tool name to use]
Action Input: [Tool parameters]
Observation: [Tool result]
Thought: [Further reasoning based on observation]
... (repeat as needed)
Thought: I now know the final answer
Final Answer: [Response to user]
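
Because the trace is plain text in this fixed structure, it can be post-processed with simple pattern matching. A minimal sketch (the regex and the returned field names are illustrative, not part of qwen_agent's API):

```python
import re

def parse_react_trace(trace: str) -> dict:
    """Split a ReAct-formatted trace into its labeled steps."""
    # Capture each labeled line; the labels follow the format shown above
    steps = re.findall(
        r'^(Thought|Action|Action Input|Observation|Final Answer): (.*)$',
        trace, flags=re.MULTILINE)
    parsed = {'thoughts': [], 'actions': [], 'final_answer': None}
    for label, text in steps:
        if label == 'Thought':
            parsed['thoughts'].append(text)
        elif label == 'Action':
            parsed['actions'].append(text)
        elif label == 'Final Answer':
            parsed['final_answer'] = text
    return parsed
```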

Constructor

from qwen_agent.agents import ReActChat

agent = ReActChat(
    function_list=['code_interpreter', 'web_search'],
    llm={'model': 'qwen-max-latest', 'model_type': 'qwen_dashscope'},
    system_message='You are a helpful assistant.',
    name='ReActAgent',
    description='A reasoning agent',
    files=['./context.txt']
)

Parameters

function_list (list)
    List of tools available to the agent. Same format as FnCallAgent.
llm (dict | BaseChatModel)
    LLM configuration. ReAct works best with models that can follow structured formats.
system_message (str, default: DEFAULT_SYSTEM_MESSAGE)
    System message. The ReAct prompt will be appended to this.
name (str)
    Agent name for identification.
description (str)
    Agent description for multi-agent routing.
files (list)
    Initial files to load into agent memory.
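
When llm is given as a plain dict, fields beyond model and model_type are forwarded to the model backend. A sketch of a fuller configuration (the generate_cfg keys shown are common sampling options; verify them against your model backend):

```python
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_type': 'qwen_dashscope',
    'api_key': 'YOUR_API_KEY',    # or set the DASHSCOPE_API_KEY env var
    'generate_cfg': {
        'top_p': 0.8,             # sampling parameters forwarded to the model
    },
}
```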

Basic Usage

Step 1: Create a ReActChat Agent

from qwen_agent.agents import ReActChat

agent = ReActChat(
    function_list=['code_interpreter'],
    llm={
        'model': 'qwen-max-latest',
        'model_type': 'qwen_dashscope',
        'api_key': 'YOUR_API_KEY'
    }
)
Step 2: Run with Visible Reasoning

messages = [{
    'role': 'user',
    'content': 'What is 15 factorial divided by 120?'
}]

for response in agent.run(messages=messages):
    # Response includes thought traces
    print(response[-1]['content'])

Example with Reasoning Traces

from qwen_agent.agents import ReActChat
from qwen_agent.tools.base import BaseTool, register_tool
import json5

@register_tool('get_weather')
class WeatherTool(BaseTool):
    description = 'Get current weather for a location'
    parameters = [{
        'name': 'location',
        'type': 'string',
        'description': 'City name',
        'required': True
    }]
    
    def call(self, params: str, **kwargs) -> str:
        location = json5.loads(params)['location']
        return json5.dumps({
            'location': location,
            'temperature': 72,
            'condition': 'sunny'
        })

agent = ReActChat(
    function_list=['get_weather'],
    llm={'model': 'qwen-max-latest', 'model_type': 'qwen_dashscope'}
)

messages = [{
    'role': 'user',
    'content': 'What should I wear in San Francisco today?'
}]

for response in agent.run(messages=messages):
    print(response[-1]['content'])
Example Output:
Thought: I need to check the current weather in San Francisco first
Action: get_weather
Action Input: {"location": "San Francisco"}
Observation: {"location": "San Francisco", "temperature": 72, "condition": "sunny"}
Thought: I now know the final answer
Final Answer: Since it's sunny and 72°F in San Francisco today, I'd recommend wearing light layers - perhaps a t-shirt with a light jacket that you can take off if it gets warmer.
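
Since the response arrives as one plain-text trace, an application that only wants to show users the conclusion can strip the reasoning steps. A minimal sketch (plain string handling, not a qwen_agent API):

```python
def extract_final_answer(content: str) -> str:
    """Return only the text after 'Final Answer:', or the full content if absent."""
    marker = 'Final Answer:'
    if marker in content:
        return content.split(marker, 1)[1].strip()
    return content
```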

Multi-Step Reasoning

ReAct excels at multi-step problems:
from qwen_agent.agents import ReActChat

agent = ReActChat(
    function_list=['code_interpreter', 'web_search'],
    llm={'model': 'qwen-max-latest', 'model_type': 'qwen_dashscope'}
)

messages = [{
    'role': 'user',
    'content': 'Find the population of Tokyo and calculate what 10% of that would be'
}]

for response in agent.run(messages=messages):
    print(response[-1]['content'])

# Output will show:
# Thought: I need to search for Tokyo's population
# Action: web_search
# Action Input: {"query": "Tokyo population"}
# Observation: [search results]
# Thought: Now I need to calculate 10% of that number
# Action: code_interpreter
# Action Input: {"code": "population = 13960000\nresult = population * 0.1\nprint(result)"}
# Observation: 1396000
# Thought: I now know the final answer
# Final Answer: Tokyo's population is approximately 13.96 million, so 10% of that would be about 1.396 million people.

With File Context

from qwen_agent.agents import ReActChat

agent = ReActChat(
    function_list=['code_interpreter'],
    llm={'model': 'qwen-max-latest', 'model_type': 'qwen_dashscope'},
    files=['./sales_data.csv']
)

messages = [{
    'role': 'user',
    'content': 'Analyze the sales data and tell me which product performed best'
}]

for response in agent.run(messages=messages):
    print(response[-1]['content'])

Debugging with ReAct

The explicit reasoning makes debugging much easier:
from qwen_agent.agents import ReActChat

agent = ReActChat(
    function_list=['calculator', 'web_search'],
    llm={'model': 'qwen-max-latest', 'model_type': 'qwen_dashscope'}
)

messages = [{'role': 'user', 'content': 'Complex query...'}]

for response in agent.run(messages=messages):
    content = response[-1]['content']
    
    # Extract reasoning steps
    if 'Thought:' in content:
        thoughts = [line for line in content.split('\n') if line.startswith('Thought:')]
        print('Reasoning steps:', thoughts)
    
    # Check which tools were used
    if 'Action:' in content:
        actions = [line for line in content.split('\n') if line.startswith('Action:')]
        print('Tools used:', actions)

Customizing the ReAct Prompt

You can customize the ReAct format by modifying the system message:
custom_react_prompt = """Answer questions using these tools: {tool_names}

Format:
Question: [question]
Thinking: [your reasoning]
Tool: [tool to use]
Input: [tool input]
Result: [tool output]
Thinking: [more reasoning]
Answer: [final answer]

Begin!
"""

agent = ReActChat(
    function_list=['calculator'],
    llm={'model': 'qwen-max-latest', 'model_type': 'qwen_dashscope'},
    system_message=custom_react_prompt
)

Best Practices

Use ReActChat:

  • When you need to understand the agent's reasoning
  • For debugging complex multi-step problems
  • When transparency is important
  • For educational purposes or demos
  • When working with reasoning-focused models

Prefer FnCallAgent:

  • When you need maximum speed (FnCallAgent is faster)
  • For simple single-tool tasks
  • When parallel tool execution is needed
  • In production where reasoning traces aren't needed

Tool design tips:

  • Keep tool descriptions clear and concise
  • Return structured, parseable results
  • Include relevant details in observations
  • Design tools to work sequentially
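
The "structured, parseable results" tip matters because the Observation is fed back to the model verbatim. A tool body can serialize its result as JSON rather than free text; a sketch as a plain function, minus the BaseTool boilerplate (get_stock_price and its data are hypothetical):

```python
import json

def get_stock_price(params: str) -> str:
    """Illustrative tool body: parse JSON params, return a JSON observation."""
    args = json.loads(params)
    # Hypothetical static data standing in for a real lookup
    result = {'symbol': args['symbol'], 'price': 101.25, 'currency': 'USD'}
    return json.dumps(result)  # structured, so the next Thought can reason over fields
```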

Comparison with FnCallAgent

Feature           | ReActChat          | FnCallAgent
Reasoning Traces  | ✅ Explicit        | ❌ Hidden
Format            | Text-based         | JSON function calls
Parallel Tools    | ❌ Sequential only | ✅ Supported
Speed             | Slower             | Faster
Debugging         | Easier             | Harder
Best For          | Reasoning tasks    | Production use

See Also

  • FnCallAgent: standard function calling agent
  • Assistant: RAG + function calling agent
  • Custom Tools: create tools for ReAct
  • API Reference: complete API documentation
