Qwen-Agent provides several ready-to-use assistant examples that demonstrate different capabilities and use cases. These examples serve as templates for building your own assistants.

Basic RAG Assistant

The simplest assistant with Retrieval-Augmented Generation capabilities.

Code Example

assistant_rag.py
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI

def test():
    bot = Assistant(llm={'model': 'qwen-plus-latest'})
    messages = [{
        'role': 'user',
        'content': [
            {'text': '介绍图一'},  # "Describe Figure 1"
            {'file': 'https://arxiv.org/pdf/1706.03762.pdf'}
        ]
    }]
    for rsp in bot.run(messages):
        print(rsp)

def app_gui():
    # Define the agent
    bot = Assistant(
        llm={'model': 'qwen-plus-latest'},
        name='Assistant',
        # "Answers via RAG retrieval; supported file types: PDF/Word/PPT/TXT/HTML."
        description='使用RAG检索并回答,支持文件类型:PDF/Word/PPT/TXT/HTML。'
    )
    chatbot_config = {
        'prompt.suggestions': [
            {'text': '介绍图一'},  # "Describe Figure 1"
            {'text': '第二章第一句话是什么?'},  # "What is the first sentence of Chapter 2?"
        ]
    }
    WebUI(bot, chatbot_config=chatbot_config).run()

if __name__ == '__main__':
    app_gui()

Features

Multi-Format Support

PDF, Word, PowerPoint, TXT, and HTML documents

Web UI

Gradio-based web chat interface

Automatic RAG

Built-in document retrieval and indexing

Streaming Responses

Real-time response generation

Usage

1. Run the Example

python examples/assistant_rag.py

2. Upload Documents

Click the file upload button in the web interface and select your document.

3. Ask Questions

Type questions about the uploaded documents. For best results, ask questions in the same language as your documents.

Weather Bot Assistant

A more complex assistant that demonstrates integrating multiple tools.

Code Example

assistant_weather_bot.py
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI
import os

ROOT_RESOURCE = os.path.join(os.path.dirname(__file__), 'resource')

def init_agent_service():
    llm_cfg = {'model': 'qwen-max'}
    system = (
        # "You act as a weather-forecast assistant with weather-lookup and
        # drawing abilities. Look up the weather for the requested region, then
        # call the drawing tool you are given to draw a picture of the city,
        # and pick a related poem from the supplied poetry document to describe
        # the weather; do not recite poems outside the document."
        '你扮演一个天气预报助手,你具有查询天气和画图能力。'
        '你需要查询相应地区的天气,然后调用给你的画图工具绘制一张城市的图,'
        '并从给定的诗词文档中选一首相关的诗词来描述天气,不要说文档以外的诗词。'
    )

    tools = ['image_gen', 'amap_weather']
    bot = Assistant(
        llm=llm_cfg,
        name='天气预报助手',  # "Weather Forecast Assistant"
        description='查询天气和画图',  # "Queries the weather and draws pictures"
        system_message=system,
        function_list=tools,
        # Poetry document referenced by the system prompt (file name assumed;
        # without it, ROOT_RESOURCE above is unused and the poem step fails)
        files=[os.path.join(ROOT_RESOURCE, 'poem.pdf')],
    )
    return bot

def app_gui():
    bot = init_agent_service()
    chatbot_config = {
        'prompt.suggestions': [
            '查询北京的天气',  # "Check the weather in Beijing"
            '画一张北京的图片',  # "Draw a picture of Beijing"
            '画一张北京的图片,然后配上一首诗',  # "Draw a picture of Beijing, then pair it with a poem"
        ]
    }
    WebUI(bot, chatbot_config=chatbot_config).run()

if __name__ == '__main__':
    app_gui()

Features

  • Weather API Integration: Real-time weather data using AMap API
  • Image Generation: Creates city illustrations
  • Knowledge Base: Uses uploaded poetry documents for creative responses
  • Multi-Tool Coordination: Combines multiple tools in a single response

How It Works

1. User Request

The user asks about the weather in a specific city.

2. Tool Selection

The assistant decides which tools to use (weather API, image generation).

3. Tool Execution

  • Queries the weather API for current conditions
  • Generates a city image using the image_gen tool
  • Searches the knowledge base for relevant poetry

4. Response Generation

Combines all the information into a creative, informative response.
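The flow above surfaces in the messages streamed by bot.run(): tool invocations appear as OpenAI-style function_call entries and tool results as function-role entries. A minimal sketch of picking the tool calls out of a response list, using a hand-written stand-in for the agent's real output (the message contents below are invented for illustration):

```python
# Hand-written stand-in for the messages yielded by bot.run(); the real agent
# emits dicts with the same 'role' / 'function_call' shape.
responses = [
    {'role': 'assistant', 'content': '',
     'function_call': {'name': 'amap_weather', 'arguments': '{"location": "北京"}'}},
    {'role': 'function', 'name': 'amap_weather', 'content': 'Sunny, 25°C'},
    {'role': 'assistant', 'content': '',
     'function_call': {'name': 'image_gen', 'arguments': '{"prompt": "Beijing skyline"}'}},
    {'role': 'function', 'name': 'image_gen', 'content': '{"image_url": "..."}'},
    {'role': 'assistant', 'content': 'It is sunny in Beijing today, 25°C...'},
]

# Collect the names of the tools the assistant invoked, in order
tool_calls = [m['function_call']['name'] for m in responses if m.get('function_call')]
print(tool_calls)  # → ['amap_weather', 'image_gen']
```

Inspecting these entries is a quick way to confirm the assistant actually coordinated both tools rather than answering from the prompt alone.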

Custom Tool Assistant

Demonstrates how to create and register custom tools.

Code Example

assistant_add_custom_tool.py
import json
import os
import urllib.parse
import json5
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI
from qwen_agent.tools.base import BaseTool, register_tool

# Resource directory used by the files parameter below
ROOT_RESOURCE = os.path.join(os.path.dirname(__file__), 'resource')

# Add a custom tool named my_image_gen
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    description = (
        'AI painting (image generation) service, input text description, '
        'and return the image URL drawn based on text information.'
    )
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True,
    }]

    def call(self, params: str, **kwargs) -> str:
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False,
        )

def init_agent_service():
    llm_cfg = {'model': 'qwen-max'}
    system = (
        "According to the user's request, you first draw a picture and then "
        "automatically run code to download the picture and select an image "
        "operation from the given document to process the image"
    )

    tools = ['my_image_gen', 'code_interpreter']
    bot = Assistant(
        llm=llm_cfg,
        name='AI painting',
        description='AI painting service',
        system_message=system,
        function_list=tools,
        files=[os.path.join(ROOT_RESOURCE, 'doc.pdf')],
    )
    return bot

def app_gui():
    bot = init_agent_service()
    chatbot_config = {
        'prompt.suggestions': [
            '画一只猫的图片',  # "Draw a picture of a cat"
            '画一只可爱的小腊肠狗',  # "Draw a cute little dachshund"
            '画一幅风景画,有湖有山有树',  # "Draw a landscape with a lake, mountains, and trees"
        ]
    }
    WebUI(bot, chatbot_config=chatbot_config).run()

if __name__ == '__main__':
    app_gui()
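The URL-building logic inside MyImageGen.call() can be exercised on its own with just the standard library (plain json stands in for json5 here, assuming well-formed arguments):

```python
import json
import urllib.parse

def build_image_url(params: str) -> str:
    # Mirrors MyImageGen.call(): parse the JSON arguments string produced by
    # the LLM, percent-encode the prompt, and return a JSON string with the URL.
    prompt = json.loads(params)['prompt']
    prompt = urllib.parse.quote(prompt)
    return json.dumps(
        {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
        ensure_ascii=False,
    )

print(build_image_url('{"prompt": "a cat in the snow"}'))
# → {"image_url": "https://image.pollinations.ai/prompt/a%20cat%20in%20the%20snow"}
```

Returning a JSON string (rather than a bare URL) keeps the tool output easy for the model to parse in the next turn.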

Creating Custom Tools

Follow this pattern to create your own tools:

1. Define Tool Class

Create a class inheriting from BaseTool:

@register_tool('tool_name')
class MyTool(BaseTool):
    description = 'What this tool does'
    parameters = [...]  # Parameter definitions

2. Implement call() Method

Add the actual tool logic:

def call(self, params: str, **kwargs) -> str:
    # Parse parameters
    args = json5.loads(params)

    # Your tool logic here
    result = do_something(args)

    # Return JSON string
    return json.dumps(result, ensure_ascii=False)

3. Register with Agent

Add the tool name to the agent's function_list:

bot = Assistant(
    function_list=['my_tool', 'other_tools'],
    ...
)

Parameter Definition

Define tool parameters following this schema:
parameters = [
    {
        'name': 'param_name',
        'type': 'string',  # or 'number', 'boolean', 'object', 'array'
        'description': 'Clear description for the LLM',
        'required': True,  # or False
        'enum': ['option1', 'option2']  # Optional: restrict to specific values
    },
    # More parameters...
]
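Whatever the schema, the arguments arrive in call() as a JSON string written by the LLM, so defensive validation inside the tool is a reasonable habit. A hypothetical sketch (the validate helper and the example schema are illustrative, not library APIs):

```python
import json

# Illustrative schema following the structure described above
parameters = [
    {'name': 'city', 'type': 'string', 'description': 'City name', 'required': True},
    {'name': 'unit', 'type': 'string', 'description': 'Temperature unit',
     'required': False, 'enum': ['celsius', 'fahrenheit']},
]

def validate(params_json: str, schema: list) -> dict:
    """Check LLM-produced arguments against a parameter schema."""
    args = json.loads(params_json)
    for p in schema:
        # Required parameters must be present
        if p.get('required') and p['name'] not in args:
            raise ValueError(f"missing required parameter: {p['name']}")
        # Enum-restricted parameters, when given, must use an allowed value
        if 'enum' in p and p['name'] in args and args[p['name']] not in p['enum']:
            raise ValueError(f"{p['name']} must be one of {p['enum']}")
    return args

print(validate('{"city": "Beijing", "unit": "celsius"}', parameters))
# → {'city': 'Beijing', 'unit': 'celsius'}
```

Raising a clear error here gives the model actionable feedback it can use to retry the call with corrected arguments.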

Model-Specific Assistants

Qwen3 / Qwen3.5 Assistant

Optimized for latest Qwen models:
from qwen_agent.agents import Assistant

bot = Assistant(
    llm={
        'model': 'qwen-max',
        'generate_cfg': {
            'fncall_prompt_type': 'qwen',
            'max_retries': 3
        }
    },
    function_list=['code_interpreter', 'image_gen'],
    files=['document.pdf']
)

Qwen3-Coder Assistant

Specialized for code-related tasks:
bot = Assistant(
    llm={
        'model': 'qwen3-coder',
        'generate_cfg': {
            'fncall_prompt_type': 'qwen',
            'use_raw_api': True  # Use vLLM's built-in tool parsing
        }
    },
    function_list=['code_interpreter'],
    system_message='Expert programming assistant'
)

QwQ-32B Assistant

For reasoning-intensive tasks:
bot = Assistant(
    llm={
        'model': 'qwq-32b-preview',
        'generate_cfg': {
            'fncall_prompt_type': 'qwen',
        }
    },
    function_list=['calculator', 'code_interpreter'],
)
Supports:
  • Parallel function calls: Multiple tools executed simultaneously
  • Multi-step reasoning: Complex problem decomposition
  • Multi-turn tool use: Iterative tool calling

MCP Integration Assistant

Model Context Protocol integration for database access:
from qwen_agent.agents import Assistant

bot = Assistant(
    llm={'model': 'qwen-max'},
    function_list=['mcp_sqlite'],  # MCP server tool
    description='Database assistant using MCP'
)
MCP requires additional setup. See MCP documentation for details.
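The 'mcp_sqlite' entry above assumes such a tool has been registered; in the upstream Qwen-Agent examples, MCP servers are instead configured by passing an mcpServers mapping inside function_list. A configuration sketch along those lines (the uvx / mcp-server-sqlite launch command and the database path are assumptions about your environment):

```python
from qwen_agent.agents import Assistant

# MCP server configuration sketch: each entry names a server and the command
# used to launch it. 'mcp-server-sqlite' and 'test.db' are illustrative.
tools = [{
    'mcpServers': {
        'sqlite': {
            'command': 'uvx',
            'args': ['mcp-server-sqlite', '--db-path', 'test.db'],
        }
    }
}]

bot = Assistant(
    llm={'model': 'qwen-max'},
    function_list=tools,
    description='Database assistant using MCP',
)
```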

Configuration Options

Common LLM Parameters

llm_cfg = {
    'model': 'qwen-max',
    'model_server': 'dashscope',  # or custom URL
    'api_key': os.getenv('DASHSCOPE_API_KEY'),
    'generate_cfg': {
        'top_p': 0.8,
        'temperature': 0.7,
        'max_tokens': 2000,
        'max_retries': 3,
        'fncall_prompt_type': 'qwen',  # or 'nous'
    }
}

Assistant Parameters

bot = Assistant(
    llm=llm_cfg,
    name='Assistant Name',
    description='Brief description',
    system_message='Detailed system instructions',
    function_list=['tool1', 'tool2'],
    files=['doc1.pdf', 'doc2.txt'],
)

WebUI Configuration

chatbot_config = {
    'prompt.suggestions': [
        {'text': 'Example question 1'},
        {'text': 'Example question 2'},
    ],
    'verbose': True,
    'user_name': 'User',
    'agent_name': 'Assistant',
}

WebUI(bot, chatbot_config=chatbot_config).run(
    server_name='0.0.0.0',  # Allow external access
    server_port=7860,
    share=False,  # Set True to create public link
)

Built-in Tools

These tools are available out of the box:

  • code_interpreter: Execute Python code in a sandboxed environment. Use cases: data analysis, visualization, calculations.
  • image_gen: Generate images from text descriptions. Provider: DashScope image generation API.
  • amap_weather: Get current weather information for Chinese cities. Provider: AMap (高德地图) API.
  • calculator: Perform mathematical calculations.

Best Practices

Clear System Messages

Write specific, clear system messages that define the assistant’s role and capabilities

Appropriate Tools

Only include tools that are relevant to your assistant’s purpose

Error Handling

Configure max_retries and handle tool call failures gracefully

Context Management

Use files parameter for persistent knowledge, messages for conversation context

Troubleshooting

Tool not being called:
  • Check that the tool description is clear and relevant to the query
  • Verify the tool is in function_list
  • Try adding examples in system_message

Model or API errors:
  • Verify the API key is set correctly
  • Check rate limits
  • Ensure the model name is valid

File problems:
  • Check that the file format is supported
  • Verify the file size is reasonable
  • Ensure the file path is accessible

Next Steps

Function Calling

Deep dive into function calling patterns

Multi-Agent Systems

Build collaborative agent systems
