## Overview

Qwen-Agent ships a built-in, Gradio-based web UI that makes it easy to deploy your agents behind a chat interface. The `WebUI` class handles the conversational interface, file uploads, and response streaming for you.
## Quick Start

Here's a minimal example to deploy an agent with a GUI:

```python
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI

# Create your agent
bot = Assistant(llm={'model': 'qwen-max'})

# Deploy with WebUI
WebUI(bot).run()
```

That's it! This creates a full-featured chat interface accessible at http://localhost:7860.
## Basic Configuration

### Server Options

```python
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI

bot = Assistant(llm={'model': 'qwen-max'})

WebUI(bot).run(
    server_name='0.0.0.0',   # Allow external access
    server_port=8080,        # Custom port
    share=True,              # Create a public Gradio link
    concurrency_limit=10,    # Max concurrent users
)
```
### Chatbot Configuration

Customize the UI appearance and behavior (qwen_agent/gui/web_ui.py:32):

```python
chatbot_config = {
    'user.name': 'Alice',
    'user.avatar': 'path/to/user/avatar.png',
    'agent.avatar': 'path/to/agent/avatar.png',
    'input.placeholder': 'Type your message here...',
    'prompt.suggestions': [
        'Tell me about your capabilities',
        'Help me analyze data',
        'Generate a report',
    ],
    'verbose': False,  # Set to True to show detailed logs
}

bot = Assistant(llm={'model': 'qwen-max'})
WebUI(bot, chatbot_config=chatbot_config).run()
```
## Complete Examples

### Example 1: RAG Assistant with GUI

From the codebase (examples/assistant_rag.py:26):

```python
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI


def app_gui():
    # Define the agent with RAG capabilities
    bot = Assistant(
        llm={'model': 'qwen-plus-latest'},
        name='Assistant',
        description='Use RAG to answer questions. Supports: PDF/Word/PPT/TXT/HTML.',
    )
    # Configure the UI
    chatbot_config = {
        'prompt.suggestions': [
            {'text': 'Introduce figure one'},
            {'text': 'What does the second chapter say?'},
        ]
    }
    WebUI(bot, chatbot_config=chatbot_config).run()


if __name__ == '__main__':
    app_gui()
```
### Example 2: Code Interpreter with File Support

From the codebase (examples/react_data_analysis.py:85):

```python
import os

from qwen_agent.agents import ReActChat
from qwen_agent.gui import WebUI

ROOT_RESOURCE = os.path.join(os.path.dirname(__file__), 'resource')


def app_gui():
    bot = ReActChat(
        llm={'model': 'qwen-max'},
        function_list=['code_interpreter'],
    )
    chatbot_config = {
        'prompt.suggestions': [
            {
                'text': 'Analyze and draw a line chart',
                'files': [os.path.join(ROOT_RESOURCE, 'stock_prices.csv')],
            },
            'Draw a line graph y=x^2',
        ]
    }
    WebUI(bot, chatbot_config=chatbot_config).run()


if __name__ == '__main__':
    app_gui()
```
### Example 3: Multi-Agent Router

From the codebase (examples/multi_agent_router.py:105):

```python
from qwen_agent.agents import Assistant, ReActChat, Router
from qwen_agent.gui import WebUI


def init_agent_service():
    llm_cfg = {'model': 'qwen-max'}
    llm_cfg_vl = {'model': 'qwen-vl-max'}

    # Vision agent
    bot_vl = Assistant(
        llm=llm_cfg_vl,
        name='Multimodal Assistant',
        description='Can understand image content',
    )

    # Tool agent
    bot_tool = ReActChat(
        llm=llm_cfg,
        name='Tool Assistant',
        description='Can use drawing tools and run code',
        function_list=['image_gen', 'code_interpreter'],
    )

    # Router
    bot = Router(
        llm=llm_cfg,
        agents=[bot_vl, bot_tool],
    )
    return bot


def app_gui():
    bot = init_agent_service()
    chatbot_config = {'verbose': True}
    WebUI(bot, chatbot_config=chatbot_config).run()


if __name__ == '__main__':
    app_gui()
```
### Example 4: Custom Tool with GUI

From the codebase (examples/assistant_add_custom_tool.py:97):

```python
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI
from qwen_agent.tools.base import BaseTool, register_tool


# Define a custom tool
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    description = 'AI painting service'
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Image description',
        'required': True,
    }]

    def call(self, params: str, **kwargs) -> str:
        # Implementation...
        pass


def app_gui():
    llm_cfg = {'model': 'qwen-max'}
    system = 'You can draw pictures and process images'
    bot = Assistant(
        llm=llm_cfg,
        name='AI Painting',
        system_message=system,
        function_list=['my_image_gen', 'code_interpreter'],
    )
    chatbot_config = {
        'prompt.suggestions': [
            'Draw a cat picture',
            'Draw a cute dachshund',
            'Draw a landscape with lake, mountain, and trees',
        ]
    }
    WebUI(bot, chatbot_config=chatbot_config).run()


if __name__ == '__main__':
    app_gui()
```
## UI Features

### File Upload

The WebUI automatically handles file uploads:

```python
# Users can upload files through the UI.
# Uploaded files are automatically attached to the message.
# Supported formats include PDF, Word, PPT, images, and CSV.
bot = Assistant(
    llm={'model': 'qwen-max'},
    function_list=['code_interpreter'],  # Can process uploaded files
)
WebUI(bot).run()
```
### Streaming Responses

Responses are automatically streamed for better UX:

```python
# No configuration needed: streaming is automatic.
bot = Assistant(llm={'model': 'qwen-max'})
WebUI(bot).run()
# Text appears token by token,
# tool calls are shown in real time,
# and images are displayed as they are generated.
```
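Under the hood, a streaming agent yields successively longer snapshots of its reply, and the UI simply re-renders the latest one. A toy stdlib sketch of that contract — `fake_stream` and `render_latest` are illustrative names, not Qwen-Agent APIs:

```python
from typing import Iterator, List


def fake_stream(text: str, chunk: int = 4) -> Iterator[str]:
    """Yield successively longer prefixes, like an agent streaming a reply."""
    for end in range(chunk, len(text) + chunk, chunk):
        yield text[:end]


def render_latest(snapshots: Iterator[str]) -> List[str]:
    """Collect every snapshot; a real UI would redraw the chat bubble each time."""
    frames = []
    for snapshot in snapshots:
        frames.append(snapshot)  # in a UI: overwrite the displayed text
    return frames


frames = render_latest(fake_stream('Hello, world!'))
print(frames[-1])  # the last frame is the complete reply
```

Because each snapshot contains everything streamed so far, a renderer never has to stitch chunks together — it only displays the most recent frame.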
### Prompt Suggestions

Provide quick-start prompts for users:

```python
chatbot_config = {
    'prompt.suggestions': [
        # Simple text suggestions
        'What can you do?',
        'Help me write code',
        # A suggestion with an attached file
        {
            'text': 'Analyze this data',
            'files': ['data.csv'],
        },
        # A suggestion with multiple files
        {
            'text': 'Compare these documents',
            'files': ['doc1.pdf', 'doc2.pdf'],
        },
    ]
}
WebUI(bot, chatbot_config=chatbot_config).run()
```
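Since suggestions may be plain strings or dicts, a small helper that normalizes both forms into the dict shape can keep config code tidy. This helper is illustrative only, not part of Qwen-Agent:

```python
from typing import Dict, List, Union

Suggestion = Union[str, Dict[str, object]]


def normalize_suggestions(suggestions: List[Suggestion]) -> List[Dict[str, object]]:
    """Coerce each suggestion into the {'text': ..., 'files': [...]} shape."""
    normalized = []
    for item in suggestions:
        if isinstance(item, str):
            # Bare strings become file-less suggestions
            normalized.append({'text': item, 'files': []})
        else:
            normalized.append({'text': item['text'], 'files': list(item.get('files', []))})
    return normalized


print(normalize_suggestions([
    'What can you do?',
    {'text': 'Analyze this data', 'files': ['data.csv']},
]))
```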
### Custom Avatars

Personalize the chat interface:

```python
chatbot_config = {
    'user.name': 'Alice',
    'user.avatar': 'https://example.com/alice.png',
    'agent.avatar': 'https://example.com/bot.png',
}
WebUI(bot, chatbot_config=chatbot_config).run()
```
## Multi-Agent UI

WebUI works seamlessly with multi-agent systems.

### GroupChat UI

From the codebase (examples/group_chat_demo.py:271):

```python
from qwen_agent.agents import GroupChat
from qwen_agent.gui import WebUI

CFGS = {
    'background': 'A collaborative team',
    'agents': [
        {'name': 'Agent1', 'description': '...', 'instructions': '...'},
        {'name': 'Agent2', 'description': '...', 'instructions': '...'},
    ]
}

bot = GroupChat(agents=CFGS, llm={'model': 'qwen-max'})

# WebUI automatically handles:
# - agent selection display
# - turn-taking visualization
# - multi-turn conversations
WebUI(bot).run()
```
### Router UI

```python
from qwen_agent.agents import Router
from qwen_agent.gui import WebUI

# agent1, agent2, and agent3 are agents defined elsewhere
bot = Router(
    llm={'model': 'qwen-max'},
    agents=[agent1, agent2, agent3],
)

# With verbose enabled, the UI shows which agent is responding
chatbot_config = {'verbose': True}
WebUI(bot, chatbot_config=chatbot_config).run()
```
## Deployment Options

### Local Deployment

```python
# Default: localhost only
WebUI(bot).run()
# Accessible at http://localhost:7860
```

### Network Deployment

```python
# Allow access from other machines
WebUI(bot).run(
    server_name='0.0.0.0',
    server_port=8080,
)
# Accessible at http://<your-ip>:8080
```

### Public Deployment

```python
# Create a shareable public link (via Gradio)
WebUI(bot).run(share=True)
# Creates a link like https://xxxxx.gradio.live
```

Note that public links are temporary and expire after 72 hours. For production deployment, use a proper web server.
### Production Deployment

For production, run the app behind a reverse proxy:

```python
# app.py
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI

bot = Assistant(llm={'model': 'qwen-max'})
app = WebUI(bot)

if __name__ == '__main__':
    app.run(
        server_name='127.0.0.1',
        server_port=7860,
    )
```
Nginx configuration:

```nginx
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:7860;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
## Advanced Customization

### Custom Theme

The WebUI uses a default Gradio theme internally (qwen_agent/gui/web_ui.py:95). For full control over the look, build the interface with Gradio directly:

```python
import gradio as gr

custom_theme = gr.themes.Soft(
    primary_hue=gr.themes.utils.colors.blue,
    secondary_hue=gr.themes.utils.colors.pink,
)
# Build a custom interface with this theme
# (see the Gradio documentation for full control)
```
### Conversation State

WebUI maintains conversation state automatically:

```python
# Conversations persist across turns; no manual state management needed.
bot = Assistant(llm={'model': 'qwen-max'})
WebUI(bot).run()
# Users can hold multi-turn conversations,
# uploaded files are remembered,
# and tool states persist.
```
### Multiple Agents

Deploy several agents behind one interface:

```python
from qwen_agent.agents import Assistant
from qwen_agent.gui import WebUI

# WebUI accepts a list of agents
agents = [
    Assistant(llm={'model': 'qwen-max'}, name='General Assistant'),
    Assistant(llm={'model': 'qwen-max'}, name='Code Helper', function_list=['code_interpreter']),
    Assistant(llm={'model': 'qwen-vl-max'}, name='Vision Assistant'),
]
WebUI(agents).run()
# The UI provides an agent-selection dropdown
```
## Troubleshooting

### Port Already in Use

```text
OSError: [Errno 48] Address already in use
```

Solution: use a different port:

```python
WebUI(bot).run(server_port=8081)
```
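If any free port will do, you can ask the OS for one before launching (plain stdlib, independent of Qwen-Agent):

```python
import socket


def find_free_port() -> int:
    """Bind to port 0 so the OS assigns an unused port, then release it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(('127.0.0.1', 0))
        return s.getsockname()[1]


port = find_free_port()
print(port)
# WebUI(bot).run(server_port=port)
```

There is a small race window between releasing the probed port and the server binding it, so this is a convenience for development rather than a guarantee.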
### Gradio Not Installed

```text
ModuleNotFoundError: No module named 'gradio'
```

Solution: Install Gradio:
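The exact command depends on how you installed Qwen-Agent; installing the `gui` extra pulls in the required Gradio dependencies, or you can install Gradio directly:

```shell
pip install -U "qwen-agent[gui]"
# or install Gradio on its own
pip install -U gradio
```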
### File Upload Not Working

```python
# Ensure the agent has a tool that can consume files
bot = Assistant(
    llm={'model': 'qwen-max'},
    function_list=['code_interpreter'],  # Enables file handling
)
```
### Slow Response

```python
# 1. Use a faster model
bot = Assistant(llm={'model': 'qwen-turbo'})

# 2. Reduce concurrency
WebUI(bot).run(concurrency_limit=5)

# 3. Conversation history is limited automatically by the WebUI
```
## Best Practices

### UI Design Tips
- Provide clear prompt suggestions to guide users
- Use descriptive agent names and descriptions
- Enable verbose mode during development
- Test with various file types and sizes
- Set appropriate concurrency limits
- Use custom avatars for better UX
### Production Considerations
- Don’t expose to public internet without authentication
- Set concurrency_limit to prevent resource exhaustion
- Monitor memory usage with large file uploads
- Use HTTPS in production (via reverse proxy)
- Implement rate limiting for public deployments
- Consider adding user authentication
## Environment Variables

Configure via environment variables:

```shell
# API keys
export DASHSCOPE_API_KEY="your_key"

# Model configuration
export DEFAULT_MODEL="qwen-max"

# Code interpreter working directory
export M6_CODE_INTERPRETER_WORK_DIR="/custom/path"
```
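On the Python side, such variables are typically read with `os.getenv`, falling back to a default when unset; `DEFAULT_MODEL` here mirrors the variable exported above and is an app-level convention rather than something Qwen-Agent reads itself:

```python
import os

# Fall back to a sensible default when the variable is unset
model = os.getenv('DEFAULT_MODEL', 'qwen-max')
print(model)

# llm_cfg = {'model': model} could then be passed to Assistant(llm=llm_cfg)
```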
## Docker Deployment

Example Dockerfile:

```dockerfile
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

EXPOSE 7860
CMD ["python", "app.py"]
```
Build and run:

```shell
docker build -t qwen-agent-app .
docker run -p 7860:7860 -e DASHSCOPE_API_KEY=your_key qwen-agent-app
```
## Next Steps