Overview
Function calling enables LLMs to interact with external tools and APIs by generating structured function call requests. Qwen-Agent provides a robust function calling implementation with support for parallel execution, custom prompting, and seamless tool integration.

How Function Calling Works
The function calling workflow involves several steps:

1. Function schemas are injected into the prompt so the model knows which tools are available.
2. The LLM emits a structured function call (a name plus JSON arguments) when a tool is needed.
3. The agent parses the call, executes the matching tool, and captures the result.
4. The result is appended to the conversation as a function message.
5. The LLM continues with the new context, either calling more tools or producing a final answer.

Basic Usage
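Before diving into the API, here is the whole loop sketched end to end with a stubbed LLM and a stubbed tool. All names here are illustrative and no Qwen-Agent imports are needed; the point is the message flow, not the real model:

```python
import json

def get_weather(city: str) -> str:
    """Stand-in tool; a real tool would call an external API."""
    return json.dumps({"city": city, "temp_c": 21})

TOOLS = {"get_weather": get_weather}

def fake_llm(messages):
    """Stub LLM: requests a tool on the first turn, answers on the second."""
    if not any(m["role"] == "function" for m in messages):
        return {"role": "assistant", "content": "",
                "function_call": {"name": "get_weather",
                                  "arguments": json.dumps({"city": "Paris"})}}
    result = json.loads(messages[-1]["content"])
    return {"role": "assistant",
            "content": f"It is {result['temp_c']}°C in {result['city']}."}

def run(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    while True:
        reply = fake_llm(messages)
        messages.append(reply)
        call = reply.get("function_call")
        if not call:                               # no tool requested: done
            return reply["content"]
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)       # execute the tool
        messages.append({"role": "function", "name": call["name"],
                         "content": result})       # feed the result back

print(run("What's the weather in Paris?"))  # → It is 21°C in Paris.
```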
Enabling Function Calling
Function calling is automatically enabled when you provide tools to an agent:

qwen_agent/agents/fncall_agent.py:73-108
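A minimal sketch, assuming the qwen_agent package is installed; the model name and tool choice are placeholders. Supplying a function_list is all that is needed to turn function calling on:

```python
# Placeholder LLM config; swap in your own model and credentials.
llm_cfg = {
    "model": "qwen-max",
    "generate_cfg": {"parallel_function_calls": True},
}

def build_agent():
    # Import inside the function so the config above can be inspected
    # without qwen_agent installed.
    from qwen_agent.agents import Assistant
    # Passing tools via function_list enables function calling automatically.
    return Assistant(llm=llm_cfg, function_list=["code_interpreter"])

def ask(bot, query: str):
    messages = [{"role": "user", "content": query}]
    responses = []
    for responses in bot.run(messages=messages):  # streams growing reply lists
        pass
    return responses
```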
Manual Function Calling
You can also use function calling directly with an LLM:

qwen_agent/llm/base.py:118-176
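A sketch of the LLM-level API, again assuming qwen_agent is installed and with a placeholder model config. The schema follows the standard JSON-Schema style used for function declarations:

```python
# A function schema the model can call; names and fields are illustrative.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
        },
        "required": ["location"],
    },
}]

def call_llm():
    from qwen_agent.llm import get_chat_model
    llm = get_chat_model({"model": "qwen-max"})   # placeholder config
    messages = [{"role": "user", "content": "What's the weather in Paris?"}]
    # The returned assistant messages may carry a function_call field,
    # signalling that the model wants a tool executed.
    return llm.chat(messages=messages, functions=functions, stream=False)
```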
Function Call Configuration
Generate Config Parameters
parallel_function_calls
Enable parallel execution of multiple function calls in a single response. When enabled, the LLM can request multiple independent function calls simultaneously.

function_choice
Control when functions are called:
- 'auto' - Model decides whether to call functions
- 'none' - Disable function calling (functions still in context)
- function_name - Force a call to the named function

thought_in_content
Include the model's reasoning in the content field alongside function calls.

fncall_prompt_type
Choose the function calling prompt format:
- 'nous' - Nous Research format (default)
- 'qwen' - Qwen-specific format
qwen_agent/llm/function_calling.py:25-39
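Taken together, these options sit inside the generate_cfg block of the LLM config. An illustrative fragment (the model name is a placeholder; key names follow Qwen-Agent's function calling options):

```python
cfg = {
    "model": "qwen-max",                    # placeholder model
    "generate_cfg": {
        "parallel_function_calls": True,    # allow several calls per turn
        "function_choice": "auto",          # 'auto', 'none', or a function name
        "thought_in_content": False,        # keep reasoning out of content
        "fncall_prompt_type": "nous",       # 'nous' (default) or 'qwen'
    },
}
```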
Parallel Function Calls
Parallel function calling allows the LLM to request multiple independent function executions in a single response:

Message Format for Parallel Calls
With parallel function calls, the response contains multiple assistant messages with function calls:

qwen_agent/llm/function_calling.py:59-65
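An illustrative transcript shape (the tool name and arguments are made up): one assistant message per requested call, then one function message per result, in the same order:

```python
# Two independent calls requested in a single model turn.
parallel_response = [
    {"role": "assistant", "content": "",
     "function_call": {"name": "get_weather",
                       "arguments": '{"city": "Paris"}'}},
    {"role": "assistant", "content": "",
     "function_call": {"name": "get_weather",
                       "arguments": '{"city": "Tokyo"}'}},
]

# After executing both tools, one function message per result is appended.
tool_results = [
    {"role": "function", "name": "get_weather", "content": '{"temp_c": 21}'},
    {"role": "function", "name": "get_weather", "content": '{"temp_c": 27}'},
]
```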
Function Calling Modes
Auto Mode (Default)
The LLM decides when to use functions based on the user's request:

None Mode
Disable function calling while keeping function context:

qwen_agent/llm/base.py:201-209
Forced Mode
Force the LLM to call a specific function:

Function Prompting
Qwen-Agent uses specialized prompt templates to guide function calling:

Nous Format (Default)
The Nous format is compatible with most models:

Qwen Format
Optimized for Qwen models:

qwen_agent/llm/function_calling.py:27-39
Message Processing
Preprocessing
Before sending to the LLM, messages are preprocessed to inject function schemas:

qwen_agent/llm/function_calling.py:41-66
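A much-simplified sketch of the idea; the actual template in qwen_agent/llm/function_calling.py is richer, and the wording below is illustrative only:

```python
import json

def inject_functions(messages, functions):
    """Prepend a system message that lists the available tool schemas."""
    tools_text = "\n".join(json.dumps(f) for f in functions)
    system = ("You may call the following tools. Emit a <tool_call> block "
              "when a tool is needed.\n<tools>\n" + tools_text + "\n</tools>")
    return [{"role": "system", "content": system}] + list(messages)

msgs = inject_functions(
    [{"role": "user", "content": "Weather in Paris?"}],
    [{"name": "get_weather", "parameters": {"type": "object"}}],
)
```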
Postprocessing
LLM outputs are parsed to extract function calls:

qwen_agent/llm/function_calling.py:68-82
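For the Nous-style format, the model wraps each call in `<tool_call>` tags. A simplified extraction sketch (the library's parser handles many more edge cases):

```python
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text):
    """Pull {'name', 'arguments'} pairs out of raw model output."""
    calls = []
    for raw in TOOL_CALL_RE.findall(text):
        obj = json.loads(raw)
        calls.append({"name": obj["name"],
                      "arguments": json.dumps(obj.get("arguments", {}))})
    return calls

out = ('Sure.\n<tool_call>\n'
       '{"name": "get_weather", "arguments": {"city": "Paris"}}\n'
       '</tool_call>')
calls = extract_tool_calls(out)
# calls == [{'name': 'get_weather', 'arguments': '{"city": "Paris"}'}]
```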
Tool Detection
Agents use _detect_tool to identify function calls in LLM responses:
qwen_agent/agent.py:239-259
Iterative Tool Use
Agents can use tools iteratively to solve complex tasks. Each run is bounded by MAX_LLM_CALL_PER_RUN iterations.
Source Reference: qwen_agent/agents/fncall_agent.py:73-108
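The iteration cap can be sketched as follows. llm_step and run_tool are illustrative stubs, and the constant's value here is for illustration only:

```python
MAX_LLM_CALL_PER_RUN = 10  # illustrative cap on LLM calls per run

def agent_run(messages, llm_step, run_tool):
    """Tool-use loop with a hard stop to prevent runaway iteration."""
    for _ in range(MAX_LLM_CALL_PER_RUN):
        reply = llm_step(messages)
        messages.append(reply)
        if "function_call" not in reply:
            return messages                  # model produced a final answer
        result = run_tool(reply["function_call"])
        messages.append({"role": "function",
                         "name": reply["function_call"]["name"],
                         "content": result})
    return messages                          # cap reached; stop anyway

# A stub model that always asks for a tool never escapes the cap:
always_call = lambda msgs: {"role": "assistant", "content": "",
                            "function_call": {"name": "noop",
                                              "arguments": "{}"}}
history = agent_run([{"role": "user", "content": "hi"}],
                    always_call, lambda call: "ok")
# 1 user message + 10 iterations x (assistant + function) = 21 messages
```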
Advanced Patterns
Conditional Function Calling
Custom Function Results
Tools can return multimodal results:

qwen_agent/agent.py:205-210
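In Qwen-Agent a tool's call method may return a plain string or a list of content items. The dicts below only mirror that shape (text / image fields); the tool name and URL are placeholders:

```python
def chart_tool(params: str):
    """Hypothetical tool returning an image plus a caption."""
    return [
        {"image": "https://example.com/chart.png"},  # rendered chart (placeholder)
        {"text": "Revenue by month, Q1."},           # accompanying caption
    ]

result = chart_tool("{}")
```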
Thought in Content
Include reasoning alongside function calls:

Error Handling
Tool Errors
qwen_agent/agent.py:178-210
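One common pattern, sketched with stubs: catch tool failures and hand an informative message back to the model as the function result, so the LLM can retry or change approach instead of the run crashing:

```python
import json

def safe_call(tool, args: dict) -> str:
    """Run a tool; on failure, return the error as the function content."""
    try:
        return tool(**args)
    except Exception as exc:
        # The error text becomes the function message the LLM sees.
        return json.dumps({"error": f"{type(exc).__name__}: {exc}"})

def flaky(city: str) -> str:
    raise TimeoutError("weather service unreachable")

out = safe_call(flaky, {"city": "Paris"})
# out == '{"error": "TimeoutError: weather service unreachable"}'
```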
Best Practices
Function Descriptions
- Write clear, detailed function descriptions
- Specify parameter types and constraints precisely
- Include examples in descriptions when helpful
- Keep function names descriptive and unambiguous
Parallel Execution
- Enable parallel calls for independent operations
- Ensure tools are thread-safe if used in parallel
- Consider rate limits when parallelizing API calls
- Test parallel behavior thoroughly
Error Recovery
- Return informative error messages to the LLM
- Let the LLM try alternative approaches
- Use ToolServiceError for expected failures
- Log errors for debugging
Performance
- Set appropriate tool timeouts
- Use function_choice='none' for non-tool queries
- Monitor iteration counts
- Cache expensive tool results
Debugging
Related Resources
Tools
Learn how to create and configure tools
Agents
Understand agent architecture and workflows
LLM Configuration
Configure LLM parameters for optimal function calling