Overview
BaseChatModel is the abstract base class for all Large Language Model implementations in Qwen-Agent. It provides a unified interface for chat completions with support for function calling, streaming, and multimodal inputs.
Class Signature
Constructor Parameters
Configuration dictionary with the following options:
- `model` (str): Model name/identifier
- `api_key` (str): API key for the model service
- `model_server` (str): Model server endpoint
- `model_type` (str): Type of model (e.g., 'qwen_dashscope', 'oai')
- `generate_cfg` (dict): Generation configuration:
  - `max_retries` (int): Maximum number of retries on failure
  - `max_input_tokens` (int): Maximum input context length
  - `seed` (int): Random seed for generation
  - `temperature` (float): Sampling temperature
  - `top_p` (float): Nucleus sampling parameter
- `cache_dir` (str): Directory for caching responses
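The configuration above can be sketched as a plain dict. The `get_chat_model` factory shown in the comment is how Qwen-Agent typically constructs a concrete model from such a dict; the endpoint URL and key values here are placeholders, and the exact set of accepted keys may vary between Qwen-Agent versions.

```python
# Example BaseChatModel configuration, built from the parameters above.
# Values are placeholders; adjust for your deployment.
cfg = {
    'model': 'qwen-max',
    'model_type': 'qwen_dashscope',      # or 'oai' for OpenAI-compatible servers
    'api_key': 'YOUR_API_KEY',           # placeholder
    'model_server': 'http://localhost:8000/v1',  # only needed for self-hosted servers
    'generate_cfg': {
        'max_retries': 3,
        'max_input_tokens': 6000,
        'seed': 1234,
        'temperature': 0.7,
        'top_p': 0.8,
    },
}

# Typical instantiation (requires qwen_agent installed and a reachable server):
# from qwen_agent.llm import get_chat_model
# llm = get_chat_model(cfg)
```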
Properties
Whether the model supports multimodal input (images, audio, video)
Whether the model generates multimodal outputs beyond text
Whether the model supports audio input
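These capability flags let callers adapt inputs to the model. The attribute names used below (`support_multimodal_input`, `support_multimodal_output`, `support_audio_input`) are assumptions for illustration; check your Qwen-Agent version for the actual property names. A dummy class stands in for a real model here.

```python
# Hypothetical capability check; property names are assumptions,
# and DummyModel stands in for a concrete BaseChatModel subclass.
class DummyModel:
    support_multimodal_input = True    # accepts images/audio/video in messages
    support_multimodal_output = False  # emits text only
    support_audio_input = False

def build_content(model, text, image_url=None):
    """Attach an image only when the model can consume multimodal input."""
    if image_url and getattr(model, 'support_multimodal_input', False):
        return [{'text': text}, {'image': image_url}]
    return text
```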
Methods
chat
Input messages for the conversation
List of functions available for function calling (OpenAI format)
Whether to use streaming generation
Whether to stream chunked responses (deprecated)
- `False` (recommended): Stream the full response on every iteration
- `True`: Stream delta responses
Extra generation hyperparameters:
- `temperature` (float): Sampling temperature
- `top_p` (float): Nucleus sampling
- `max_tokens` (int): Maximum tokens to generate
- `stop` (List[str]): Stop sequences
- `function_choice` (str): 'auto', 'none', or a function name
Generated message list or iterator of message lists
quick_chat
User prompt
Model’s text response
Abstract Methods
Subclasses must implement:
- `_chat_stream`
- `_chat_no_stream`
- `_chat_with_functions`
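A toy subclass illustrates the contract these three hooks fill. The signatures below are simplified (the real `BaseChatModel` in `qwen_agent.llm.base` uses a `Message` class and additional keyword arguments); the echo backend here exists only to show where each hook's logic goes.

```python
# Sketch of the subclass contract, with simplified signatures.
from typing import Dict, Iterator, List

Message = Dict[str, str]  # simplified; Qwen-Agent uses a Message class

class EchoChatModel:
    """Toy backend: echoes the last user message."""

    def _chat_no_stream(self, messages: List[Message], generate_cfg: Dict) -> List[Message]:
        # Return the complete response in one shot.
        return [{'role': 'assistant', 'content': messages[-1]['content']}]

    def _chat_stream(self, messages: List[Message], delta_stream: bool,
                     generate_cfg: Dict) -> Iterator[List[Message]]:
        # Yield progressively longer responses (the delta_stream=False style).
        text = messages[-1]['content']
        for i in range(1, len(text) + 1):
            yield [{'role': 'assistant', 'content': text[:i]}]

    def _chat_with_functions(self, messages: List[Message], functions: List[Dict],
                             stream: bool, generate_cfg: Dict) -> List[Message]:
        # A real backend would translate `functions` into the provider's
        # tool-calling format; the toy version ignores them.
        return self._chat_no_stream(messages, generate_cfg)
```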
Usage Example
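A minimal usage sketch, assuming the `get_chat_model` factory from `qwen_agent.llm`. The actual `chat` calls require a configured model service, so they are shown commented out; the message format is the plain role/content shape.

```python
# Basic usage sketch; the live calls are commented out because they
# need an API key and a reachable model server.
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'What is the capital of France?'},
]

# from qwen_agent.llm import get_chat_model
# llm = get_chat_model({'model': 'qwen-max', 'model_type': 'qwen_dashscope'})
#
# # Non-streaming: returns the final message list directly.
# response = llm.chat(messages=messages, stream=False)
#
# # Streaming: iterate; with delta_stream=False each item is the
# # full response generated so far.
# for partial in llm.chat(messages=messages, stream=True):
#     print(partial[-1]['content'])
```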
Function Calling Example
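The sketch below builds an OpenAI-format function schema, the shape `chat(functions=...)` expects. The weather function is hypothetical, and the tool-call handling in the comments assumes the response message exposes a `function_call` field; verify the field name against your Qwen-Agent version.

```python
# Function calling sketch; the network call is commented out.
import json

functions = [{
    'name': 'get_current_weather',          # hypothetical tool
    'description': 'Get the current weather for a city.',
    'parameters': {
        'type': 'object',
        'properties': {
            'city': {'type': 'string', 'description': 'City name, e.g. Paris'},
        },
        'required': ['city'],
    },
}]

messages = [{'role': 'user', 'content': "What's the weather in Paris?"}]

# response = llm.chat(messages=messages, functions=functions, stream=False)
# msg = response[-1]
# if msg.get('function_call'):                        # model chose to call a tool
#     args = json.loads(msg['function_call']['arguments'])
#     # run the tool, append a {'role': 'function', ...} message, then chat again
```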
Error Handling
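Since `BaseChatModel` already retries internally up to `max_retries`, application-level handling usually only needs to catch backend failures and optionally retry with backoff. The exception name `ModelServiceError` is assumed here; the generic helper below works with any callable.

```python
# Retry-with-backoff sketch. In real code, narrow the except clause to
# the library's exception (assumed: qwen_agent.llm.base.ModelServiceError).
import time

def chat_with_retry(call, retries=3, backoff=1.0):
    """Run `call()` (e.g. lambda: llm.chat(messages=msgs)), retrying on failure."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:                         # narrow this in real code
            if attempt == retries - 1:
                raise                             # out of attempts; re-raise
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```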
See Also
- Qwen DashScope - Qwen models via DashScope
- OpenAI - OpenAI compatible models
- Qwen VL - Multimodal vision-language models