Overview
The DeerFlowClient provides methods to query and update configuration, manage skills, handle file uploads, and access agent-produced artifacts.
Models
list_models()
List available models from configuration.
```python
result = client.list_models()
```

Returns: List of model info dictionaries. Fields include:
- display_name: Human-readable model name.
- supports_thinking: Whether model supports extended thinking mode.
- supports_reasoning_effort: Whether model supports reasoning effort parameter.
Example
```python
result = client.list_models()
for model in result["models"]:
    print(f"{model['name']}: {model['display_name']}")
```
get_model(name)
Get a specific model’s configuration by name.
```python
model = client.get_model("gpt-4")
if model:
    print(model["supports_thinking"])
```

Returns: Model info dict, or None if not found.
Skills
list_skills(enabled_only)
List available skills.
Parameters:
- enabled_only: If True, only return enabled skills.

```python
result = client.list_skills(enabled_only=True)
```

Returns: List of skill info dictionaries. Fields include:
- Skill license (e.g., "MIT").
- Skill category ("public", "custom", etc.).
- enabled: Whether skill is currently enabled.
get_skill(name)
Get a specific skill by name.
```python
skill = client.get_skill("web-search")
if skill:
    print(f"Enabled: {skill['enabled']}")
```
update_skill(name, enabled)
Update a skill’s enabled status.
```python
result = client.update_skill("web-search", enabled=False)
print(f"Skill now: {result['enabled']}")
```
Calls reset_agent() internally - the agent will be recreated on next use.
Raises:
ValueError - If skill not found
OSError - If config file cannot be written
install_skill(skill_path)
Install a skill from a .skill archive (ZIP).
```python
result = client.install_skill("/path/to/my-skill.skill")
print(f"Installed: {result['skill_name']}")
```

Returns a dict with:
- Whether installation succeeded.
- skill_name: Name of the installed skill.
- Installation status message.
Raises:
FileNotFoundError - If file does not exist
ValueError - If file is invalid or skill already exists
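A .skill archive is just a ZIP, so the checks behind the two exceptions above can be sketched in plain Python. This is an illustrative guess at the flow, not DeerFlow's actual implementation; the `SKILL.md` content and `skills_dir` layout used below are assumptions.

```python
# Hypothetical sketch: validate a .skill archive (a ZIP) and unpack it
# into a skills directory, raising the same exceptions install_skill()
# documents. Not DeerFlow's real code.
import zipfile
from pathlib import Path

def install_skill_archive(skill_path: str, skills_dir: str) -> str:
    archive = Path(skill_path)
    if not archive.exists():
        raise FileNotFoundError(f"No such file: {archive}")
    if not zipfile.is_zipfile(archive):
        raise ValueError(f"Not a valid ZIP archive: {archive}")

    skill_name = archive.stem          # "my-skill.skill" -> "my-skill"
    target = Path(skills_dir) / skill_name
    if target.exists():
        raise ValueError(f"Skill already exists: {skill_name}")

    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return skill_name
```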
Memory
get_memory()
Get current memory data.
```python
memory = client.get_memory()
print(f"Version: {memory['version']}")
print(f"Facts: {len(memory['facts'])}")
```
Memory data dict with version and facts keys.
reload_memory()
Reload memory data from file, forcing cache invalidation.
```python
refreshed = client.reload_memory()
print(f"Reloaded {len(refreshed['facts'])} facts")
```
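The cache-then-reload behavior implied here can be sketched with a minimal stand-in. The `CachedMemory` class below is illustrative only (DeerFlow's internals may differ): reads hit an in-process cache until an explicit reload drops it and re-reads the file.

```python
# Minimal sketch of the pattern reload_memory() implies: cached reads,
# with reload forcing cache invalidation. Not DeerFlow's actual code.
import json
from pathlib import Path

class CachedMemory:
    def __init__(self, path: str):
        self.path = Path(path)
        self._cache = None

    def get(self) -> dict:
        if self._cache is None:               # read file only on a cache miss
            self._cache = json.loads(self.path.read_text())
        return self._cache

    def reload(self) -> dict:
        self._cache = None                    # invalidate, then re-read
        return self.get()
```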
get_memory_config()
Get memory system configuration.
```python
config = client.get_memory_config()
print(f"Enabled: {config['enabled']}")
print(f"Max facts: {config['max_facts']}")
```

Returns a config dict with fields including:
- enabled: Whether memory system is enabled.
- Path to memory storage file.
- Debounce delay before writing updates.
- max_facts: Maximum number of facts to store.
- fact_confidence_threshold: Minimum confidence for fact extraction.
- Whether memory injection is enabled.
- Maximum tokens for memory injection.
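The "debounce delay before writing updates" setting can be illustrated with a small sketch: each update restarts a countdown, and the file is written only once updates go quiet. This is a generic debounce pattern, not DeerFlow's implementation.

```python
# Illustrative debounced writer: repeated updates within the delay
# window collapse into one write of the latest data.
import json
import threading
from pathlib import Path

class DebouncedWriter:
    def __init__(self, path: str, delay: float):
        self.path, self.delay = Path(path), delay
        self._timer = None
        self._pending = None

    def update(self, data: dict) -> None:
        self._pending = data
        if self._timer is not None:
            self._timer.cancel()              # restart the countdown
        self._timer = threading.Timer(self.delay, self._flush)
        self._timer.start()

    def _flush(self) -> None:
        self.path.write_text(json.dumps(self._pending))
```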
get_memory_status()
Get memory status: config + current data.
```python
status = client.get_memory_status()
print(status["config"]["enabled"])
print(len(status["data"]["facts"]))
```
MCP Configuration
get_mcp_config()
Get MCP server configurations.
```python
result = client.get_mcp_config()
for server_name, config in result["mcp_servers"].items():
    print(f"{server_name}: enabled={config['enabled']}")
```
Dict mapping server name to config.
update_mcp_config(mcp_servers)
Update MCP server configurations.
Dict mapping server name to config dict. Each value should contain keys like enabled, type, command, args, env, url, etc.
```python
result = client.update_mcp_config({
    "github": {
        "enabled": True,
        "type": "stdio",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"]
    }
})
```
Writes to extensions_config.json and calls reset_agent() - the agent will be recreated on next use.
Raises:
FileNotFoundError - If config file cannot be located
OSError - If config file cannot be written
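The write behavior described above can be sketched as a read-merge-write of the JSON file. The merge logic (updating the `mcp_servers` section in place) is an assumption; only the file name and the two exceptions come from the text.

```python
# Hedged sketch of what update_mcp_config() might do on disk: merge the
# given servers into the "mcp_servers" section of extensions_config.json.
import json
from pathlib import Path

def write_mcp_config(config_path: str, mcp_servers: dict) -> dict:
    path = Path(config_path)
    if not path.exists():
        raise FileNotFoundError(f"Config file not found: {path}")
    config = json.loads(path.read_text())
    # merge new/updated servers without dropping existing ones
    config.setdefault("mcp_servers", {}).update(mcp_servers)
    path.write_text(json.dumps(config, indent=2))
    return config["mcp_servers"]
```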
File Uploads
upload_files(thread_id, files)
Upload local files into a thread’s uploads directory.
Parameters:
- files: List of local file paths to upload.
```python
result = client.upload_files("my-thread", [
    "/path/to/document.pdf",
    "/path/to/data.csv"
])
print(f"Uploaded {len(result['files'])} files")
```
PDF, PPT, Excel, and Word files are also converted to Markdown automatically.
Returns a dict with:
- Whether upload succeeded.
- files: List of uploaded file info, each with:
  - Virtual path for agent (/mnt/user-data/uploads/…).
  - URL to download the file via Gateway API.
  - Markdown conversion filename (if applicable).
  - Virtual path for the markdown file.
  - URL to download markdown via Gateway API.
Raises:
FileNotFoundError - If any file does not exist
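The Markdown-conversion rule above can be expressed as a suffix check. The exact extension set DeerFlow recognizes is an assumption inferred from the file types listed (PDF, PPT, Excel, Word); treat this helper as illustrative.

```python
# Illustrative helper: would this upload get a Markdown companion file?
# The extension set is an assumption, not DeerFlow's confirmed list.
from pathlib import Path

CONVERTIBLE = {".pdf", ".ppt", ".pptx", ".xls", ".xlsx", ".doc", ".docx"}

def converts_to_markdown(filename: str) -> bool:
    return Path(filename).suffix.lower() in CONVERTIBLE
```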
list_uploads(thread_id)
List files in a thread’s uploads directory.
```python
result = client.list_uploads("my-thread")
for file in result["files"]:
    print(f"{file['filename']}: {file['size']} bytes")
```
delete_upload(thread_id, filename)
Delete a file from a thread’s uploads directory.
```python
result = client.delete_upload("my-thread", "document.pdf")
print(result["message"])
```
Raises:
FileNotFoundError - If file does not exist
PermissionError - If path traversal is detected
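The PermissionError above implies a path-traversal guard: the requested filename is resolved against the uploads directory, and anything that escapes it is refused. A minimal sketch of such a check (illustrative, not the client's actual code):

```python
# Resolve the requested name against the uploads directory and raise
# PermissionError if the result escapes it (e.g., "../outside.txt").
from pathlib import Path

def safe_upload_path(uploads_dir: str, filename: str) -> Path:
    base = Path(uploads_dir).resolve()
    target = (base / filename).resolve()
    if not target.is_relative_to(base):   # escaped the uploads dir
        raise PermissionError(f"Path traversal detected: {filename}")
    return target
```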
Artifacts
get_artifact(thread_id, path)
Read an artifact file produced by the agent.
Virtual path (e.g., “mnt/user-data/outputs/file.txt”).
```python
import json

content, mime_type = client.get_artifact(
    "my-thread",
    "mnt/user-data/outputs/result.json"
)
data = json.loads(content)
print(f"MIME type: {mime_type}")
```

Returns: Tuple of (file_bytes, mime_type).
Raises:
FileNotFoundError - If artifact does not exist
ValueError - If path is invalid
PermissionError - If path traversal is detected
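One plausible way the returned mime_type could be derived is from the artifact's file extension, as the stdlib mimetypes module does. DeerFlow may use a different mechanism; this is a stand-in sketch.

```python
# Guess a MIME type from an artifact path, falling back to a generic
# binary type for unknown extensions. Illustrative only.
import mimetypes

def guess_artifact_mime(path: str) -> str:
    mime, _encoding = mimetypes.guess_type(path)
    return mime or "application/octet-stream"  # fallback for unknown types
```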
Agent Management
reset_agent()
Force the internal agent to be recreated on the next call.
Use this after:
External changes to memory
Skill installations
Configuration updates that should be reflected in the system prompt or tool set
This is automatically called by update_skill() and update_mcp_config().
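The "recreated on next use" behavior is the classic lazy-initialization pattern, sketched below with a hypothetical holder class (not DeerFlow's internals): the agent is built on demand, and reset simply drops the cached instance so the next call rebuilds it with fresh configuration.

```python
# Lazy recreation sketch: reset drops the cached agent; the factory
# runs again on the next get_agent() call.
class LazyAgentHolder:
    def __init__(self, factory):
        self._factory = factory
        self._agent = None

    def get_agent(self):
        if self._agent is None:          # (re)build on first use after reset
            self._agent = self._factory()
        return self._agent

    def reset_agent(self) -> None:
        self._agent = None               # recreated on next get_agent()
```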
Complete Example
```python
from src.client import DeerFlowClient
from pathlib import Path

# Initialize
client = DeerFlowClient()

# Query configuration
models = client.list_models()
print(f"Available models: {[m['name'] for m in models['models']]}")

skills = client.list_skills(enabled_only=True)
print(f"Enabled skills: {[s['name'] for s in skills['skills']]}")

# Memory status
status = client.get_memory_status()
print(f"Memory enabled: {status['config']['enabled']}")
print(f"Facts stored: {len(status['data']['facts'])}")

# Upload files
thread_id = "analysis-session"
result = client.upload_files(thread_id, [
    Path("data.csv"),
    Path("report.pdf")
])
print(f"Uploaded: {[f['filename'] for f in result['files']]}")

# Run analysis
for event in client.stream(
    "Analyze the data.csv file and create a summary",
    thread_id=thread_id
):
    if event.type == "messages-tuple" and event.data.get("type") == "ai":
        if content := event.data.get("content"):
            print(f"AI: {content}")

# Get artifact
try:
    content, mime = client.get_artifact(
        thread_id,
        "mnt/user-data/outputs/summary.txt"
    )
    print(f"Summary: {content.decode()}")
except FileNotFoundError:
    print("No summary generated")

# List uploaded files
uploads = client.list_uploads(thread_id)
print(f"Files in thread: {[f['filename'] for f in uploads['files']]}")

# Cleanup
client.delete_upload(thread_id, "data.csv")
print("Cleaned up temporary files")
```
See Also
Chat - For simple request/response
Streaming - For real-time events
Overview - For initialization options