Overview
Flowise supports three types of assistants:
- Custom Assistant: Build using your choice of LLMs and tools
- OpenAI Assistant: Leverage OpenAI’s Assistant API with built-in features
- Azure Assistant: Coming soon for Azure OpenAI deployments
Assistant Types
Custom Assistant
Create assistants using any supported LLM and customize every aspect.
Features:
- Choose from multiple LLM providers (OpenAI, Anthropic, Cohere, etc.)
- Full control over prompt engineering
- Custom tool integration
- Flexible conversation memory
- Cost-effective with open-source models
Use cases:
- Domain-specific assistants
- Multi-provider deployments
- Custom business logic
- Budget-conscious applications
OpenAI Assistant
Build assistants using OpenAI’s Assistant API.
Features:
- Built-in function calling
- Code interpreter for data analysis
- File search capabilities
- Multi-turn conversations with context
- Thread management
- Vector store integration
Supported models:
- gpt-4o and gpt-4o-mini
- gpt-4-turbo and gpt-4-turbo-preview
- gpt-4
- gpt-3.5-turbo
Use cases:
- Customer support with knowledge base
- Code generation and debugging
- Data analysis and visualization
- Document processing and Q&A
Creating a Custom Assistant
Available LLMs:
├─ OpenAI (GPT-4, GPT-3.5)
├─ Anthropic (Claude)
├─ Google (PaLM, Gemini)
├─ Cohere
├─ HuggingFace
└─ Local Models (Ollama, LM Studio)
Example instructions:
You are a helpful customer support assistant for TechCorp.
Your responsibilities:
- Answer questions about products and services
- Help troubleshoot common issues
- Escalate complex problems to human agents
- Always be professional and empathetic
Knowledge base:
- Product catalog
- Return policy (30 days)
- Shipping information
- Technical documentation
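The instructions above can be wired into any chat-model call as a system prompt. A minimal sketch, assuming an OpenAI-compatible API; the model name, condensed prompt, and environment variable are illustrative, not Flowise requirements:

```python
# Sketch: using the example instructions as a system prompt for a custom
# assistant. The prompt is condensed from the full example above.
import os

SYSTEM_PROMPT = (
    "You are a helpful customer support assistant for TechCorp. "
    "Answer questions about products and services, escalate complex "
    "problems to human agents, and always be professional and empathetic."
)

def build_messages(user_input, history=None):
    """Assemble the message list: system prompt, prior turns, new input."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_messages("What is your return policy?")

if os.environ.get("OPENAI_API_KEY"):  # only call out when a key is configured
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)
```

Swapping the model string is all it takes to point the same prompt at a different provider's OpenAI-compatible endpoint.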
Creating an OpenAI Assistant
{
"name": "Support Bot",
"description": "Helps customers with product questions",
"model": "gpt-4.1",
"instructions": "You are a friendly and knowledgeable support agent..."
}
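This configuration maps directly onto the official openai Python SDK's `client.beta.assistants.create` call. A sketch that only contacts the API when a key is configured:

```python
# Sketch: creating the assistant defined above with the openai Python SDK.
import json
import os

config = json.loads("""
{
  "name": "Support Bot",
  "description": "Helps customers with product questions",
  "model": "gpt-4.1",
  "instructions": "You are a friendly and knowledgeable support agent..."
}
""")

if os.environ.get("OPENAI_API_KEY"):  # skip the network call without a key
    from openai import OpenAI
    client = OpenAI()
    assistant = client.beta.assistants.create(**config)
    print(assistant.id)
```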
Example function definition:
{
"name": "get_order_status",
"description": "Retrieves the current status of a customer order",
"parameters": {
"type": "object",
"properties": {
"order_id": {
"type": "string",
"description": "The unique order identifier"
}
},
"required": ["order_id"]
}
}
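The assistant decides when to call `get_order_status`, but your application still has to execute it and return the result. A sketch of a handler with a mock order store (the store and dispatch helper are illustrative):

```python
# Sketch of the application-side handler for the get_order_status function
# defined above. ORDERS is mock data; in practice this would query your
# order system.
import json

ORDERS = {"A-1001": "shipped", "A-1002": "processing"}  # mock data

def get_order_status(order_id):
    return {"order_id": order_id, "status": ORDERS.get(order_id, "not found")}

def handle_tool_call(name, arguments_json):
    """Dispatch a tool call by name and return a JSON string for the model."""
    args = json.loads(arguments_json)
    if name == "get_order_status":
        return json.dumps(get_order_status(args["order_id"]))
    raise ValueError(f"unknown tool: {name}")

print(handle_tool_call("get_order_status", '{"order_id": "A-1001"}'))
# → {"order_id": "A-1001", "status": "shipped"}
```

Validating `order_id` before the lookup (per the security notes later in this page) is where parameter validation would slot in.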
File upload purposes:
- assistants: For file search
- code_interpreter: For code analysis
Vector stores enable semantic search across your documents, providing accurate retrieval for the assistant.
{
"temperature": 0.7,
"top_p": 1.0,
"max_prompt_tokens": 4096,
"max_completion_tokens": 2048,
"metadata": {
"version": "1.0",
"department": "customer_support"
}
}
Using Assistants in Chatflows
Integrate assistants into your Flowise chatflows:
Custom Assistant
- Add Custom Assistant node to canvas
- Select your saved assistant
- Connect to chat interface
- Configure additional settings
OpenAI Assistant
- Add OpenAI Assistant node to canvas
- Select assistant from your OpenAI account
- Configure thread management
- Connect to chat interface
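Once a chatflow is saved, it can be queried over Flowise's prediction REST endpoint. A sketch using only the standard library; the base URL and chatflow ID are placeholders, and the request is only sent when explicitly enabled:

```python
# Sketch: querying a chatflow that contains an assistant node via
# POST /api/v1/prediction/{chatflowId}.
import json
import os
from urllib import request

FLOWISE_URL = os.environ.get("FLOWISE_URL", "http://localhost:3000")
CHATFLOW_ID = "your-chatflow-id"  # placeholder

def build_request(question):
    """Build the prediction request; body carries the user question."""
    url = f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}"
    data = json.dumps({"question": question}).encode()
    return request.Request(url, data=data,
                           headers={"Content-Type": "application/json"})

req = build_request("What is your return policy?")

if os.environ.get("FLOWISE_RUN"):  # set this to actually send the request
    with request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```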
Thread Management (OpenAI Assistants)
OpenAI Assistants use threads to maintain conversation context.
Create Thread
Threads are created automatically when a user starts chatting.
Send Message
Send messages to a thread.
Retrieve Thread History
Get all messages in a thread.
Assistant API
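The thread lifecycle can be sketched with the openai Python SDK: create a thread, add a user message, run the assistant, then read the messages back. The calls only run when an API key and assistant ID are configured, and the `transcript` helper is illustrative:

```python
# Sketch of the thread lifecycle for an OpenAI Assistant.
import os

def transcript(turns):
    """Format (role, text) pairs into a readable log."""
    return "\n".join(f"{role}: {text}" for role, text in turns)

if os.environ.get("OPENAI_API_KEY") and os.environ.get("ASSISTANT_ID"):
    from openai import OpenAI
    client = OpenAI()
    thread = client.beta.threads.create()                # Create Thread
    client.beta.threads.messages.create(                 # Send Message
        thread_id=thread.id, role="user", content="Where is my order?"
    )
    client.beta.threads.runs.create_and_poll(            # run the assistant
        thread_id=thread.id, assistant_id=os.environ["ASSISTANT_ID"]
    )
    msgs = client.beta.threads.messages.list(thread_id=thread.id)  # history
    print(transcript((m.role, m.content[0].text.value) for m in msgs))
```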
Manage assistants programmatically:
Create Custom Assistant
Create OpenAI Assistant
List Assistants
Delete Assistant
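These operations can be driven over Flowise's REST API. The endpoint paths and auth header below are assumptions for illustration; confirm the actual routes against the Assistants API Reference:

```python
# Sketch: building list/delete requests for assistant management.
# Paths and the bearer-token header are assumed, not confirmed.
import json
import os
from urllib import request

BASE = os.environ.get("FLOWISE_URL", "http://localhost:3000")
API_KEY = os.environ.get("FLOWISE_API_KEY", "")

def api(method, path, body=None):
    """Build an authenticated JSON request against the Flowise API."""
    data = json.dumps(body).encode() if body is not None else None
    return request.Request(
        f"{BASE}{path}", data=data, method=method,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )

list_req = api("GET", "/api/v1/assistants")              # List Assistants
delete_req = api("DELETE", "/api/v1/assistants/<id>")    # Delete Assistant
```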
For complete API documentation, see the Assistants API Reference.
Best Practices
Instruction Writing
Write clear, specific instructions.
Tool Selection
Choose tools that match your assistant’s purpose:
- Customer Support: Knowledge base search, ticket creation
- Sales: Product catalog, pricing calculator, CRM integration
- Technical: Code execution, API documentation, debugging tools
File Organization
For file search assistants:
- Use descriptive file names
- Organize by topic or category
- Keep files updated
- Remove outdated information
Performance Optimization
- Use appropriate model for task complexity
- Limit file sizes for faster processing
- Cache frequent queries
- Monitor token usage and costs
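Caching frequent queries can be as simple as memoizing identical questions so repeated requests skip the LLM call. A standard-library sketch; real deployments may want TTLs or a shared cache:

```python
# Memoize identical questions with functools.lru_cache. The counter stands
# in for the expensive LLM request being avoided.
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=256)
def answer(question):
    calls["count"] += 1            # stands in for an expensive LLM request
    return f"answer to: {question}"

answer("What is the return policy?")
answer("What is the return policy?")   # served from cache
print(calls["count"])  # → 1
```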
Security
- Never include sensitive data in instructions
- Validate function call parameters
- Use credential management for API keys
- Implement rate limiting
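Rate limiting can be sketched as a token bucket: each request spends a token, and tokens refill at a fixed rate. This is illustrative only; production setups usually enforce limits at the gateway or per API key:

```python
# Minimal token-bucket rate limiter sketch.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self):
        """Refill tokens for elapsed time, then spend one if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)   # 1 request/second, burst of 2
print([bucket.allow() for _ in range(3)])    # → [True, True, False]
```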
Troubleshooting
Assistant Not Responding
- Check API key validity
- Verify model availability
- Review error logs
- Test with simple query
Incorrect Function Calls
- Review function definitions
- Improve instruction clarity
- Add examples to instructions
- Validate parameter schemas
File Search Not Working
- Verify files are uploaded
- Check vector store attachment
- Ensure file format is supported
- Review chunking configuration
High Costs
- Monitor token usage per request
- Optimize instructions length
- Use appropriate model tier
- Implement caching strategies
