Flowise Assistants allow you to create sophisticated AI agents with specific instructions, access to tools, and the ability to process files. Choose between custom assistants with your preferred LLMs or OpenAI’s Assistant API for enhanced capabilities.

Overview

Flowise supports the following assistant types:
  • Custom Assistant: Build using your choice of LLMs and tools
  • OpenAI Assistant: Leverage OpenAI’s Assistant API with built-in features
  • Azure Assistant: Coming soon for Azure OpenAI deployments

Assistant Types

Custom Assistant

Create assistants using any supported LLM and customize every aspect.

Features:
  • Choose from multiple LLM providers (OpenAI, Anthropic, Cohere, etc.)
  • Full control over prompt engineering
  • Custom tool integration
  • Flexible conversation memory
  • Cost-effective with open-source models
Use Cases:
  • Domain-specific assistants
  • Multi-provider deployments
  • Custom business logic
  • Budget-conscious applications

OpenAI Assistant

Build assistants using OpenAI’s Assistant API.

Features:
  • Built-in function calling
  • Code interpreter for data analysis
  • File search capabilities
  • Multi-turn conversations with context
  • Thread management
  • Vector store integration
Supported Models:
  • gpt-4o and gpt-4o-mini
  • gpt-4-turbo and gpt-4-turbo-preview
  • gpt-4
  • gpt-3.5-turbo
Use Cases:
  • Customer support with knowledge base
  • Code generation and debugging
  • Data analysis and visualization
  • Document processing and Q&A

Creating a Custom Assistant

1. Navigate to Assistants
   • Go to Assistants from the main menu
   • Click on the Custom Assistant card

2. Select LLM
   Choose your language model:

   Available LLMs:
   ├─ OpenAI (GPT-4, GPT-3.5)
   ├─ Anthropic (Claude)
   ├─ Google (PaLM, Gemini)
   ├─ Cohere
   ├─ HuggingFace
   └─ Local Models (Ollama, LM Studio)
    
3. Set Instructions
   Define the assistant’s behavior and personality:

   You are a helpful customer support assistant for TechCorp.

   Your responsibilities:
   - Answer questions about products and services
   - Help troubleshoot common issues
   - Escalate complex problems to human agents
   - Always be professional and empathetic

   Knowledge base:
   - Product catalog
   - Return policy (30 days)
   - Shipping information
   - Technical documentation
    
4. Add Tools
   Equip your assistant with capabilities:
   • Search Tools: Web search, document search
   • API Tools: REST API calls, database queries
   • Utility Tools: Calculator, date/time, code execution
   • Custom Tools: Your own function implementations

5. Configure Memory
   Choose how the assistant remembers conversations:
   • Buffer Memory: Store recent messages
   • Summary Memory: Summarize long conversations
   • Window Memory: Keep fixed number of messages
   • Entity Memory: Track important entities
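
As an illustration, the Window Memory strategy amounts to keeping a bounded list of recent messages. The `WindowMemory` class below is a hypothetical sketch of that idea, not Flowise's implementation:

```python
from collections import deque

class WindowMemory:
    """Keep only the last k messages (a sketch of the Window Memory strategy)."""

    def __init__(self, k: int = 4):
        # A deque with maxlen discards the oldest message automatically
        self.messages = deque(maxlen=k)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def context(self) -> list:
        # The messages passed to the LLM on the next turn
        return list(self.messages)

memory = WindowMemory(k=4)
for i in range(6):
    memory.add("user", f"message {i}")
print(len(memory.context()))  # → 4: only the latest four messages survive
```

Buffer Memory would drop the `maxlen` bound; Summary Memory would replace evicted messages with an LLM-generated summary instead of discarding them.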

6. Test Assistant
   Use the preview panel to test your assistant:
   • Send sample queries
   • Verify tool usage
   • Check response quality
   • Adjust configuration as needed

7. Save and Deploy
   Save your assistant configuration and deploy it to your application.
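
Once deployed, the assistant's chatflow can be queried over HTTP. A minimal sketch against Flowise's Prediction API endpoint (`/api/v1/prediction/{chatflowId}`); the localhost URL and `<chatflow-id>` are placeholders for your own deployment:

```python
import requests

def prediction_url(base_url: str, chatflow_id: str) -> str:
    # Each deployed chatflow is served at the Prediction API endpoint
    return f"{base_url}/api/v1/prediction/{chatflow_id}"

def ask(base_url: str, chatflow_id: str, question: str) -> str:
    response = requests.post(
        prediction_url(base_url, chatflow_id),
        json={"question": question},
    )
    response.raise_for_status()
    return response.json().get("text", "")

if __name__ == "__main__":
    # Placeholder host and chatflow ID -- replace with your own
    print(ask("http://localhost:3000", "<chatflow-id>", "What is your return policy?"))
```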

Creating an OpenAI Assistant

1. Navigate to Assistants
   • Go to Assistants from the main menu
   • Click on the OpenAI Assistant card

2. Create New Assistant
   Click Create New Assistant or load an existing one from OpenAI.

3. Configure Assistant Settings
   Basic Information:

   {
     "name": "Support Bot",
     "description": "Helps customers with product questions",
     "model": "gpt-4.1",
     "instructions": "You are a friendly and knowledgeable support agent..."
   }
    
4. Enable Capabilities

   Code Interpreter
   Enable for data analysis and visualization:
   • Upload CSV, JSON, or Excel files
   • Generate charts and graphs
   • Perform statistical analysis
   • Execute Python code

   File Search
   Enable for document Q&A:
   • Upload PDFs, text files, or documents
   • Create vector stores automatically
   • Semantic search across documents
   • Citation support

   Function Calling
   Define custom functions:

   {
     "name": "get_order_status",
     "description": "Retrieves the current status of a customer order",
     "parameters": {
       "type": "object",
       "properties": {
         "order_id": {
           "type": "string",
           "description": "The unique order identifier"
         }
       },
       "required": ["order_id"]
     }
   }
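
When the assistant decides to call `get_order_status`, your application receives the function name and JSON-encoded arguments and must execute the call itself. A sketch of that dispatch step, with an in-memory `orders` table standing in for a real data source:

```python
import json

def get_order_status(order_id: str) -> dict:
    # Hypothetical lookup table standing in for a real database or API
    orders = {"A1001": "shipped", "A1002": "processing"}
    return {"order_id": order_id, "status": orders.get(order_id, "not_found")}

# Map function names (as declared in the assistant's schema) to handlers
TOOL_HANDLERS = {"get_order_status": get_order_status}

def handle_tool_call(name: str, arguments_json: str) -> str:
    """Run the requested function and return its result as a JSON string."""
    handler = TOOL_HANDLERS[name]
    arguments = json.loads(arguments_json)
    return json.dumps(handler(**arguments))

print(handle_tool_call("get_order_status", '{"order_id": "A1001"}'))
# → {"order_id": "A1001", "status": "shipped"}
```

The returned JSON string is what you submit back to the assistant so it can compose its final reply.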
    

5. Add Files and Vector Stores

   Upload Files
   • Click Add Files in the assistant configuration
   • Select files from your computer
   • Choose file purpose:
     • assistants: For file search
     • code_interpreter: For code analysis

   Configure Vector Store
   For file search functionality:
   • Create a new vector store or select existing one
   • Add files to the vector store
   • Configure chunking strategy
   • Attach vector store to assistant

   Vector stores enable semantic search across your documents, providing accurate retrieval for the assistant.

6. Set Advanced Options

   {
     "temperature": 0.7,
     "top_p": 1.0,
     "max_prompt_tokens": 4096,
     "max_completion_tokens": 2048,
     "metadata": {
       "version": "1.0",
       "department": "customer_support"
     }
   }
    
7. Test Assistant
   Use the built-in chat interface to test:
   • Send queries related to uploaded files
   • Test function calling
   • Verify code interpreter output
   • Check response quality

8. Save Assistant
   Click Save to create or update the assistant in OpenAI.

Using Assistants in Chatflows

Integrate assistants into your Flowise chatflows:

Custom Assistant

1. Add Custom Assistant node to canvas
2. Select your saved assistant
3. Connect to chat interface
4. Configure additional settings

OpenAI Assistant

1. Add OpenAI Assistant node to canvas
2. Select assistant from your OpenAI account
3. Configure thread management
4. Connect to chat interface

[User Input] → [OpenAI Assistant] → [Response]
                      │
              [Vector Store / Tools]
    

Thread Management (OpenAI Assistants)

OpenAI Assistants use threads to maintain conversation context:

Create Thread

Threads are created automatically when a user starts chatting:

import requests

API_URL = "http://localhost:3000/api/v1/openai-assistants"

payload = {
    "assistantId": "asst_abc123"
}

response = requests.post(f"{API_URL}/thread", json=payload)
thread = response.json()
print(f"Thread ID: {thread['id']}")

Send Message

Send messages to a thread:

message_payload = {
    "threadId": thread['id'],
    "message": "What is your return policy?"
}

response = requests.post(
    f"{API_URL}/thread/message",
    json=message_payload
)
print(response.json())

Retrieve Thread History

Get all messages in a thread:

history = requests.get(f"{API_URL}/thread/{thread['id']}/messages")
for message in history.json():
    print(f"{message['role']}: {message['content']}")
    

Assistant API

Manage assistants programmatically:

Create Custom Assistant

curl -X POST http://localhost:3000/api/v1/assistants/custom \
  -H "Authorization: Bearer <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Support Assistant",
    "instructions": "You are a helpful support agent",
    "llmConfig": {
      "model": "gpt-4",
      "temperature": 0.7
    },
    "tools": ["web_search", "calculator"]
  }'

Create OpenAI Assistant

curl -X POST http://localhost:3000/api/v1/assistants/openai \
  -H "Authorization: Bearer <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Code Helper",
    "model": "gpt-4.1",
    "instructions": "You help with coding questions",
    "tools": [{"type": "code_interpreter"}]
  }'

List Assistants

curl -X GET http://localhost:3000/api/v1/assistants \
  -H "Authorization: Bearer <your_api_key>"

Delete Assistant

curl -X DELETE http://localhost:3000/api/v1/assistants/{assistantId} \
  -H "Authorization: Bearer <your_api_key>"

For complete API documentation, see the Assistants API Reference.

Best Practices

Instruction Writing

Write clear, specific instructions:

✅ Good:
"You are a technical support assistant for SaaS products.
Always ask clarifying questions before troubleshooting.
Provide step-by-step solutions.
Escalate to human agents when users are frustrated."

❌ Bad:
"You help with technical stuff."
    

Tool Selection

Choose tools that match your assistant’s purpose:
• Customer Support: Knowledge base search, ticket creation
• Sales: Product catalog, pricing calculator, CRM integration
• Technical: Code execution, API documentation, debugging tools

File Organization

For file search assistants:
• Use descriptive file names
• Organize by topic or category
• Keep files updated
• Remove outdated information

Performance Optimization

• Use appropriate model for task complexity
• Limit file sizes for faster processing
• Cache frequent queries
• Monitor token usage and costs

Security

• Never include sensitive data in instructions
• Validate function call parameters
• Use credential management for API keys
• Implement rate limiting
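
For example, "validate function call parameters" can be enforced before dispatching a call: check the model's arguments against the declared schema. A minimal hand-rolled check, shown as a sketch rather than a full JSON Schema validator:

```python
def validate_params(schema: dict, params: dict) -> list:
    """Check assistant-supplied arguments against a function-calling schema (sketch)."""
    errors = []
    props = schema.get("properties", {})
    type_map = {"string": str, "integer": int, "number": (int, float),
                "boolean": bool, "object": dict, "array": list}
    for name in schema.get("required", []):
        if name not in params:
            errors.append(f"missing required parameter: {name}")
    for name, value in params.items():
        if name not in props:
            errors.append(f"unexpected parameter: {name}")
        elif not isinstance(value, type_map.get(props[name].get("type"), object)):
            errors.append(f"wrong type for parameter: {name}")
    return errors

# Schema matching the get_order_status example above
schema = {
    "type": "object",
    "properties": {"order_id": {"type": "string"}},
    "required": ["order_id"],
}
print(validate_params(schema, {"order_id": "A1001"}))  # → []
print(validate_params(schema, {"order_id": 42}))       # → ['wrong type for parameter: order_id']
```

A production system would typically use a real JSON Schema library instead, but the principle is the same: reject the call before it reaches your backend.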

Troubleshooting

Assistant Not Responding

• Check API key validity
• Verify model availability
• Review error logs
• Test with a simple query

Incorrect Function Calls

• Review function definitions
• Improve instruction clarity
• Add examples to instructions
• Validate parameter schemas

File Search Not Working

• Verify files are uploaded
• Check vector store attachment
• Ensure file format is supported
• Review chunking configuration

High Costs

• Monitor token usage per request
• Optimize instruction length
• Use appropriate model tier
• Implement caching strategies
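
A caching strategy can start as simple memoization of identical questions. In this sketch, `assistant_call` is a stand-in for a real billed request:

```python
from functools import lru_cache

CALLS = {"count": 0}  # tracks how many real (billed) requests are made

def assistant_call(question: str) -> str:
    # Stand-in for an actual LLM request that costs tokens
    CALLS["count"] += 1
    return f"answer to: {question}"

@lru_cache(maxsize=256)
def cached_assistant_call(question: str) -> str:
    """Repeated identical questions hit the cache instead of the model."""
    return assistant_call(question)

cached_assistant_call("What is your return policy?")
cached_assistant_call("What is your return policy?")
print(CALLS["count"])  # → 1: the second call was served from cache
```

Exact-match caching only helps with verbatim repeats; fuzzier schemes (normalizing whitespace, semantic caching) can raise the hit rate at the cost of occasional stale answers.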
