
Introduction

The Flowise API provides programmatic access to all core functionality available in the Flowise GUI. You can create, manage, and execute chatflows, work with documents, and interact with your AI assistants.

Base URL

All API requests should be made to:
http://localhost:3000/api/v1
Replace localhost:3000 with your Flowise server’s host and port if different.

API Versioning

The Flowise API is versioned through the URL path. The current version is v1.
  • Current version: v1
  • Full endpoint format: http://localhost:3000/api/v1/{endpoint}

Response Format

All API responses are returned in JSON format with appropriate HTTP status codes.

Success Response

Successful requests return a 200-level status code with the requested data:
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "name": "My Chatflow",
  "flowData": "{...}",
  "createdDate": "2026-03-04T12:00:00.000Z",
  "updatedDate": "2026-03-04T12:00:00.000Z"
}

Error Response

Error responses include an appropriate HTTP status code and error details:
{
  "error": "Chatflow not found. Please verify the chatflow ID."
}

Common HTTP Status Codes

Status Code   Description
200           Success - Request completed successfully
400           Bad Request - Invalid input or malformed request
401           Unauthorized - Authentication required or API key invalid
404           Not Found - Resource does not exist
413           Payload Too Large - Request payload exceeds size limits
422           Validation Error - Request validation failed
500           Internal Server Error - Server-side error occurred
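As a minimal sketch, client code can map these codes to descriptions and pull the message out of the "error" field shown in the Error Response section above (the helper names are illustrative, not part of the Flowise API):

```python
import json

def describe_status(status: int) -> str:
    """Map a Flowise HTTP status code to its short description."""
    codes = {
        200: "Success",
        400: "Bad Request",
        401: "Unauthorized",
        404: "Not Found",
        413: "Payload Too Large",
        422: "Validation Error",
        500: "Internal Server Error",
    }
    return codes.get(status, "Unknown status")

def parse_error(body: str) -> str:
    """Extract the message from an error response body like {"error": "..."}."""
    return json.loads(body).get("error", "")
```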

Core API Resources

The Flowise API provides endpoints for the following resources:

Chatflows

Manage your AI chatflows and agent flows:
  • Create, update, and delete chatflows
  • List all chatflows
  • Retrieve specific chatflow details
  • Get chatflow by API key
Base endpoint: /api/v1/chatflows
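For example, listing chatflows from Python might look like the sketch below. The endpoint path comes from this page; the Bearer header follows the curl examples further down, and `endpoint` is a hypothetical helper, not part of the Flowise API:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"

def endpoint(resource: str) -> str:
    """Build a full endpoint URL from a resource path (hypothetical helper)."""
    return f"{BASE_URL}/{resource.lstrip('/')}"

def list_chatflows(api_key: str):
    """GET /api/v1/chatflows and return the decoded JSON list."""
    req = urllib.request.Request(
        endpoint("chatflows"),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```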

Predictions

Interact with your chatflows to get AI-generated responses:
  • Send messages to chatflows
  • Upload files for processing
  • Stream responses in real-time
  • Manage conversation history
Base endpoint: /api/v1/prediction/{chatflowId}
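A minimal Python sketch of a prediction call, assuming the JSON body and Bearer authentication shown in the curl examples later on this page (the function names are illustrative):

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"

def build_prediction_request(chatflow_id: str, question: str, api_key: str):
    """Return the (url, body, headers) triple for a prediction call."""
    url = f"{BASE_URL}/prediction/{chatflow_id}"
    body = json.dumps({"question": question}).encode()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return url, body, headers

def predict(chatflow_id: str, question: str, api_key: str):
    """POST a question to a chatflow and return the decoded JSON response."""
    url, body, headers = build_prediction_request(chatflow_id, question, api_key)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```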

Assistants

Manage OpenAI-compatible assistants:
  • Create and configure assistants
  • Update assistant settings
  • List and retrieve assistants
  • Delete assistants
Base endpoint: /api/v1/assistants

Document Store

Manage document stores for RAG (Retrieval Augmented Generation):
  • Create and manage document stores
  • Upsert documents and chunks
  • Query vector stores
  • Manage embeddings
Base endpoint: /api/v1/document-store

Chat Messages

Access and manage conversation history:
  • Retrieve chat messages
  • Filter by date, session, or type
  • Delete message history
Base endpoint: /api/v1/chatmessage/{chatflowId}
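Filters are passed as query parameters on this endpoint. The sketch below builds such a URL; the specific parameter names (e.g. `sessionId`) are illustrative assumptions, not confirmed by this page:

```python
import urllib.parse

BASE_URL = "http://localhost:3000/api/v1"

def chat_messages_url(chatflow_id: str, **filters) -> str:
    """Build a chat message URL with optional filter query parameters.

    Filter names such as sessionId are assumptions for illustration only;
    see the API Reference for the actual supported parameters.
    """
    url = f"{BASE_URL}/chatmessage/{chatflow_id}"
    if filters:
        url += "?" + urllib.parse.urlencode(filters)
    return url
```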

Tools

Manage custom tools for your agents:
  • Create custom tools
  • Update tool configurations
  • List available tools
  • Delete tools
Base endpoint: /api/v1/tools

Variables

Manage global and chatflow-specific variables:
  • Create and update variables
  • List all variables
  • Delete variables
Base endpoint: /api/v1/variables

Feedback

Collect and manage user feedback:
  • Create feedback entries
  • Retrieve feedback for chatflows
  • Update feedback status
Base endpoint: /api/v1/feedback

Making Your First Request

Here’s a simple example to check if your server is running:
curl http://localhost:3000/api/v1/ping
Expected response:
pong
The /ping endpoint does not require authentication and is useful for health checks.
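The same health check can be scripted; a small sketch using only the standard library (the function name is illustrative):

```python
import urllib.request

def is_alive(base_url: str = "http://localhost:3000/api/v1") -> bool:
    """Return True if the unauthenticated /ping endpoint answers 'pong'."""
    try:
        with urllib.request.urlopen(f"{base_url}/ping", timeout=5) as resp:
            return resp.read().decode().strip() == "pong"
    except OSError:
        # Connection refused, DNS failure, timeout, etc.
        return False
```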

Rate Limiting

Rate limiting may be applied to prediction endpoints depending on your server configuration. If you encounter rate limit errors, reduce your request frequency or contact your administrator.

File Uploads

Several endpoints support file uploads using multipart/form-data:
  • Prediction endpoint: Upload images, audio, documents
  • Document store: Upload documents for processing
  • Attachments: Upload files for chat context
curl -X POST http://localhost:3000/api/v1/prediction/{chatflowId} \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "question=Analyze this image" \
  -F "files=@/path/to/image.jpg"

Streaming Responses

For real-time streaming responses, use the streaming endpoint:
curl -X POST http://localhost:3000/api/v1/prediction/{chatflowId} \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Tell me a story",
    "streaming": true
  }'
When streaming is enabled, the response will be sent as Server-Sent Events (SSE).
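A client consuming that stream needs to split the SSE frames on blank lines and strip the `data:` prefixes. A minimal parser sketch (a real client would feed it lines read from the streaming HTTP response):

```python
def parse_sse(stream_lines):
    """Collect SSE 'data:' payloads; a blank line terminates each event."""
    events, buffer = [], []
    for line in stream_lines:
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            events.append("\n".join(buffer))
            buffer = []
    if buffer:  # flush a final event with no trailing blank line
        events.append("\n".join(buffer))
    return events
```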

Next Steps

Authentication

Learn how to authenticate your API requests

API Reference

Explore detailed endpoint documentation
