POST /api/v1/prediction/{id}
Predict (Chat)
curl --request POST \
  --url https://api.example.com/api/v1/prediction/{id} \
  --header 'Content-Type: application/json' \
  --data '
{
  "question": "<string>",
  "chatId": "<string>",
  "streaming": true,
  "overrideConfig": {},
  "history": [
    {}
  ],
  "uploads": [
    {}
  ]
}
'
{
  "text": "<string>",
  "question": "<string>",
  "chatId": "<string>",
  "chatMessageId": "<string>",
  "sessionId": "<string>",
  "memoryType": "<string>",
  "followUpPrompts": [
    {}
  ],
  "sourceDocuments": [
    {}
  ]
}

Authentication

This endpoint can be accessed with an API key or without authentication if the chatflow is public.
This is the primary endpoint for interacting with deployed chatflows. It handles message processing, streaming responses, file uploads, and session management.

Path Parameters

id
string
required
The unique identifier (UUID) of the chatflow to send the message to

Request Body

question
string
required
The user’s input message or question to send to the chatflow
chatId
string
Session ID for maintaining conversation context. If not provided, a new session will be created automatically
streaming
boolean
default:"false"
Enable server-sent events (SSE) streaming for real-time response. Set to true for streaming responses
overrideConfig
object
Override chatflow configuration for this specific request
  • sessionId - Override the session ID
  • Other configuration parameters specific to your chatflow nodes
history
array
Array of previous message objects to provide context:
[
  {"role": "user", "content": "Hello"},
  {"role": "assistant", "content": "Hi! How can I help?"}
]
uploads
array
Array of file objects for file upload support (when chatflow supports file uploads)
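Putting the fields above together, here is a minimal Python sketch that assembles a request body. The `build_request_body` helper is illustrative, not part of any SDK, and any `overrideConfig` keys beyond `sessionId` depend on your chatflow's nodes:

```python
def build_request_body(question, chat_id=None, streaming=False,
                       override_config=None, history=None, uploads=None):
    """Assemble a prediction request body, omitting unset optional fields."""
    body = {"question": question, "streaming": streaming}
    if chat_id:
        body["chatId"] = chat_id
    if override_config:
        body["overrideConfig"] = override_config
    if history:
        body["history"] = history
    if uploads:
        body["uploads"] = uploads
    return body

body = build_request_body(
    "What are your business hours?",
    chat_id="session-abc123",
    override_config={"sessionId": "session-abc123"},
    history=[
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
    ],
)
```

Only `question` is required; the helper leaves out anything you don't set, so the server applies its defaults.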

File Uploads

For chatflows that support file uploads, use multipart/form-data encoding:
  • Include files in the files field
  • Other parameters can be included as form fields

Response (Non-Streaming)

text
string
The chatflow’s text response to the user’s question
question
string
The original question that was sent
chatId
string
Session ID for this conversation
chatMessageId
string
Unique identifier for this message exchange
sessionId
string
Session identifier (may be same as chatId)
memoryType
string
Type of memory being used (e.g., “bufferMemory”)
followUpPrompts
array
Suggested follow-up questions or prompts
sourceDocuments
array
Source documents used to generate the response (for RAG applications)

Response (Streaming)

When streaming: true, the response uses Server-Sent Events (SSE) format:
event: token
data: {"token": "Hello"}

event: token
data: {"token": " there"}

event: metadata
data: {"chatId": "...", "chatMessageId": "...", "sessionId": "..."}
Streaming responses send tokens incrementally as they’re generated, providing a better user experience for longer responses.
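The SSE payload shown above needs no extra dependencies to parse. This Python sketch (the `parse_sse` helper is illustrative) splits a raw payload into `(event, data)` pairs, assuming the `event:`/`data:` line pairs shown above:

```python
import json

def parse_sse(raw: str):
    """Split a raw SSE payload into (event_type, parsed_data) tuples."""
    events = []
    event_type = None
    for line in raw.splitlines():
        if line.startswith("event: "):
            event_type = line[len("event: "):]
        elif line.startswith("data: "):
            # Each data line carries a JSON object, per the format above
            events.append((event_type, json.loads(line[len("data: "):])))
    return events

raw = (
    "event: token\ndata: {\"token\": \"Hello\"}\n\n"
    "event: token\ndata: {\"token\": \" there\"}\n\n"
    "event: metadata\ndata: {\"chatId\": \"abc\"}\n"
)
events = parse_sse(raw)
```

Concatenating the `token` events reconstructs the full response text; the final `metadata` event carries the session identifiers.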

Example Request (Non-Streaming)

cURL
curl -X POST "https://your-flowise-instance.com/api/v1/prediction/123e4567-e89b-12d3-a456-426614174000" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What are your business hours?",
    "chatId": "session-abc123"
  }'
JavaScript
const chatflowId = '123e4567-e89b-12d3-a456-426614174000';
const response = await fetch(`https://your-flowise-instance.com/api/v1/prediction/${chatflowId}`, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    question: 'What are your business hours?',
    chatId: 'session-abc123'
  })
});
const result = await response.json();
Python
import requests

chatflow_id = '123e4567-e89b-12d3-a456-426614174000'
response = requests.post(
    f'https://your-flowise-instance.com/api/v1/prediction/{chatflow_id}',
    headers={
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json'
    },
    json={
        'question': 'What are your business hours?',
        'chatId': 'session-abc123'
    }
)
result = response.json()

Example Request (Streaming)

cURL
curl -X POST "https://your-flowise-instance.com/api/v1/prediction/123e4567-e89b-12d3-a456-426614174000" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "question": "Tell me a story",
    "streaming": true
  }'
JavaScript
const chatflowId = '123e4567-e89b-12d3-a456-426614174000';
const response = await fetch(`https://your-flowise-instance.com/api/v1/prediction/${chatflowId}`, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    question: 'Tell me a story',
    streaming: true
  })
});

// Handle streaming response
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  
  const chunk = decoder.decode(value, { stream: true });
  // Note: a chunk may end mid-event; buffer lines until a blank line
  // separates complete "event:"/"data:" pairs before parsing
  console.log(chunk);
}
Python
import requests
import json

chatflow_id = '123e4567-e89b-12d3-a456-426614174000'
response = requests.post(
    f'https://your-flowise-instance.com/api/v1/prediction/{chatflow_id}',
    headers={
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json'
    },
    json={
        'question': 'Tell me a story',
        'streaming': True
    },
    stream=True
)

# Handle streaming response: SSE messages arrive as an
# "event: <type>" line followed by a "data: <json>" line
event_type = None
for line in response.iter_lines():
    if not line:
        continue
    decoded_line = line.decode('utf-8')
    if decoded_line.startswith('event: '):
        event_type = decoded_line[7:]
    elif decoded_line.startswith('data: '):
        data = json.loads(decoded_line[6:])
        if event_type == 'token':
            print(data.get('token', ''), end='', flush=True)

Example Response (Non-Streaming)

{
  "text": "Our business hours are Monday to Friday, 9 AM to 5 PM EST.",
  "question": "What are your business hours?",
  "chatId": "session-abc123",
  "chatMessageId": "msg-xyz789",
  "sessionId": "session-abc123",
  "memoryType": "bufferMemory",
  "followUpPrompts": [
    "What holidays are you closed?",
    "Do you have weekend hours?"
  ],
  "sourceDocuments": []
}

Example Request (With File Upload)

cURL
curl -X POST "https://your-flowise-instance.com/api/v1/prediction/123e4567-e89b-12d3-a456-426614174000" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "files=@document.pdf" \
  -F "question=What does this document say?" \
  -F "chatId=session-abc123"
JavaScript
const chatflowId = '123e4567-e89b-12d3-a456-426614174000';
const formData = new FormData();
formData.append('files', fileInput.files[0]);
formData.append('question', 'What does this document say?');
formData.append('chatId', 'session-abc123');

const response = await fetch(`https://your-flowise-instance.com/api/v1/prediction/${chatflowId}`, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: formData
});

Error Responses

401
error
Unauthorized - Invalid or missing API key for a private chatflow
403
error
Forbidden - Origin not allowed to access this chatflow
{
  "message": "This site is not allowed to access this chatbot"
}
This occurs when:
  • The request origin is not in the chatflow’s allowedOrigins list
  • CORS restrictions are enforced
404
error
Not Found - Chatflow with the specified ID does not exist
412
error
Precondition Failed - Required parameters missing
429
error
Too Many Requests - Rate limit exceeded for this chatflow
500
error
Internal Server Error - An error occurred while processing the prediction

Origin Restrictions

Chatflows can be configured with allowedOrigins in their chatbotConfig. If configured, only requests from those origins will be accepted. This is useful for restricting which websites can embed your chatbot.
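As an illustration only (the exact chatbotConfig shape may differ between Flowise versions), an allowedOrigins entry might look like:

```json
{
  "allowedOrigins": ["https://www.example.com", "https://app.example.com"]
}
```

Requests whose Origin header is not in this list receive the 403 response shown above.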

Rate Limiting

Each chatflow can have custom rate limiting rules configured. If you exceed the rate limit, you’ll receive a 429 error. Check with your administrator for specific rate limit settings.
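A client that receives a 429 typically waits and retries. Here is a minimal exponential-backoff sketch; the base and cap values are illustrative, not dictated by the server:

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based), doubling each time up to `cap`."""
    return min(cap, base * (2 ** attempt))

# Delays for the first four retries: 1.0, 2.0, 4.0, 8.0 seconds
delays = [backoff_delay(n) for n in range(4)]
```

If the server includes a Retry-After header on the 429, prefer that value over a computed delay.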

Streaming Availability

Not all chatflows support streaming. Streaming is disabled if:
  • The chatflow has post-processing enabled
  • The chatflow uses custom function ending nodes
  • The underlying model doesn’t support streaming
Agent flows (AGENTFLOW, MULTIAGENT) always support streaming.

Session Management

Use the same chatId across multiple requests to maintain conversation context. If you don’t provide a chatId, a new one will be generated automatically. Store this ID to continue the conversation later.
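The pattern above can be sketched as a small helper (illustrative, not part of any SDK) that reuses the chatId returned by one prediction when building the next request:

```python
def continue_conversation(previous_response: dict, next_question: str) -> dict:
    """Build a follow-up request body that keeps the same conversation context."""
    # Reuse the chatId the server returned so memory carries over
    return {"question": next_question, "chatId": previous_response["chatId"]}

first_response = {"text": "Hi! How can I help?", "chatId": "session-abc123"}
follow_up = continue_conversation(first_response, "What are your business hours?")
```

Persist the chatId (e.g., in your own database keyed by user) if the conversation should survive restarts of your client.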
