This guide will help you install Flowise, start the server, and create your first working chatflow.

Prerequisites

Before you begin, ensure you have:
  • Node.js version 18.15.0 or higher (any 18.x from 18.15.0 up to, but not including, 19.0.0; or version 20 and above)
  • npm package manager (comes with Node.js)
  • An API key from an LLM provider (OpenAI, Anthropic, etc.)
Download Node.js from nodejs.org if you don’t have it installed.

Install and Start Flowise

Step 1: Install Flowise globally

Open your terminal and run:
npm install -g flowise
This installs the Flowise CLI globally on your system.
Step 2: Start the Flowise server

Launch Flowise with a single command:
npx flowise start
You should see output indicating the server is running:
╔════════════════════════════════════════════╗
║              Flowise started               ║
╚════════════════════════════════════════════╝

➜ Local:   http://localhost:3000
The server starts on port 3000 by default. To use a different port, set the PORT environment variable: PORT=8080 npx flowise start
Step 3: Open the Flowise UI

Navigate to http://localhost:3000 in your web browser. You’ll see the Flowise canvas interface where you can build workflows.

Create Your First Chatflow

Now let’s build a simple conversational chatbot.
Step 1: Create a new chatflow

  1. Click the “Add New” button in the top navigation
  2. Select “Chatflow” from the options
  3. You’ll be taken to an empty canvas
Step 2: Add a Chat Model node

  1. Click the “+” button and search for “ChatOpenAI”
  2. Drag the ChatOpenAI node onto the canvas
  3. Click on the node to configure it
  4. Enter your OpenAI API key in the Connect Credential section
Typical model settings:
{
  "model": "gpt-3.5-turbo",
  "temperature": 0.7,
  "maxTokens": 1000
}
Keep your API keys secure. In production, use environment variables instead of hardcoding keys.
Step 3: Add a Conversation Chain node

  1. Search for “Conversation Chain” in the node panel
  2. Drag it onto the canvas
  3. Connect the ChatOpenAI node output to the Conversation Chain node input
The connection should automatically snap when you drag from one node to another.
Step 4: Configure memory (optional)

To give your chatbot memory of previous messages:
  1. Search for “Buffer Memory”
  2. Drag it onto the canvas
  3. Connect it to the Memory input of the Conversation Chain
Memory allows the chatbot to remember context from previous messages in the conversation.
Step 5: Save and test your chatflow

  1. Click the “Save” button in the top right
  2. Give your chatflow a name (e.g., “My First Chatbot”)
  3. Click the “Chat” button to open the test interface
  4. Type a message and press Enter to chat with your AI!

Example Chatflow Configuration

Here’s what your first chatflow should look like:
BufferMemory ──┐
               ├──> Conversation Chain
ChatOpenAI ────┘
Node Details:

ChatOpenAI
  • Model Name: gpt-3.5-turbo or gpt-4
  • Temperature: 0.7 (controls randomness, range 0–2)
  • Max Tokens: 1000 (maximum response length)
  • OpenAI API Key: your API key from OpenAI

Conversation Chain
  • Language Model: connected to the ChatOpenAI node
  • Memory: connected to the Buffer Memory node (optional)
  • System Message: optional custom system prompt

Buffer Memory
  • Memory Key: chat_history (default)
  • Session ID: auto-generated per conversation
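Because Buffer Memory keys conversations by session ID, you can pin a conversation to a fixed session when calling the chatflow over the API by passing an override in the request body. A minimal sketch in Python, assuming Flowise’s `overrideConfig` / `sessionId` request options (verify against your Flowise version’s API docs):

```python
import json

def build_payload(question, session_id=None):
    """Build a Flowise prediction request body.

    "overrideConfig" and "sessionId" are assumptions based on common
    Flowise usage; check your version's API documentation.
    """
    payload = {"question": question}
    if session_id is not None:
        # Reusing the same session_id keeps Buffer Memory's
        # chat_history scoped to one user/conversation.
        payload["overrideConfig"] = {"sessionId": session_id}
    return payload

body = json.dumps(build_payload("Hi again!", session_id="user-42"))
print(body)
```

Without a `session_id`, Flowise generates one per conversation, as noted above.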

Use Your Chatflow via API

Once your chatflow is working, you can integrate it into your applications using the API.
Step 1: Get the API endpoint

Click the “API” button in the chatflow toolbar to see the endpoint:
http://localhost:3000/api/v1/prediction/YOUR-CHATFLOW-ID
Step 2: Make an API call

Use the endpoint in your application:
curl -X POST http://localhost:3000/api/v1/prediction/YOUR-CHATFLOW-ID \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Hello! What can you help me with?"
  }'
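The same call can be made from application code. Below is a minimal Python sketch using only the standard library; the base URL and chatflow ID are placeholders to substitute with your own, and the actual send is commented out because it requires a running Flowise server:

```python
import json
import urllib.request

# Placeholders -- substitute your own server URL and chatflow ID.
BASE_URL = "http://localhost:3000"
CHATFLOW_ID = "YOUR-CHATFLOW-ID"

url = f"{BASE_URL}/api/v1/prediction/{CHATFLOW_ID}"
body = json.dumps({"question": "Hello! What can you help me with?"}).encode()

# Build the POST request; this does not hit the network yet.
req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With a running Flowise server, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```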

Next Steps

Congratulations! You’ve built your first AI chatflow with Flowise. Here’s what to explore next:

Add RAG capabilities

Upload documents and create a Q&A system that answers from your data.

Build an AI Agent

Create autonomous agents that can use tools and make decisions.

Explore Integrations

Connect to different LLM providers, vector databases, and tools.

Production Deployment

Learn how to deploy Flowise in production with Docker and databases.

Troubleshooting

Port already in use

If port 3000 is already in use, you can start Flowise on a different port:
PORT=8080 npx flowise start

API key errors

Make sure your API key is valid and has the necessary permissions:
  • For OpenAI: check your API key at platform.openai.com/api-keys
  • Ensure you have billing enabled and credits available
  • Verify the API key has access to the model you’re using

Node.js version too old

Flowise requires Node.js >= 18.15.0. Check your version:
node --version
If your version is too old, download the latest LTS version from nodejs.org.

Installation fails

If npm install -g flowise fails:
  1. Try with sudo (macOS/Linux): sudo npm install -g flowise
  2. Or install locally without the -g flag and run with npx flowise start
  3. Clear the npm cache: npm cache clean --force

Get Help

Join Discord

Get help from the community and Flowise team.

GitHub Discussions

Ask questions and share your workflows.
