Flowise provides extensive integration support for building powerful AI workflows. Connect with leading LLM providers, vector stores, embedding services, and data sources to create custom AI applications.

Integration Categories

LLM Providers

Connect with 25+ language model providers including OpenAI, Anthropic, and open-source models

Vector Stores

Store and retrieve embeddings with 20+ vector database integrations

Embeddings

Generate embeddings using providers like OpenAI, Cohere, and HuggingFace

Document Loaders

Load data from 40+ sources including PDFs, websites, databases, and APIs

LLM Providers

Flowise supports major AI providers:
  • OpenAI - GPT-4, GPT-3.5, and o-series models
  • Anthropic - Claude 3 and Claude 4 models with extended thinking
  • Google - Gemini and Vertex AI models
  • Azure OpenAI - Enterprise OpenAI deployments
  • AWS Bedrock - Amazon’s managed AI service
  • Ollama - Local open-source models
  • Groq - Ultra-fast inference

Vector Databases

Store embeddings in production-ready vector stores:
  • Pinecone - Fully managed vector database
  • Chroma - Open-source embedding database
  • Qdrant - High-performance vector search
  • Weaviate - AI-native vector database
  • Supabase - PostgreSQL with pgvector
  • Redis - In-memory vector search

Document Loaders

Load data from various sources:
  • PDF Files - Extract text from documents
  • Web Scraping - Cheerio, Puppeteer, Playwright
  • Cloud Storage - S3, Google Drive
  • Workspace Tools - Notion, Airtable, Confluence
  • APIs - REST endpoints, search APIs

Quick Start

Setting Up Credentials

Most integrations require API credentials:
  1. Navigate to the Credentials page in Flowise
  2. Click Add Credential
  3. Select your provider (e.g., OpenAI API, Pinecone API)
  4. Enter your API key and other required fields
  5. Save the credential

Using Integrations in Workflows

  1. Drag and drop nodes from the left sidebar
  2. Configure each node with required parameters
  3. Connect credentials by selecting from your saved credentials
  4. Link nodes together to create your workflow
  5. Test your integration
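
Once a workflow is saved, you can test it programmatically through Flowise's prediction REST endpoint. A minimal sketch (the base URL and chatflow ID below are placeholders for your own deployment, and the `overrideConfig` keys depend on which nodes your flow contains):

```javascript
// Build a request for Flowise's prediction endpoint.
// Base URL and chatflow ID are placeholders for your deployment.
function buildPredictionRequest(baseUrl, chatflowId, question, overrideConfig = {}) {
  return {
    url: `${baseUrl}/api/v1/prediction/${chatflowId}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question, overrideConfig }),
    },
  };
}

// Example: test the flow, overriding the saved temperature for this call.
const req = buildPredictionRequest(
  "http://localhost:3000",       // default local Flowise port
  "your-chatflow-id",            // placeholder: copy from the chatflow's API dialog
  "What documents mention refunds?",
  { temperature: 0.2 }
);

// To actually send it: fetch(req.url, req.options).then(r => r.json())
```

The chatflow ID and a ready-made snippet are shown in the chatflow's API dialog in the Flowise UI.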

Configuration Examples

OpenAI Chat Model

// Node configuration
{
  modelName: "gpt-4o-mini",
  temperature: 0.9,
  maxTokens: 2000,
  streaming: true
}

Pinecone Vector Store

// Pinecone configuration
{
  pineconeIndex: "my-index",
  pineconeNamespace: "documents",
  topK: 4
}
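
The `topK` parameter controls how many of the closest vectors the store returns for a query. Conceptually (an illustrative sketch of top-k cosine-similarity retrieval, not Pinecone's actual implementation) the retrieval step looks like:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k documents whose embeddings are most similar to the query.
function topK(queryVec, docs, k) {
  return docs
    .map(d => ({ ...d, score: cosine(queryVec, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

A production vector store uses an approximate index (e.g. HNSW) rather than a full scan, but the result has the same shape: the `topK` best matches with similarity scores.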

PDF Document Loader

// PDF loader configuration
{
  usage: "perPage",
  legacyBuild: false,
  metadata: {
    source: "user-manual"
  }
}

Enterprise Features

Private Deployments

  • Self-hosted models with Ollama, LocalAI
  • VPC deployment with AWS Bedrock
  • Azure integration for enterprise customers

Security & Compliance

  • Credential encryption for API keys
  • Role-based access control
  • Audit logging for compliance

Need Help?

View Examples

Browse example workflows and use cases

Join Community

Get help from the Flowise community

Next Steps

LLM Providers

Explore all language model integrations

Vector Stores

Learn about vector database options

Embeddings

Configure embedding models

Document Loaders

Load data from multiple sources
