Flowise supports a wide range of vector databases for storing and retrieving embeddings. Choose from cloud-hosted services, self-hosted solutions, or in-memory stores.

Cloud-Hosted Vector Databases

Pinecone

Fully managed vector database with serverless architecture

Supabase

PostgreSQL with pgvector extension

MongoDB Atlas

Vector search in MongoDB Atlas

Upstash

Serverless vector database (Upstash Vector)

Zep Cloud

Managed vector store with memory features

Vectara

AI-powered search platform with vector storage

AWS Kendra

Enterprise search with vector support

Self-Hosted Vector Databases

Chroma

Open-source embedding database

Qdrant

High-performance vector search engine

Weaviate

AI-native vector database

Milvus

Scalable vector database for production

OpenSearch

Vector search with OpenSearch k-NN

Elasticsearch

Vector search in Elasticsearch

Redis

In-memory vector search with RediSearch

Postgres (pgvector)

PostgreSQL with vector extension

Couchbase

NoSQL with vector search capabilities

SingleStore

Distributed SQL with vector support

Zep

Self-hosted memory store for agents

Meilisearch

Fast search engine with vector support

Local & In-Memory Stores

FAISS

Meta's (formerly Facebook AI) similarity search library

In-Memory

Simple in-memory vector store

Document Store

File-based document storage

Simple Store

Basic vector storage

Configuration Examples

Pinecone

// Pinecone configuration
{
  pineconeIndex: "my-index",
  pineconeNamespace: "documents",
  topK: 4,
  pineconeMetadataFilter: {
    "category": "technical-docs"
  }
}
Setup Steps:
  1. Create account at pinecone.io
  2. Create an index:
    # Index settings
    - Dimension: 1536 (for OpenAI embeddings)
    - Metric: cosine
    - Cloud: AWS / GCP / Azure
    
  3. Get API key from console
  4. Add credential in Flowise:
    • Credential Type: Pinecone API
    • API Key: your-api-key
Code Example:
// From Pinecone.ts
const client = new Pinecone({ apiKey: pineconeApiKey })
const pineconeIndex = client.Index(indexName)

const obj: PineconeStoreParams = {
  pineconeIndex,
  textKey: 'text',
  namespace: pineconeNamespace
}

const vectorStore = await PineconeStore.fromExistingIndex(
  embeddings,
  obj
)

Chroma

// Chroma configuration
{
  collectionName: "my-collection",
  chromaURL: "http://localhost:8000",
  topK: 4,
  chromaMetadataFilter: {
    "source": "documentation"
  }
}
Setup Steps:
  1. Install Chroma:
    pip install chromadb
    
  2. Start Chroma server:
    chroma run --host localhost --port 8000
    
  3. Or use Docker:
    docker pull chromadb/chroma
    docker run -p 8000:8000 chromadb/chroma
    
Configuration:
// Chroma with authentication
const obj = {
  collectionName: "documents",
  url: "http://localhost:8000",
  chromaCloudAPIKey: "optional-api-key",
  clientParams: {
    tenant: "default_tenant",
    database: "default_database"
  }
}

Qdrant

// Qdrant configuration
{
  qdrantCollection: "documents",
  qdrantServerUrl: "http://localhost:6333",
  topK: 4,
  qdrantFilter: {
    "must": [{
      "key": "category",
      "match": { "value": "technical" }
    }]
  }
}
Setup with Docker:
docker run -p 6333:6333 qdrant/qdrant
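
Once the container is running, a collection can be created over Qdrant's REST API. The collection name below is a placeholder, and the vector size must match your embedding model's dimension (1536 here, assuming OpenAI embeddings):

```shell
# Create a "documents" collection sized for 1536-dimensional vectors
curl -X PUT http://localhost:6333/collections/documents \
  -H "Content-Type: application/json" \
  -d '{"vectors": {"size": 1536, "distance": "Cosine"}}'
```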

Supabase (PostgreSQL + pgvector)

// Supabase configuration
{
  supabaseUrl: "https://your-project.supabase.co",
  tableName: "documents",
  queryName: "match_documents",
  topK: 4,
  filter: {
    "category": "docs"
  }
}
Setup Steps:
  1. Create Supabase project at supabase.com
  2. Enable required extensions:
    CREATE EXTENSION IF NOT EXISTS vector;
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; -- needed for uuid_generate_v4()
    
  3. Create table:
    CREATE TABLE documents (
      id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
      content TEXT,
      metadata JSONB,
      embedding VECTOR(1536)
    );
    
  4. Create match function:
    CREATE FUNCTION match_documents(
      query_embedding VECTOR(1536),
      match_threshold FLOAT,
      match_count INT
    )
    RETURNS TABLE (
      id UUID,
      content TEXT,
      metadata JSONB,
      similarity FLOAT
    )
    AS $$
    BEGIN
      RETURN QUERY
      SELECT
        documents.id,
        documents.content,
        documents.metadata,
        1 - (documents.embedding <=> query_embedding) AS similarity
      FROM documents
      WHERE 1 - (documents.embedding <=> query_embedding) > match_threshold
      ORDER BY similarity DESC
      LIMIT match_count;
    END;
    $$ LANGUAGE plpgsql;
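
The `<=>` operator above is pgvector's cosine-distance operator, so `1 - (embedding <=> query_embedding)` is plain cosine similarity. The same computation sketched in TypeScript, purely to illustrate what the function returns (pgvector does this natively):

```typescript
// Cosine similarity: the quantity match_documents returns per row.
// pgvector's `a <=> b` is cosine distance, i.e. 1 - cosineSimilarity(a, b).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

console.log(cosineSimilarity([1, 0], [1, 0]))  // identical vectors → 1
console.log(cosineSimilarity([1, 0], [0, 1]))  // orthogonal vectors → 0
```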
    

Weaviate

// Weaviate configuration
{
  weaviateScheme: "https",
  weaviateHost: "your-cluster.weaviate.network",
  weaviateIndex: "Documents",
  topK: 4,
  weaviateFilter: {
    "where": {
      "path": ["category"],
      "operator": "Equal",
      "valueText": "technical"
    }
  }
}

Redis

// Redis configuration
{
  redisUrl: "redis://localhost:6379",
  indexName: "documents",
  topK: 4,
  redisFilter: "@category:{technical}"
}
Setup RediSearch:
# Using Docker
docker run -d -p 6379:6379 redis/redis-stack-server

# Or install Redis Stack
brew install redis-stack

FAISS (Local)

// FAISS configuration
{
  basePath: "./faiss-store",
  topK: 4
}
FAISS stores vectors locally on disk. No server required.

Advanced Features

Metadata Filtering

Filter results by metadata fields:
// Pinecone filter
{
  pineconeMetadataFilter: {
    "category": { "$eq": "technical" },
    "date": { "$gte": "2024-01-01" }
  }
}

// Qdrant filter
{
  qdrantFilter: {
    "must": [
      { "key": "category", "match": { "value": "technical" } },
      { "key": "score", "range": { "gte": 0.8 } }
    ]
  }
}

// Supabase filter
{
  filter: {
    "category": "technical",
    "created_at": { "gte": "2024-01-01" }
  }
}
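
These filters are evaluated inside each database during the vector search; the sketch below only illustrates their semantics with a hypothetical `matchesFilter` helper covering plain equality and `$eq`/`$gte`-style operators:

```typescript
// Illustrative only: real vector stores apply filters server-side.
// Supports plain equality and { $eq, $gte }-style operator objects.
type Filter = Record<string, unknown>

function matchesFilter(metadata: Record<string, any>, filter: Filter): boolean {
  return Object.entries(filter).every(([key, cond]) => {
    const value = metadata[key]
    if (cond !== null && typeof cond === "object") {
      const ops = cond as Record<string, any>
      if ("$eq" in ops && value !== ops["$eq"]) return false
      if ("$gte" in ops && !(value >= ops["$gte"])) return false
      return true
    }
    return value === cond  // plain equality, e.g. { category: "docs" }
  })
}
```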

MMR (Maximum Marginal Relevance)

Improve result diversity:
// Enable MMR
{
  searchType: "mmr",
  fetchK: 20, // Fetch 20 results
  lambda: 0.5, // Balance relevance vs diversity
  topK: 4 // Return 4 diverse results
}
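
Under the hood, MMR fetches `fetchK` candidates, then greedily selects documents that score high against the query but low against what was already selected. A minimal self-contained sketch of the greedy loop (operating directly on embedding vectors; real stores work from the fetched candidate set):

```typescript
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0)
}
function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)))
}

// Greedy MMR: score = lambda * sim(query, doc) - (1 - lambda) * max sim(doc, selected)
function mmr(query: number[], candidates: number[][], topK: number, lambda = 0.5): number[] {
  const selected: number[] = []                    // indices into candidates
  const remaining = candidates.map((_, i) => i)
  while (selected.length < topK && remaining.length > 0) {
    let bestIdx = -1, bestScore = -Infinity
    for (const i of remaining) {
      const relevance = cosine(query, candidates[i])
      const redundancy = selected.length
        ? Math.max(...selected.map(j => cosine(candidates[i], candidates[j])))
        : 0
      const score = lambda * relevance - (1 - lambda) * redundancy
      if (score > bestScore) { bestScore = score; bestIdx = i }
    }
    selected.push(bestIdx)
    remaining.splice(remaining.indexOf(bestIdx), 1)
  }
  return selected
}

// Picks the most relevant doc first, then a diverse one: [0, 2]
console.log(mmr([1, 0.1], [[1, 0], [0.9, 0.2], [0, 1]], 2, 0.5))
```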

Record Manager

Prevent duplicate documents:
// Use Record Manager
{
  recordManager: recordManagerNode,
  // Cleanup modes:
  // - "incremental": Only add new docs
  // - "full": Remove old docs, add new ones
  cleanup: "incremental",
  sourceIdKey: "source"
}

File Upload Support

Allow per-chat document uploads:
// Enable file uploads
{
  fileUpload: true
  // Documents tagged with chatId
  // Automatically filtered per conversation
}

Namespaces & Collections

Organize vectors by namespace:
// Pinecone namespace
{
  pineconeNamespace: "user-docs"
}

// Chroma collection
{
  collectionName: "technical-docs"
}

// Qdrant collection
{
  qdrantCollection: "knowledge-base"
}

Performance Optimization

Index Configuration

Pinecone:
  • Use pod-based for predictable performance
  • Use serverless for variable workloads
  • Configure replicas for high availability
Qdrant:
# Optimize for speed
optimizers_config:
  default_segment_number: 5
  indexing_threshold: 20000
Chroma:
# HNSW parameters are set via collection metadata at creation time
collection = client.create_collection(
    name="documents",
    metadata={
        "hnsw:M": 16,
        "hnsw:construction_ef": 100,
        "hnsw:search_ef": 100
    }
)

Batch Operations

Upsert documents in batches:
// Automatic batching
{
  batchSize: 100, // Upsert 100 docs at a time
  concurrency: 5 // Process 5 batches in parallel
}
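
Batching amounts to splitting the document list into fixed-size chunks and upserting one chunk at a time. A minimal sketch of the chunking step (the upsert call itself is store-specific and omitted):

```typescript
// Split an array into batches of at most `batchSize` items.
function chunk<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize))
  }
  return batches
}

// e.g. 250 documents with batchSize 100 → batches of 100, 100, 50
const sizes = chunk(Array.from({ length: 250 }, (_, i) => i), 100).map(b => b.length)
console.log(sizes)  // [100, 100, 50]
```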

Vector Store Comparison

Feature     | Pinecone      | Chroma     | Qdrant     | Supabase
Hosting     | Cloud         | Self/Cloud | Self/Cloud | Cloud
Open source | No            | Yes        | Yes        | Yes
Managed     | Yes           | Optional   | Optional   | Yes
Serverless  | Yes           | No         | No         | No
Scale       | Millions+     | Millions   | Millions+  | Millions
Filtering   | Yes           | Yes        | Yes        | Yes
Price       | Pay-as-you-go | Free       | Free/Cloud | Free tier

Choosing a Vector Store

For Production

  • Pinecone - Easiest managed solution
  • Qdrant - Best self-hosted performance
  • Supabase - If using PostgreSQL already

For Development

  • Chroma - Simple, no configuration
  • FAISS - Local, no server needed
  • In-Memory - Quick prototyping

For Enterprise

  • Qdrant - On-premise deployment
  • Weaviate - Kubernetes-native
  • AWS Kendra - AWS integration

Troubleshooting

Connection Issues

# Test Chroma connection (newer Chroma versions use /api/v2/heartbeat)
curl http://localhost:8000/api/v1/heartbeat

# Test Qdrant connection
curl http://localhost:6333/collections

# Test Pinecone (list indexes via the current API)
curl https://api.pinecone.io/indexes \
  -H "Api-Key: YOUR_API_KEY"

Dimension Mismatch

Ensure embedding dimensions match:
  • OpenAI text-embedding-ada-002: 1536
  • OpenAI text-embedding-3-small: 1536
  • OpenAI text-embedding-3-large: 3072
  • Cohere embed-english-v3.0: 1024

Upsert Failures

// Check document format
{
  pageContent: "text here",
  metadata: {
    source: "doc.pdf",
    page: 1
  }
}
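
A small validator for this shape catches the most common upsert failures (missing or empty `pageContent`, non-object `metadata`) before the document reaches the store. Purely illustrative:

```typescript
// Validate the { pageContent, metadata } shape expected by
// LangChain-style vector stores before upserting.
function isValidDocument(doc: unknown): boolean {
  if (typeof doc !== "object" || doc === null) return false
  const d = doc as Record<string, unknown>
  if (typeof d.pageContent !== "string" || d.pageContent.length === 0) return false
  // metadata is optional, but must be a plain object when present
  if ("metadata" in d && (typeof d.metadata !== "object" || d.metadata === null)) return false
  return true
}
```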

Next Steps

Embeddings

Configure embedding models

Document Loaders

Load documents into vector stores
