Vector Stores
Vector stores (also called vector databases) enable semantic search by storing text as high-dimensional vectors. This allows you to find information based on meaning rather than exact keyword matches.
What is a Vector Store?
A vector store:
- Converts text to vectors: Uses embedding models to create numerical representations
- Stores vectors efficiently: Optimized for similarity search
- Performs semantic search: Finds similar content based on meaning
- Returns relevant documents: Retrieves the most similar items
Vector stores are essential for RAG (Retrieval Augmented Generation) applications, where you need to provide relevant context to language models.
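Under the hood, a vector store pairs each piece of text with an embedding vector and ranks stored items by similarity to the query. A minimal sketch of that idea follows; the `toyEmbed` term-counting function is a stand-in for a real embedding model, and cosine similarity is one common similarity measure:

```typescript
// Minimal sketch of what a vector store does: embed text,
// store the vectors, and rank stored items by cosine similarity.

type StoredDoc = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "embedding": counts of a few hand-picked terms (illustration only).
const vocabulary = ["cat", "dog", "car", "engine"];
function toyEmbed(text: string): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return vocabulary.map((term) => words.filter((w) => w === term).length);
}

const store: StoredDoc[] = [];

function insert(text: string): void {
  store.push({ text, vector: toyEmbed(text) });
}

function retrieve(query: string, topK: number): string[] {
  const queryVector = toyEmbed(query);
  return [...store]
    .sort((x, y) =>
      cosineSimilarity(y.vector, queryVector) - cosineSimilarity(x.vector, queryVector))
    .slice(0, topK)
    .map((d) => d.text);
}

insert("The cat sat with another cat");
insert("The car engine needs repair");
retrieve("my cat is hungry", 1); // returns the cat document, not the car one
```

A real store replaces `toyEmbed` with a model such as those listed under Embedding Models below, and replaces the array scan with an approximate-nearest-neighbor index.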
Available Vector Stores
n8n supports the following vector stores:
- Pinecone: Fully managed, production-ready vector database
- Qdrant: Open-source vector search engine
- Supabase: Postgres-based vector storage
- In-Memory: Local storage for development
- Weaviate: AI-native vector database
- Chroma: Embedding database for AI apps
- Redis: Redis with vector search
- PGVector: PostgreSQL extension
- MongoDB Atlas: MongoDB with vector search
- Milvus: Cloud-native vector database
- Zep: Memory-optimized vector store
- Azure AI Search: Microsoft Azure cognitive search
Common Operations
All vector store nodes support these operations:
- Insert: Add documents to the vector store
- Load: Load documents from an existing index
- Retrieve: Search for similar documents
- Update: Update existing documents
- Retrieve as Tool: Expose the vector store as an agent tool
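These five operation modes can be modelled roughly as a TypeScript interface. This is an illustrative sketch, not the actual n8n node API, and the keyword-overlap scoring stands in for real vector similarity:

```typescript
// Sketch (not the n8n node API) of the five operation modes,
// with a naive in-memory implementation for illustration.

type Doc = { id: string; text: string; metadata: Record<string, string> };

interface VectorStoreOperations {
  insert(docs: Doc[]): void;                         // add documents
  load(): Doc[];                                     // load from an existing index
  retrieve(query: string, topK: number): Doc[];      // similarity search
  update(doc: Doc): void;                            // overwrite an existing document
  retrieveAsTool(name: string): (query: string) => string; // wrap retrieval as an agent tool
}

class InMemoryOps implements VectorStoreOperations {
  private docs = new Map<string, Doc>();
  insert(docs: Doc[]) { for (const d of docs) this.docs.set(d.id, d); }
  load() { return [...this.docs.values()]; }
  retrieve(query: string, topK: number) {
    // Keyword overlap as a stand-in for vector similarity.
    const terms = query.toLowerCase().split(/\W+/);
    const score = (d: Doc) =>
      terms.filter((t) => t && d.text.toLowerCase().includes(t)).length;
    return this.load().sort((a, b) => score(b) - score(a)).slice(0, topK);
  }
  update(doc: Doc) { this.docs.set(doc.id, doc); }
  retrieveAsTool(name: string) {
    return (query: string) =>
      `${name}: ` + this.retrieve(query, 3).map((d) => d.text).join("; ");
  }
}
```

Update replaces the document with the same id rather than adding a duplicate, which mirrors how the Update operation avoids stale chunks in the index.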
Pinecone Vector Store
Node: @n8n/n8n-nodes-langchain.vectorStorePinecone
Source Reference: /home/daytona/workspace/source/packages/@n8n/nodes-langchain/nodes/vector_store/VectorStorePinecone/VectorStorePinecone.node.ts:54
Pinecone is a fully managed vector database optimized for production use.
Setup
Create Pinecone Account
Sign up at pinecone.io
Create an Index
Create a new index with the appropriate dimensions for your embedding model:
- OpenAI text-embedding-3-small: 1536 dimensions
- OpenAI text-embedding-ada-002: 1536 dimensions
- Cohere embed-english-v3.0: 1024 dimensions
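A dimension mismatch between the index and the embedding model only surfaces at insert time, so it can help to validate the pairing up front. A small sketch using the model/dimension pairs listed above (the function and table names are hypothetical):

```typescript
// Guard against dimension mismatches before inserting.
// The model/dimension pairs match the list above.

const embeddingDimensions: Record<string, number> = {
  "text-embedding-3-small": 1536,
  "text-embedding-ada-002": 1536,
  "embed-english-v3.0": 1024,
};

function assertDimensionsMatch(model: string, indexDimension: number): void {
  const expected = embeddingDimensions[model];
  if (expected === undefined) {
    throw new Error(`Unknown embedding model: ${model}`);
  }
  if (expected !== indexDimension) {
    throw new Error(
      `Index expects ${indexDimension} dimensions but ${model} produces ${expected}`,
    );
  }
}
```

Calling `assertDimensionsMatch("embed-english-v3.0", 1536)` would throw, catching the misconfiguration before any vectors are rejected by the store.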
Configuration
Namespaces
From the source code:
Example: Insert Documents
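As a hedged sketch of what an insert operation conceptually does: split the document into chunks, embed each chunk, and upsert the resulting vectors into a namespace. `embedText`, `upsert`, and the in-memory index below are stand-ins, not the real Pinecone client API:

```typescript
// Conceptual insert flow: chunk -> embed -> upsert into a namespace.
// All functions here are illustrative stand-ins.

type PineconeVector = { id: string; values: number[]; metadata: { text: string } };

function embedText(text: string): number[] {
  // Stand-in for a real embedding model call.
  return [text.length % 7, text.split(" ").length];
}

function chunkText(text: string, size: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

// namespace -> vectors; stand-in for the remote index.
const index: Record<string, PineconeVector[]> = {};

function upsert(namespace: string, vectors: PineconeVector[]): void {
  (index[namespace] ??= []).push(...vectors);
}

function insertDocument(namespace: string, docId: string, text: string): number {
  const vectors = chunkText(text, 40).map((chunk, i) => ({
    id: `${docId}-${i}`,             // deterministic ids allow later updates
    values: embedText(chunk),
    metadata: { text: chunk },       // keep the raw text for retrieval
  }));
  upsert(namespace, vectors);
  return vectors.length;
}
```

Storing the chunk text in metadata is what lets retrieval return readable passages rather than bare vectors.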
Qdrant Vector Store
Node: @n8n/n8n-nodes-langchain.vectorStoreQdrant
Source Reference: /home/daytona/workspace/source/packages/@n8n/nodes-langchain/nodes/vector_store/VectorStoreQdrant/VectorStoreQdrant.node.ts:97
Qdrant is an open-source vector search engine with excellent performance.
Configuration
Payload Keys
From the source code:
Search Filters
Qdrant supports powerful filtering on payload fields.
Supabase Vector Store
Node: @n8n/n8n-nodes-langchain.vectorStoreSupabase
PostgreSQL-based vector storage using the pgvector extension.
Setup
Configuration
In-Memory Vector Store
Node: @n8n/n8n-nodes-langchain.vectorStoreInMemory
Local storage for development and testing.
The In-Memory vector store is not persistent: its contents are lost when the workflow execution ends. Use it only for development and testing.
Insert Mode
Load Mode
Other Vector Stores
Weaviate
Node: @n8n/n8n-nodes-langchain.vectorStoreWeaviate
Features:
- AI-native architecture
- Automatic schema inference
- Multi-modal support
Chroma
Node: @n8n/n8n-nodes-langchain.vectorStoreChromaDB
Features:
- Embedded or client-server
- Easy local development
- Auto-batching
Redis
Node: @n8n/n8n-nodes-langchain.vectorStoreRedis
Features:
- Fast in-memory search
- Hybrid queries (vector + filters)
- Real-time indexing
PGVector
Node: @n8n/n8n-nodes-langchain.vectorStorePGVector
Features:
- Native PostgreSQL extension
- ACID compliance
- Standard SQL queries
MongoDB Atlas
Node: @n8n/n8n-nodes-langchain.vectorStoreMongoDBAtlas
Features:
- Document-native vector search
- Flexible schema
- Atlas search integration
Milvus
Node: @n8n/n8n-nodes-langchain.vectorStoreMilvus
Features:
- Cloud-native architecture
- Billion-scale support
- Multiple index types
Zep
Node: @n8n/n8n-nodes-langchain.vectorStoreZep
Features:
- Memory-optimized
- Automatic summarization
- Fact extraction
Azure AI Search
Node: @n8n/n8n-nodes-langchain.vectorStoreAzureAISearch
Features:
- Integrated with Azure
- Cognitive search
- Hybrid search
Building a RAG Pipeline
Metadata Filtering
Most vector stores support filtering by metadata.
Best Practices
Choosing a Vector Store
Chunking Strategy
- Chunk Size: 500-1000 characters typically works well
- Overlap: 10-20% overlap preserves context
- Splitter: Use Recursive Character Text Splitter for best results
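The size/overlap strategy above can be sketched as a sliding window that steps forward by (chunk size − overlap) characters, so adjacent chunks share context. This is a simplified character-based splitter for illustration, not the Recursive Character Text Splitter itself:

```typescript
// Fixed-size chunking with overlap: each window starts
// (chunkSize - overlap) characters after the previous one.

function chunkWithOverlap(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const step = chunkSize - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

// The 10-20% overlap guideline: e.g. 1000-char chunks with 150-char overlap.
chunkWithOverlap("abcdefghij", 4, 2); // returns ["abcd", "cdef", "efgh", "ghij"]
```

The recursive splitter improves on this by preferring to break at paragraph, sentence, and word boundaries before falling back to raw character positions.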
Embedding Models
- Match dimensions: Vector store dimensions must match embedding model
- Consistency: Always use the same embedding model for insert and retrieval
- Cost vs Quality: Balance between performance and API costs
Performance
- Batch inserts: Insert documents in batches for better performance
- Index configuration: Configure appropriate index types (IVF, HNSW)
- Cache embeddings: Avoid re-embedding the same content
- Monitor costs: Track API usage for embedding generation
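Two of these tips, batching inserts and caching embeddings, can be sketched together. The `embed` function below is a stand-in for a billable embedding API call, cached by exact content:

```typescript
// Batch inserts plus a content-keyed embedding cache,
// so identical text is never embedded (or paid for) twice.

const embeddingCache = new Map<string, number[]>();
let embedCalls = 0; // proxy for billable API calls

function embed(text: string): number[] {
  const cached = embeddingCache.get(text);
  if (cached) return cached;
  embedCalls++; // stand-in for a real embedding API request
  const vector = [text.length, text.split(" ").length];
  embeddingCache.set(text, vector);
  return vector;
}

function* batches<T>(items: T[], batchSize: number): Generator<T[]> {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize);
  }
}

// Usage: four texts with one duplicate, processed in batches of 2 —
// only the three unique texts trigger an embedding call.
const texts = ["alpha", "beta", "alpha", "gamma"];
for (const batch of batches(texts, 2)) batch.forEach(embed);
```

A production cache would key on a content hash and persist across runs; the in-memory `Map` here only illustrates the idea.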
Metadata
- Add useful metadata: Include source, date, category, etc.
- Keep it structured: Use consistent schema across documents
- Enable filtering: Design metadata for efficient filtering
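A consistent metadata schema and a filter predicate might look like the sketch below; the field names are illustrative, not a required schema:

```typescript
// A consistent metadata schema across documents, plus the kind of
// equality filter vector stores apply alongside similarity search.

type DocMetadata = {
  source: string;
  date: string;     // ISO 8601 keeps dates sortable as strings
  category: string;
};

type IndexedDoc = { text: string; metadata: DocMetadata };

function matchesFilter(doc: IndexedDoc, filter: Partial<DocMetadata>): boolean {
  return Object.entries(filter).every(
    ([key, value]) => doc.metadata[key as keyof DocMetadata] === value,
  );
}

const docs: IndexedDoc[] = [
  { text: "Release notes", metadata: { source: "docs", date: "2024-01-10", category: "changelog" } },
  { text: "API guide",     metadata: { source: "docs", date: "2024-02-01", category: "reference" } },
];

const reference = docs.filter((d) => matchesFilter(d, { category: "reference" }));
```

Because every document carries the same fields, any field can later serve as a filter without re-indexing.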
Common Patterns
Multi-Tenant RAG
Use namespaces or metadata filtering to isolate each tenant's documents.
Hybrid Search
Combine vector search with keyword search for better recall.
Incremental Updates
Update existing documents in place instead of re-inserting duplicates.
Troubleshooting
Dimension Mismatch
Error: “Vector dimension mismatch”
Solution: Ensure your vector store index is configured with the same number of dimensions your embedding model produces.
Poor Search Results
Causes:
- Chunk size too large or too small
- Wrong embedding model
- Insufficient data
- Missing context in chunks
Solutions:
- Adjust chunk size and overlap
- Try different embedding models
- Add more documents
- Include surrounding context
Slow Performance
Solutions:
- Enable vector indexing (IVF, HNSW)
- Reduce topK parameter
- Use faster embedding models
- Add metadata filters to narrow search
- Consider caching strategies
Next Steps
- Embeddings: Learn about embedding models
- Retrievers: Configure retrieval strategies
- Q&A Chains: Build RAG applications
- Agent Tools: Use vector stores as agent tools