What is Memory?
Memory in AI workflows stores:
- Conversation History: Previous messages and responses
- Context: Important information from past interactions
- State: Variables and data across sessions
- User Preferences: Personalization data
Memory Types
Flowise offers different memory types for chatflows and agentflows:

Chatflow Memory
Buffer Memory
Stores all conversation history
- Simple and straightforward
- Grows unbounded
- Best for short conversations
Buffer Window Memory
Keeps last N messages
- Fixed size window
- Prevents token overflow
- Good for long conversations
Conversation Summary Memory
Summarizes old messages
- Reduces token usage
- Maintains key information
- Best for very long conversations
Summary Buffer Memory
Combines window + summary
- Recent messages in full
- Older messages summarized
- Balanced approach
Agentflow Memory
Agentflows use specialized Agent Memory nodes:
- SQLite Agent Memory: Local file-based storage
- PostgreSQL Agent Memory: Scalable database storage
- MySQL Agent Memory: MySQL database integration
Adding Memory to Chatflows
Choose Memory Type
Select the appropriate memory node from the Memory category based on your use case.
Connect to Chain
Connect the memory node to your chain node (Conversational Retrieval Chain, LLM Chain, etc.).
Configure Memory
Double-click the memory node to configure:
- Memory key (default: “chat_history”)
- Session ID handling
- Storage backend (for persistent memory)
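When a chatflow is called over HTTP, the session ID is typically supplied through the request's `overrideConfig`. A minimal sketch in Python of building such a request (the base URL and chatflow ID are placeholders; verify the exact endpoint against your instance's API documentation):

```python
import json

def build_prediction_request(base_url: str, chatflow_id: str,
                             question: str, session_id: str):
    """Build the URL and JSON body for a chatflow prediction call,
    pinning the conversation to one session via overrideConfig."""
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    body = {
        "question": question,
        # overrideConfig lets the caller set the session at request time
        "overrideConfig": {"sessionId": session_id},
    }
    return url, json.dumps(body)

url, body = build_prediction_request(
    "http://localhost:3000",   # placeholder Flowise instance
    "my-chatflow-id",          # placeholder chatflow ID
    "What did I ask earlier?",
    "user-42-conv-1",
)
```

Send the body with any HTTP client; as long as the same session ID is passed on every request, the memory node reloads the same history.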
Memory Configuration
Buffer Memory
Stores the complete conversation history.

Best for:
- Short conversations (under 10 exchanges)
- Detailed context requirements
- Testing and development

Considerations:
- No limit on history size
- Can exceed token limits
- Memory usage grows continuously
Buffer Window Memory
Maintains a sliding window of recent messages.

Best for:
- Medium to long conversations
- Token-constrained models
- Most production use cases

Configuration:
- k: Number of message pairs to keep (default: 5)
- Higher k = more context but more tokens
- Lower k = less context but fewer tokens
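The window behavior can be sketched in a few lines; the class below is illustrative, not Flowise's actual implementation, but it shows why token usage stays bounded regardless of conversation length:

```python
from collections import deque

class BufferWindowMemory:
    """Illustrative sliding-window memory: keeps only the last k
    human/AI message pairs, so older turns fall off automatically."""
    def __init__(self, k: int = 5):
        self.pairs = deque(maxlen=k)  # deque evicts the oldest pair itself

    def add_exchange(self, human: str, ai: str):
        self.pairs.append((human, ai))

    def load(self) -> str:
        # Flatten the pairs into the chat_history string a chain consumes
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.pairs)

memory = BufferWindowMemory(k=2)
for i in range(5):
    memory.add_exchange(f"question {i}", f"answer {i}")
# Only the 2 most recent exchanges remain in memory.load()
```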
Conversation Summary Memory
Summarizes older messages to save tokens.

Best for:
- Very long conversations
- Token optimization
- Cost reduction

Trade-offs:
- Additional LLM calls for summarization
- Potential information loss
- Slight latency increase
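The compaction step works roughly as follows. This sketch stubs out the summarizer (in practice that is the extra LLM call mentioned above):

```python
def compact_history(messages, max_messages=4, summarize=None):
    """Once history exceeds max_messages, collapse the oldest
    messages into a single summary entry. `summarize` stands in
    for an LLM call in a real deployment."""
    if summarize is None:
        summarize = lambda msgs: f"[summary of {len(msgs)} messages]"
    if len(messages) <= max_messages:
        return messages
    old, recent = messages[:-max_messages], messages[-max_messages:]
    # One summary entry replaces the entire older portion
    return [summarize(old)] + recent

history = [f"msg {i}" for i in range(10)]
compacted = compact_history(history)
```

This is also the essence of Summary Buffer Memory: recent messages stay verbatim while everything older is compressed.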
Conversation Summary Buffer Memory
Combines the window and summary approaches.

Best for:
- Balanced token usage
- Production applications
- Long-term conversations
Persistent Memory Storage
Store memory across sessions using external storage.

Redis-Backed Memory
Configure Redis Connection
- Redis URL or connection details
- Session ID prefix
- TTL (time to live) for messages
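The key layout and TTL behavior can be sketched without a live Redis instance. The prefix and TTL below are example values, and the actual Redis calls are shown only as comments so the sketch stays self-contained:

```python
import json, time

SESSION_PREFIX = "flowise:memory:"   # example prefix, configurable
MESSAGE_TTL = 60 * 60 * 24           # example: expire sessions after 24 hours

def redis_key(session_id: str) -> str:
    """Namespace keys by prefix + session so users never collide."""
    return f"{SESSION_PREFIX}{session_id}"

def serialize_message(role: str, content: str) -> str:
    return json.dumps({"role": role, "content": content,
                       "ts": int(time.time())})

# With a real client (e.g. redis-py), a write would look like:
#   r.rpush(redis_key(sid), serialize_message("human", "hello"))
#   r.expire(redis_key(sid), MESSAGE_TTL)   # refresh TTL on each write
key = redis_key("user-42")
payload = serialize_message("human", "hello")
```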
MongoDB Memory
Store conversations in MongoDB.

Zep Memory
Use Zep for advanced memory management:
- Automatic Summarization: Intelligent context compression
- Fact Extraction: Extract and store key facts
- Memory Search: Semantic search across history
- Multi-user: Built-in session management
Zep Memory Cloud offers managed hosting, while Zep Memory supports self-hosting.
DynamoDB Memory
AWS DynamoDB for serverless storage.

Adding Memory to Agentflows
Add Agent Memory Node
From the Memory category, add one of:
- SQLite Agent Memory (development)
- PostgreSQL Agent Memory (production)
- MySQL Agent Memory (production)
Configure Database
For PostgreSQL/MySQL:
- Database connection string
- Table name (auto-created if needed)
- Session configuration

For SQLite:
- File path for the database
- File is auto-created if it does not exist
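As an illustration, connection settings for each backend might look like the following (hosts, credentials, database names, and paths are all placeholders):

```
# PostgreSQL Agent Memory
postgresql://flowise_user:secret@db.example.com:5432/flowise_memory

# MySQL Agent Memory
mysql://flowise_user:secret@db.example.com:3306/flowise_memory

# SQLite Agent Memory (file path; created if missing)
/var/data/flowise/agent-memory.sqlite
```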
Connect in Flow
Agent memory nodes connect differently than chatflow memory: the memory is configured within the agent node settings.
Session Management
Session IDs
Session IDs isolate conversations between users.

Session ID Best Practices
Use Unique IDs
Generate unique session IDs per user and conversation
Persist IDs
Store session IDs in your application’s user session
Handle Expiry
Implement session expiration and cleanup
Secure IDs
Don’t expose session IDs to other users
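A simple way to satisfy all four practices is to derive session IDs from a tenant/user prefix plus a random UUID. The helper name and ID format below are illustrative:

```python
import uuid

def new_session_id(tenant: str, user_id: str) -> str:
    """Generate a collision-resistant session ID. The tenant and user
    prefix keeps conversations isolated and easy to audit; the UUID
    part makes the ID unguessable by other users."""
    return f"{tenant}:{user_id}:{uuid.uuid4()}"

sid_a = new_session_id("acme", "user-42")
sid_b = new_session_id("acme", "user-42")
# Even for the same user, each conversation gets a distinct ID
```

Store the generated ID in your application's server-side user session and pass it on every request; never let one user supply another user's ID.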
Multi-Tenant Isolation
For multi-tenant applications, isolate each tenant's memory by scoping session IDs to the tenant.

Memory Patterns
Short-Term Memory
For single conversations, a non-persistent buffer or window memory is sufficient.

Long-Term Memory
For persistent user context, back memory with external storage such as Redis, PostgreSQL, or Zep.

Hybrid Memory
Combine multiple memory types:
- Window memory: Recent conversation
- Vector store: Long-term facts and context
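The combination can be sketched as below. The class is illustrative; a plain dict stands in for the vector store so the sketch stays self-contained, whereas a real deployment would do a semantic lookup:

```python
from collections import deque

class HybridMemory:
    """Illustrative hybrid memory: a small window of recent turns
    plus a long-lived fact store (a vector store in production)."""
    def __init__(self, k: int = 3):
        self.recent = deque(maxlen=k)  # short-term sliding window
        self.facts = {}                # stand-in for long-term store

    def add_turn(self, text: str):
        self.recent.append(text)

    def remember_fact(self, key: str, value: str):
        self.facts[key] = value

    def build_context(self, relevant_keys):
        # Merge recent turns with whichever long-term facts apply
        facts = [self.facts[k] for k in relevant_keys if k in self.facts]
        return {"recent": list(self.recent), "facts": facts}

mem = HybridMemory(k=2)
mem.remember_fact("name", "User's name is Ada")
for turn in ["hi", "what's my name?", "thanks"]:
    mem.add_turn(turn)
ctx = mem.build_context(["name"])
```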
Memory Optimization
Token Management
Optimize token usage:
- Use Window Memory: Limit history size
- Enable Summarization: Compress old messages
- Set Token Limits: Configure maxTokenLimit appropriately
- Filter Messages: Remove system messages or metadata
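A token-limit trim typically walks the history newest-first and stops when the budget is spent. This sketch approximates one token per word; swap in a real tokenizer for accurate counts:

```python
def trim_to_token_limit(messages, max_tokens=100, count_tokens=None):
    """Drop the oldest messages until the history fits the budget.
    count_tokens is a stand-in for a real tokenizer; here we
    approximate one token per whitespace-separated word."""
    if count_tokens is None:
        count_tokens = lambda text: len(text.split())
    kept, total = [], 0
    for msg in reversed(messages):       # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                        # budget spent: drop the rest
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

history = ["one two three"] * 10         # 3 "tokens" each
trimmed = trim_to_token_limit(history, max_tokens=10)
```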
Performance Optimization
- Batch Operations: Group memory operations when possible
- Async Storage: Use async writes to external storage
- Caching: Cache frequently accessed memory
- Indexing: Add indexes to database memory tables
Cost Optimization
Development:
- Use in-memory or SQLite
- No external dependencies
- Free for testing
Advanced Memory Features
Memory Retrieval
Query memory programmatically through the chat message API.

Memory Deletion
Clear memory for privacy compliance.

Memory Export
Export conversations for analysis:
- JSON format with full message history
- Includes timestamps and metadata
- Useful for training and analytics
Debugging Memory Issues
Memory not persisting
- Verify session ID is consistent across requests
- Check memory node is properly connected
- For external storage, verify connection credentials
- Check database/storage logs for errors
Context not maintained
- Increase window size (k parameter)
- Check if session IDs are being passed
- Verify memory key matches chain expectations
- Review summarization settings
Token limit exceeded
- Reduce window size
- Enable summarization
- Use Summary Buffer Memory
- Set maxTokenLimit parameter
Slow memory operations
- Optimize database queries
- Add indexes to memory tables
- Use Redis for faster access
- Enable caching
Memory leaking between users
- Ensure unique session IDs per user
- Verify session ID is in override config
- Check for session ID collisions
- Review multi-tenant isolation
Memory Security
Security Best Practices
- Encryption: Encrypt memory at rest and in transit
- Access Control: Restrict memory access by session ID
- Retention Policies: Auto-delete old conversations
- Audit Logs: Track memory access and modifications
- Data Privacy: Comply with GDPR, CCPA, and other regulations
Sensitive Data Handling
- Filter PII: Remove personally identifiable information
- Redact Credentials: Never store API keys in memory
- Anonymize: Hash or tokenize sensitive data
- Expiry: Set TTL on memory storage
Memory API Reference
Get Chat Messages
Retrieve stored messages for a chatflow, optionally filtered by session ID.
Delete Messages
Remove stored messages for a chatflow or a specific session.
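Both operations address the same chat-message endpoint, differing only in HTTP method. A sketch of building the URL (the base URL and chatflow ID are placeholders; confirm the route against your instance's API reference):

```python
from urllib.parse import urlencode

BASE = "http://localhost:3000"  # placeholder Flowise instance

def chat_message_url(chatflow_id: str, session_id: str) -> str:
    """URL used for both retrieval (GET) and deletion (DELETE)
    of stored chat messages, filtered to one session."""
    query = urlencode({"sessionId": session_id})
    return f"{BASE}/api/v1/chatmessage/{chatflow_id}?{query}"

url = chat_message_url("my-chatflow-id", "user-42-conv-1")
# GET this URL to list messages; DELETE it to clear them.
```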
Best Practices Summary
Choose Right Type
Select memory type based on conversation length and token budget
Use Session IDs
Always use session IDs for multi-user applications
Monitor Token Usage
Track and optimize memory token consumption
Implement Cleanup
Set up automatic memory cleanup and retention policies
Test Edge Cases
Test with very long conversations and edge cases
Secure Data
Encrypt and protect sensitive conversation data
Next Steps
- Learn Prompt Engineering to work effectively with memory
- Explore Creating Chatflows to build memory-enabled workflows
- Review API Integration for session management
