Memory enables your AI agents to remember previous interactions, maintain context across conversations, and provide personalized responses. Flowise provides multiple memory options for different use cases.

What is Memory?

Memory in AI workflows stores:
  • Conversation History: Previous messages and responses
  • Context: Important information from past interactions
  • State: Variables and data across sessions
  • User Preferences: Personalization data
Without memory, each interaction starts fresh with no knowledge of previous conversations.

Memory Types

Flowise offers different memory types for chatflows and agentflows:

Chatflow Memory

Buffer Memory

Stores all conversation history
  • Simple and straightforward
  • Grows unbounded
  • Best for short conversations

Buffer Window Memory

Keeps last N messages
  • Fixed size window
  • Prevents token overflow
  • Good for long conversations

Conversation Summary Memory

Summarizes old messages
  • Reduces token usage
  • Maintains key information
  • Best for very long conversations

Summary Buffer Memory

Combines window + summary
  • Recent messages in full
  • Older messages summarized
  • Balanced approach

Agentflow Memory

Agentflows use specialized Agent Memory nodes:
  • SQLite Agent Memory: Local file-based storage
  • PostgreSQL Agent Memory: Scalable database storage
  • MySQL Agent Memory: MySQL database integration
Regular chatflow memory nodes (Buffer Memory, etc.) are NOT compatible with agentflows. Always use Agent Memory nodes in agentflows.

Adding Memory to Chatflows

1. Choose Memory Type

Select the appropriate memory node from the Memory category based on your use case.

2. Add Memory Node

Drag the memory node onto your canvas.

3. Connect to Chain

Connect the memory node to your chain node (Conversational Retrieval Chain, LLM Chain, etc.).

[Chat Model] ────→ [Conversational Chain]
                            ↑
                    [Buffer Memory]

4. Configure Memory

Double-click the memory node to configure:
  • Memory key (default: “chat_history”)
  • Session ID handling
  • Storage backend (for persistent memory)

5. Enable Session IDs

For multi-user applications, enable session IDs to isolate conversations:
  • Check the “Session ID” option in the memory configuration
  • Pass a unique sessionId in API requests

Memory Configuration

Buffer Memory

Stores complete conversation history:
// Configuration
{
  "memoryKey": "chat_history",
  "inputKey": "input",
  "outputKey": "output",
  "returnMessages": true
}
Best for:
  • Short conversations (under 10 exchanges)
  • Detailed context requirements
  • Testing and development
Limitations:
  • No limit on history size
  • Can exceed token limits
  • Memory usage grows continuously
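The unbounded-growth limitation can be illustrated with a short sketch. This is plain JavaScript modeling the behavior described above, not Flowise's actual node implementation:

```javascript
// Minimal sketch of buffer memory: every exchange is kept forever.
// Illustrative only -- not Flowise's internal code.
class BufferMemory {
  constructor(memoryKey = "chat_history") {
    this.memoryKey = memoryKey;
    this.messages = [];
  }
  save(input, output) {
    this.messages.push({ role: "human", content: input });
    this.messages.push({ role: "ai", content: output });
  }
  load() {
    // Returns the FULL history -- token usage grows with every exchange.
    return { [this.memoryKey]: this.messages };
  }
}

const mem = new BufferMemory();
mem.save("Hi", "Hello! How can I help?");
mem.save("What's Flowise?", "A visual LLM workflow builder.");
console.log(mem.load().chat_history.length); // 4 -- and counting
```

Because `load()` always returns everything, prompt size grows linearly with conversation length, which is why this type is recommended only for short conversations.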

Buffer Window Memory

Maintains a sliding window of recent messages:
// Configuration
{
  "memoryKey": "chat_history",
  "k": 5,  // Keep last 5 exchanges
  "returnMessages": true
}
Best for:
  • Medium to long conversations
  • Token-constrained models
  • Most production use cases
Configuration Options:
  • k: Number of message pairs to keep (default: 5)
  • Higher k = more context but more tokens
  • Lower k = less context but fewer tokens
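The sliding-window trade-off can be sketched in a few lines. Again, this models the documented behavior rather than Flowise's actual implementation:

```javascript
// Sketch of buffer window memory: only the last k exchanges survive.
// Illustrative only -- not Flowise's node code.
class BufferWindowMemory {
  constructor(k = 5) {
    this.k = k; // number of message PAIRS to keep
    this.messages = [];
  }
  save(input, output) {
    this.messages.push({ role: "human", content: input });
    this.messages.push({ role: "ai", content: output });
    // Trim to the last k pairs (2 * k messages).
    this.messages = this.messages.slice(-2 * this.k);
  }
  load() {
    return this.messages;
  }
}

const win = new BufferWindowMemory(2);
for (let i = 1; i <= 5; i++) win.save(`question ${i}`, `answer ${i}`);
console.log(win.load().length);     // 4 -- only the last 2 exchanges remain
console.log(win.load()[0].content); // "question 4"
```

Raising `k` keeps more context in the prompt at the cost of more tokens per request; lowering it does the opposite.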

Conversation Summary Memory

Summarizes older messages to save tokens:
// Configuration
{
  "memoryKey": "chat_history",
  "llm": ChatOpenAI,  // LLM for summarization
  "maxTokenLimit": 1000  // When to start summarizing
}
Best for:
  • Very long conversations
  • Token optimization
  • Cost reduction
Trade-offs:
  • Additional LLM calls for summarization
  • Potential information loss
  • Slight latency increase
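The summarization trigger works roughly as follows. In this sketch the `summarize` callback and the crude `countTokens` estimate are stand-ins: Flowise uses the connected LLM for summarization and a real tokenizer for counting.

```javascript
// Sketch of conversation summary memory. `summarize` is a stand-in for an
// LLM call; `countTokens` is a crude ~4-chars-per-token estimate.
function countTokens(text) {
  return Math.ceil(text.length / 4);
}

class SummaryMemory {
  constructor(maxTokenLimit, summarize) {
    this.maxTokenLimit = maxTokenLimit;
    this.summarize = summarize; // (summarySoFar, oldMessages) => newSummary
    this.summary = "";
    this.messages = [];
  }
  save(input, output) {
    this.messages.push(`Human: ${input}`, `AI: ${output}`);
    const total = countTokens(this.summary + this.messages.join("\n"));
    if (total > this.maxTokenLimit) {
      // Fold the buffered messages into the running summary (extra LLM call).
      this.summary = this.summarize(this.summary, this.messages);
      this.messages = [];
    }
  }
  load() {
    return { summary: this.summary, recent: this.messages };
  }
}

// Fake summarizer for illustration; a real one calls the LLM.
const mem = new SummaryMemory(10, (prev, msgs) =>
  `${prev} [${msgs.length} msgs summarized]`.trim());
mem.save("Tell me about memory in Flowise", "Memory stores conversation history...");
console.log(mem.load().summary); // "[2 msgs summarized]"
```

This is where the trade-offs above come from: each fold is an extra LLM call (latency and cost), and details not captured by the summary are lost.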

Conversation Summary Buffer Memory

Combines window and summary approaches:
// Configuration  
{
  "memoryKey": "chat_history",
  "maxTokenLimit": 1000,
  "llm": ChatOpenAI,
  "k": 3  // Keep last 3 full, summarize older
}
Best for:
  • Balanced token usage
  • Production applications
  • Long-term conversations

Persistent Memory Storage

Store memory across sessions using external storage:

Redis-Backed Memory

1. Add Redis Memory Node

Use Redis Backed Chat Memory or Upstash Redis Backed Chat Memory.

2. Configure Redis Connection

  • Redis URL or connection details
  • Session ID prefix
  • TTL (time to live) for messages

3. Connect to Workflow

Connect Redis memory to your chain node as you would a regular memory node.

4. Use Session IDs

Always pass sessionId in API requests:
{
  "question": "Hello",
  "overrideConfig": {
    "sessionId": "user-123-session"
  }
}

MongoDB Memory

Store conversations in MongoDB:
// Configuration
{
  "mongodbUrl": "mongodb://localhost:27017",
  "databaseName": "flowise",
  "collectionName": "chat_history",
  "sessionId": "user-session-id"
}

Zep Memory

Use Zep for advanced memory management:
  • Automatic Summarization: Intelligent context compression
  • Fact Extraction: Extract and store key facts
  • Memory Search: Semantic search across history
  • Multi-user: Built-in session management
Zep Cloud offers managed hosting; Zep can also be self-hosted.

DynamoDB Memory

AWS DynamoDB for serverless storage:
// Configuration
{
  "tableName": "flowise-chat-history",
  "partitionKey": "sessionId",
  "sortKey": "timestamp",
  "region": "us-east-1"
}

Adding Memory to Agentflows

1. Add Agent Memory Node

From the Memory category, add one of:
  • SQLite Agent Memory (development)
  • PostgreSQL Agent Memory (production)
  • MySQL Agent Memory (production)

2. Configure Database

For PostgreSQL/MySQL:
  • Database connection string
  • Table name (auto-created if needed)
  • Session configuration
For SQLite:
  • File path for the database
  • The file is auto-created if it does not exist

3. Connect in Flow

Agent memory nodes connect differently than chatflow memory:

[Start] → [Agent Node with Memory] → [Tool] → [End]

The memory is configured within the agent node settings.

4. Enable Checkpointing

Agent memory uses checkpointing for state management:
  • Automatically saves state at each step
  • Enables time-travel debugging
  • Supports branching conversations

Session Management

Session IDs

Session IDs isolate conversations between users:
// API request with session ID
fetch('/api/v1/prediction/chatflow-id', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: "What did we discuss earlier?",
    overrideConfig: {
      sessionId: "user-12345"  // Unique per user
    }
  })
});

Session ID Best Practices

Use Unique IDs

Generate unique session IDs per user and conversation

Persist IDs

Store session IDs in your application’s user session

Handle Expiry

Implement session expiration and cleanup

Secure IDs

Don’t expose session IDs to other users

Multi-Tenant Isolation

For multi-tenant applications:
// Include tenant ID in session ID
const sessionId = `tenant-${tenantId}-user-${userId}-${conversationId}`;
This ensures complete isolation between tenants.

Memory Patterns

Short-Term Memory

For single conversations:
[Chat Model] → [Conversational Chain]
                        ↑
              [Buffer Window Memory (k=5)]

Long-Term Memory

For persistent user context:
[Chat Model] → [Conversational Chain]
                        ↑
                [Redis Memory + Summary]

Hybrid Memory

Combine multiple memory types:
[Chat Model] → [Conversational Chain]
                        │
               ┌────────┴────────┐
       [Window Memory]   [Vector Store Memory]
  • Window memory: Recent conversation
  • Vector store: Long-term facts and context
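The hybrid pattern can be sketched as follows. The keyword-overlap `searchFacts` below fakes what a vector store does with embeddings, purely for illustration:

```javascript
// Sketch of hybrid memory: recent turns kept verbatim, long-term facts
// retrieved on demand. `searchFacts` fakes semantic search with keyword
// overlap; a real setup would query a vector store.
class HybridMemory {
  constructor(k = 3) {
    this.k = k;
    this.recent = [];
    this.facts = []; // stand-in for a vector store of extracted facts
  }
  save(input, output) {
    this.recent.push({ input, output });
    this.recent = this.recent.slice(-this.k); // sliding window
  }
  addFact(fact) {
    this.facts.push(fact);
  }
  searchFacts(query) {
    const words = query.toLowerCase().split(/\s+/);
    return this.facts.filter(f => words.some(w => f.toLowerCase().includes(w)));
  }
  buildContext(query) {
    return { recentTurns: this.recent, relevantFacts: this.searchFacts(query) };
  }
}

const hm = new HybridMemory(2);
hm.addFact("User prefers Python examples");
hm.save("hi", "hello");
hm.save("tell me about loops", "Loops repeat code...");
hm.save("and functions?", "Functions group code...");
const ctx = hm.buildContext("python question");
console.log(ctx.recentTurns.length, ctx.relevantFacts.length); // 2 1
```

The prompt then gets both pieces: verbatim recent turns for conversational flow, plus only the stored facts relevant to the current question.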

Memory Optimization

Token Management

Optimize token usage:
  1. Use Window Memory: Limit history size
  2. Enable Summarization: Compress old messages
  3. Set Token Limits: Configure maxTokenLimit appropriately
  4. Filter Messages: Remove system messages or metadata

Performance Optimization

  • Batch Operations: Group memory operations when possible
  • Async Storage: Use async writes to external storage
  • Caching: Cache frequently accessed memory
  • Indexing: Add indexes to database memory tables

Cost Optimization

For development and testing:
  • Use in-memory or SQLite storage
  • No external dependencies
  • Free to run

Advanced Memory Features

Memory Retrieval

Query memory programmatically:
// Get conversation history for a session
GET /api/v1/chatmessage/{chatflowId}
?sessionId=user-session-id
&startDate=2024-01-01

Memory Deletion

Clear memory for privacy compliance:
// Delete conversation history
DELETE /api/v1/chatmessage/{chatflowId}
?sessionId=user-session-id
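Both endpoints can be wrapped in application code. In this sketch the base URL and chatflow ID are placeholders for your deployment; the endpoint paths follow the requests shown above:

```javascript
// Sketch: retrieve and delete a session's history via the Chat Message API.
// BASE and CHATFLOW_ID are placeholders for your deployment.
const BASE = "http://localhost:3000";
const CHATFLOW_ID = "your-chatflow-id";

function chatMessageUrl(sessionId, extra = {}) {
  const params = new URLSearchParams({ sessionId, ...extra });
  return `${BASE}/api/v1/chatmessage/${CHATFLOW_ID}?${params}`;
}

async function getHistory(sessionId) {
  const res = await fetch(chatMessageUrl(sessionId, { sort: "ASC" }));
  if (!res.ok) throw new Error(`GET failed: ${res.status}`);
  return res.json(); // messages with timestamps and metadata
}

async function deleteHistory(sessionId) {
  const res = await fetch(chatMessageUrl(sessionId), { method: "DELETE" });
  if (!res.ok) throw new Error(`DELETE failed: ${res.status}`);
}

console.log(chatMessageUrl("user-session-id", { sort: "ASC" }));
// http://localhost:3000/api/v1/chatmessage/your-chatflow-id?sessionId=user-session-id&sort=ASC
```

A `deleteHistory` call per user is a convenient hook for "right to erasure" requests under GDPR-style regulations.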

Memory Export

Export conversations for analysis:
  • JSON format with full message history
  • Includes timestamps and metadata
  • Useful for training and analytics

Debugging Memory Issues

Memory not persisting:
  • Verify session ID is consistent across requests
  • Check memory node is properly connected
  • For external storage, verify connection credentials
  • Check database/storage logs for errors

Context being lost:
  • Increase window size (k parameter)
  • Check if session IDs are being passed
  • Verify memory key matches chain expectations
  • Review summarization settings

Token limits exceeded:
  • Reduce window size
  • Enable summarization
  • Use Summary Buffer Memory
  • Set maxTokenLimit parameter

Slow memory performance:
  • Optimize database queries
  • Add indexes to memory tables
  • Use Redis for faster access
  • Enable caching

Sessions mixing between users:
  • Ensure unique session IDs per user
  • Verify session ID is in overrideConfig
  • Check for session ID collisions
  • Review multi-tenant isolation

Memory Security

Memory can contain sensitive user data. Implement appropriate security measures.

Security Best Practices

  1. Encryption: Encrypt memory at rest and in transit
  2. Access Control: Restrict memory access by session ID
  3. Retention Policies: Auto-delete old conversations
  4. Audit Logs: Track memory access and modifications
  5. Data Privacy: Comply with GDPR, CCPA, and other regulations

Sensitive Data Handling

  • Filter PII: Remove personally identifiable information
  • Redact Credentials: Never store API keys in memory
  • Anonymize: Hash or tokenize sensitive data
  • Expiry: Set TTL on memory storage
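A pre-storage redaction pass can be sketched as below. The regex patterns are illustrative only; production PII detection should use a dedicated library or service:

```javascript
// Sketch of a redaction pass run before messages reach memory storage.
// Patterns are illustrative, not production-grade PII detection.
const REDACTIONS = [
  { name: "email",   pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: "api_key", pattern: /\bsk-[A-Za-z0-9]{16,}\b/g }, // e.g. OpenAI-style keys
  { name: "ssn",     pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
];

function redact(text) {
  return REDACTIONS.reduce(
    (out, { name, pattern }) => out.replace(pattern, `[REDACTED ${name}]`),
    text
  );
}

console.log(redact("Contact me at jane@example.com, key sk-abcdef1234567890abcd"));
// Contact me at [REDACTED email], key [REDACTED api_key]
```

Running this on both the user input and the model output before calling the memory node's save step keeps credentials and PII out of persistent storage entirely.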

Memory API Reference

Get Chat Messages

GET /api/v1/chatmessage/{chatflowId}
?sessionId={sessionId}
&sort=ASC
&startDate=2024-01-01
&endDate=2024-12-31

Delete Messages

DELETE /api/v1/chatmessage/{chatflowId}
?sessionId={sessionId}
See Chat Message API Reference for complete documentation.

Best Practices Summary

Choose Right Type

Select memory type based on conversation length and token budget

Use Session IDs

Always use session IDs for multi-user applications

Monitor Token Usage

Track and optimize memory token consumption

Implement Cleanup

Set up automatic memory cleanup and retention policies

Test Edge Cases

Test with very long conversations and edge cases

Secure Data

Encrypt and protect sensitive conversation data
