Choose and configure the right memory provider for your agent
Memory providers enable agents to maintain context across multiple interactions. AgentLIB offers several memory strategies, each optimized for different use cases.
The simplest provider, BufferMemory, keeps recent messages in memory.
```typescript
import { createAgent } from '@agentlib/core'
import { BufferMemory } from '@agentlib/memory'
import { openai } from '@agentlib/openai'

const memory = new BufferMemory({
  maxMessages: 10, // Keep last 10 messages
})

const agent = createAgent({ name: 'chat-agent' })
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY }))
  .memory(memory)

const sessionId = 'user-123'

// First conversation
await agent.run({
  input: 'Hi! My name is Alice.',
  sessionId,
})

// Second conversation - remembers previous context
await agent.run({
  input: 'What is my name?',
  sessionId,
})
// Agent responds: "Your name is Alice."
```
BufferMemory is ideal for development and short conversations. For production, use persistent memory providers.
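Persistent providers are out of scope for this page, but the underlying idea can be sketched as a file-backed session store. Everything below (the `FileSessionStore` class and its `append`/`load`/`clear` methods) is a hypothetical illustration of persisting per-session history to disk, not part of `@agentlib/memory`:

```typescript
import * as fs from 'node:fs'
import * as path from 'node:path'

type StoredMessage = { role: 'user' | 'assistant'; content: string }

// Hypothetical file-backed store: one JSON file per session.
class FileSessionStore {
  constructor(private dir: string) {
    fs.mkdirSync(dir, { recursive: true })
  }

  private file(sessionId: string): string {
    return path.join(this.dir, `${encodeURIComponent(sessionId)}.json`)
  }

  append(sessionId: string, message: StoredMessage): void {
    const messages = this.load(sessionId)
    messages.push(message)
    fs.writeFileSync(this.file(sessionId), JSON.stringify(messages))
  }

  load(sessionId: string): StoredMessage[] {
    const f = this.file(sessionId)
    return fs.existsSync(f) ? JSON.parse(fs.readFileSync(f, 'utf8')) : []
  }

  clear(sessionId: string): void {
    fs.rmSync(this.file(sessionId), { force: true })
  }
}

const store = new FileSessionStore('./.sessions')
store.clear('demo')
store.append('demo', { role: 'user', content: 'Hi! My name is Alice.' })
console.log(store.load('demo').length) // 1
```

Because each session lives in its own file, history survives process restarts — the property BufferMemory lacks.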
SlidingWindowMemory automatically trims messages to fit within a token budget while keeping the most recent context.
1. Configure Sliding Window
```typescript
import { SlidingWindowMemory } from '@agentlib/memory'

const memory = new SlidingWindowMemory({
  maxTokens: 300, // Token budget
  maxTurns: 5, // Maximum conversation turns
})
```
2. Attach to Agent
```typescript
const agent = createAgent({
  name: 'sliding-agent',
  systemPrompt: 'You are a helpful assistant.',
})
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY }))
  .memory(memory)
```
3. Use Across Multiple Turns
```typescript
const sessionId = 'session-456'

const turns = [
  'My first favorite fruit is Apple.',
  'My second favorite fruit is Banana.',
  'My third favorite fruit is Cherry.',
  'My fourth favorite fruit is Dragonfruit.',
  'My fifth favorite fruit is Elderberry.',
  'Can you list all the fruits I mentioned?',
]

for (const input of turns) {
  const res = await agent.run({ input, sessionId })
  console.log(res.output)
}
```
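Under the hood, a sliding window amounts to walking the history from newest to oldest and stopping once the token budget is spent. The sketch below illustrates that idea only — it is not AgentLIB's implementation, and it approximates token counts as characters / 4, whereas a real provider would use a proper tokenizer:

```typescript
type Msg = { role: string; content: string }

// Crude token estimate: ~4 characters per token.
const approxTokens = (m: Msg): number => Math.ceil(m.content.length / 4)

function trimToBudget(messages: Msg[], maxTokens: number): Msg[] {
  const kept: Msg[] = []
  let total = 0
  // Walk from newest to oldest, keeping messages until the budget is spent.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i])
    if (total + cost > maxTokens) break
    kept.unshift(messages[i])
    total += cost
  }
  return kept
}

const history: Msg[] = [
  { role: 'user', content: 'a'.repeat(400) }, // ~100 tokens (oldest)
  { role: 'user', content: 'b'.repeat(40) }, // ~10 tokens
  { role: 'user', content: 'c'.repeat(40) }, // ~10 tokens (newest)
]
// With a 25-token budget, only the two newest messages survive.
console.log(trimToBudget(history, 25).length) // 2
```

The key property is that trimming always drops the *oldest* messages first, so the model keeps seeing the most recent turns.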
SummarizingMemory compresses older conversation history into a concise summary using a dedicated LLM.
```typescript
import { createAgent } from '@agentlib/core'
import { SummarizingMemory } from '@agentlib/memory'
import { openai } from '@agentlib/openai'

// Use a cheaper/faster model for summarization
const summarizerModel = openai({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o-mini',
})

const memory = new SummarizingMemory({
  model: summarizerModel,
  activeWindowTokens: 250, // Trigger summarization at 250 tokens
  summaryPrompt: 'Summarize the user profile and preferences accurately.',
})

const agent = createAgent({
  name: 'summarizer-agent',
  systemPrompt: 'You are a personalized travel assistant.',
})
  .provider(
    openai({
      apiKey: process.env.OPENAI_API_KEY,
      model: 'gpt-4o',
    }),
  )
  .memory(memory)

const sessionId = 'travel-planner'

const interaction = [
  'I am planning a trip to Japan next April. I love sushi and nature.',
  'I want to stay for 2 weeks. My budget is around $5000.',
  'I prefer boutique hotels over large chains.',
  'I also want to visit some hidden gems, not just tourist spots.',
  'Tell me what you know about my travel preferences so far.',
]

for (const input of interaction) {
  const res = await agent.run({ input, sessionId })
  console.log(res.output)

  // Check if a summary has been generated
  const currentSummary = memory.getSummary(sessionId)
  if (currentSummary) {
    console.log('Summary:', currentSummary)
  }
}
```
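The compaction step itself can be sketched independently of any LLM: once the active window exceeds its token budget, older messages are folded into the running summary. The `compactHistory` helper below is a hypothetical illustration, not SummarizingMemory's actual code; `summarize` stands in for the model call the provider makes with its configured `model`:

```typescript
type ChatMsg = { role: string; content: string }

// Fold the older half of the window into the summary once the budget is exceeded.
// Token counts are approximated as characters / 4 for illustration.
function compactHistory(
  summary: string,
  messages: ChatMsg[],
  activeWindowTokens: number,
  summarize: (summary: string, old: ChatMsg[]) => string,
): { summary: string; messages: ChatMsg[] } {
  const tokens = messages.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0)
  if (tokens <= activeWindowTokens) {
    // Still within budget: nothing to compact.
    return { summary, messages }
  }
  const cut = Math.floor(messages.length / 2)
  return {
    summary: summarize(summary, messages.slice(0, cut)),
    messages: messages.slice(cut),
  }
}

// Stand-in "summarizer" that just concatenates; a real provider calls its LLM here.
const naiveSummarize = (summary: string, old: ChatMsg[]): string =>
  [summary, ...old.map((m) => m.content)].filter(Boolean).join(' | ')

const msgs: ChatMsg[] = Array.from({ length: 4 }, (_, i) => ({
  role: 'user',
  content: `fact-${i} `.repeat(10),
}))
const out = compactHistory('', msgs, 20, naiveSummarize)
console.log(out.messages.length) // 2 — older half folded into the summary
```

This is why the active window stays small even in long sessions: detail migrates from raw messages into the summary, at the cost of an extra model call whenever the threshold is crossed.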
All memory providers use session IDs to scope conversations:
```typescript
const memory = new BufferMemory({ maxMessages: 10 })

const agent = createAgent({ name: 'agent' })
  .provider(model)
  .memory(memory)

// Conversation for user A
await agent.run({
  input: 'My name is Alice.',
  sessionId: 'user-alice',
})

// Conversation for user B (completely separate)
await agent.run({
  input: 'My name is Bob.',
  sessionId: 'user-bob',
})

// Retrieve Alice's conversation
await agent.run({
  input: 'What is my name?',
  sessionId: 'user-alice',
})
// Agent responds: "Your name is Alice."
```
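Conceptually, session scoping is just a keyed store: each sessionId maps to its own isolated history. A toy sketch of that idea (not AgentLIB internals):

```typescript
// Each sessionId keys an independent history; no cross-talk between users.
const sessions = new Map<string, string[]>()

function remember(sessionId: string, message: string): void {
  const history = sessions.get(sessionId) ?? []
  history.push(message)
  sessions.set(sessionId, history)
}

remember('user-alice', 'My name is Alice.')
remember('user-bob', 'My name is Bob.')

console.log(sessions.get('user-alice')?.length) // 1 — Bob's message is not here
```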
Memory providers also expose helpers for inspecting and clearing stored sessions:

```typescript
// Get raw entries for a session
const entries = await memory.entries('user-123')
console.log(`Messages in session: ${entries[0]?.messages.length}`)

// Clear a specific session
await memory.clear('user-123')

// Clear all sessions
await memory.clear()
```
The complete example below ties these pieces together with BufferMemory:

```typescript
import 'dotenv/config'
import { createAgent } from '@agentlib/core'
import { openai } from '@agentlib/openai'
import { BufferMemory } from '@agentlib/memory'
import { createLogger } from '@agentlib/logger'

const memory = new BufferMemory({
  maxMessages: 10,
})

const agent = createAgent({
  name: 'memory-demo-agent',
  systemPrompt: "You are a friendly assistant. Remember the user's name and preferences.",
})
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o-mini' }))
  .memory(memory)
  .use(createLogger({ level: 'info' }))

const sessionId = 'user-session-123'

console.log('--- Conversation Start ---\n')

// First turn
console.log('> User: Hi! My name is Sammy and I love coding in TypeScript.')
const res1 = await agent.run({
  input: 'Hi! My name is Sammy and I love coding in TypeScript.',
  sessionId,
})
console.log(`\nAgent: ${res1.output}\n`)

// Second turn - agent remembers context
console.log('> User: What is my favorite language?')
const res2 = await agent.run({
  input: 'What is my favorite language?',
  sessionId,
})
console.log(`\nAgent: ${res2.output}\n`)

// Third turn
console.log('> User: Do you remember my name?')
const res3 = await agent.run({
  input: 'Do you remember my name?',
  sessionId,
})
console.log(`\nAgent: ${res3.output}\n`)

// Inspect memory
const entries = await memory.entries(sessionId)
console.log(`Messages in session: ${entries[0]?.messages.length}`)
```