BufferMemory Example

BufferMemory is the simplest memory strategy: it keeps conversation history in-process, in RAM. It retains up to a configurable number of the most recent messages, making it ideal for development and simple use cases.

What BufferMemory Does

BufferMemory:
  • Stores conversation history in memory (RAM)
  • Keeps a configurable maximum number of messages
  • Automatically removes the oldest messages when the limit is reached
  • Maintains context across multiple agent runs using session IDs
  • Perfect for development, testing, and simple applications
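
The core behavior described above is a FIFO (first-in, first-out) buffer. The sketch below illustrates the pruning idea in plain TypeScript; it is not BufferMemory's actual implementation, just a minimal model of how a capped message buffer behaves.

```typescript
// Illustrative sketch of FIFO pruning (not the library's internals):
// once the buffer exceeds maxMessages, the oldest entries are dropped.
type Message = { role: 'user' | 'assistant'; content: string }

class FifoBuffer {
    private messages: Message[] = []

    constructor(private maxMessages: number) {}

    add(message: Message): void {
        this.messages.push(message)
        // Prune from the front until we are back under the limit
        while (this.messages.length > this.maxMessages) {
            this.messages.shift()
        }
    }

    all(): Message[] {
        return [...this.messages]
    }
}

const buffer = new FifoBuffer(3)
buffer.add({ role: 'user', content: 'first' })
buffer.add({ role: 'assistant', content: 'second' })
buffer.add({ role: 'user', content: 'third' })
buffer.add({ role: 'assistant', content: 'fourth' }) // 'first' is dropped
console.log(buffer.all().map(m => m.content)) // [ 'second', 'third', 'fourth' ]
```

Note that pruning by message count is a simplification: real conversations vary widely in tokens per message, which is why longer-running sessions often move to windowed or summarizing strategies.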

Complete Working Example

import 'dotenv/config'

import { createAgent } from '@agentlib/core'
import { openai } from '@agentlib/openai'
import { BufferMemory } from '@agentlib/memory'
import { createLogger } from '@agentlib/logger'

/**
 * Chat History Example
 * 
 * This example demonstrates how to use the MemoryProvider to maintain
 * context across multiple runs using sessions.
 */

async function main() {
    // 1. Initialize Memory
    // BufferMemory keeps history in-process. 
    // In a real app, you might use a Redis or Database provider.
    const memory = new BufferMemory({
        maxMessages: 10 // Keep last 10 messages
    })

    // 2. Setup the Agent
    const agent = createAgent({
        name: 'memory-demo-agent',
        systemPrompt: "You are a friendly assistant. Remember the user's name and preferences.",
    })
        .provider(openai({
            apiKey: process.env['OPENAI_API_KEY'] ?? '',
            model: process.env['OPENAI_MODEL'] ?? 'gpt-4o-mini'
        }))
        .memory(memory)
        .use(createLogger({ level: 'info' }))

    const sessionId = 'user-session-123'

    console.log('--- Conversation Start ---\n')

    // First Turn: Introduce ourselves
    console.log('> User: Hi! My name is Sammy and I love coding in TypeScript.')
    const res1 = await agent.run({
        input: 'Hi! My name is Sammy and I love coding in TypeScript.',
        sessionId
    })
    console.log(`\nAgent: ${res1.output}\n`)

    // Second Turn: Ask a follow-up without repeating the context
    console.log('> User: What is my favorite language?')
    const res2 = await agent.run({
        input: 'What is my favorite language?',
        sessionId
    })
    console.log(`\nAgent: ${res2.output}\n`)

    // Third Turn: Ask about the name
    console.log('> User: Do you remember my name?')
    const res3 = await agent.run({
        input: 'Do you remember my name?',
        sessionId
    })
    console.log(`\nAgent: ${res3.output}\n`)

    console.log('--- Inspecting Memory ---')
    const entries = await memory.entries(sessionId)
    console.log(`Messages in session "${sessionId}":`, entries[0]?.messages.length)

    console.log('\n--- Conversation End ---')
}

main().catch(console.error)

Key Configuration

maxMessages

Controls how many messages to keep in memory:
const memory = new BufferMemory({
    maxMessages: 10 // Keep last 10 messages
})

How It Works

  1. Session Management: Use a unique sessionId to maintain separate conversation contexts
  2. Automatic Pruning: When the message count exceeds maxMessages, oldest messages are removed
  3. Context Persistence: The agent remembers previous conversation turns within the same session
  4. Memory Inspection: You can inspect stored messages using memory.entries(sessionId)
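
Session management (step 1) boils down to keying history by sessionId, so different users or conversations never share context. The sketch below models that isolation in plain TypeScript; the class and method names are hypothetical, not part of @agentlib.

```typescript
// Hypothetical model of per-session isolation: each sessionId maps to
// its own independent message list.
type Msg = { role: string; content: string }

class SessionStore {
    private sessions = new Map<string, Msg[]>()

    append(sessionId: string, msg: Msg): void {
        const history = this.sessions.get(sessionId) ?? []
        history.push(msg)
        this.sessions.set(sessionId, history)
    }

    history(sessionId: string): Msg[] {
        return this.sessions.get(sessionId) ?? []
    }
}

const store = new SessionStore()
store.append('user-session-123', { role: 'user', content: 'My name is Sammy.' })
store.append('user-session-456', { role: 'user', content: 'My name is Alex.' })

// Each session only sees its own messages
console.log(store.history('user-session-123').length) // 1
console.log(store.history('user-session-456').length) // 1
```

In practice this means reusing the same sessionId across agent.run() calls (as the example above does) is what gives the agent its memory, and switching to a fresh sessionId starts a blank conversation.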

When to Use BufferMemory

  • Development and testing
  • Simple applications with low traffic
  • Short conversations that don’t exceed token limits
  • When you don’t need persistence across restarts

Production Considerations

For production applications, consider:
  • Using a database-backed memory provider for persistence
  • Implementing Redis for distributed systems
  • Using SlidingWindowMemory or SummarizingMemory for long conversations
  • Monitoring memory usage in high-traffic scenarios
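
One way to prepare for that migration is to code against a small provider interface from the start, so an in-memory store can later be swapped for Redis or a database. The interface and names below are assumptions for illustration only; check @agentlib/memory for the actual contract a custom provider must satisfy.

```typescript
// Hypothetical provider contract (illustrative, not the library's API).
interface PersistentMemoryProvider {
    append(sessionId: string, message: { role: string; content: string }): Promise<void>
    entries(sessionId: string): Promise<{ role: string; content: string }[]>
    clear(sessionId: string): Promise<void>
}

// Minimal in-memory reference implementation. A production version would
// back these methods with Redis or a database instead of a Map, which is
// why the interface is async even though this implementation is not.
class InMemoryProvider implements PersistentMemoryProvider {
    private store = new Map<string, { role: string; content: string }[]>()

    async append(sessionId: string, message: { role: string; content: string }): Promise<void> {
        const list = this.store.get(sessionId) ?? []
        list.push(message)
        this.store.set(sessionId, list)
    }

    async entries(sessionId: string): Promise<{ role: string; content: string }[]> {
        return this.store.get(sessionId) ?? []
    }

    async clear(sessionId: string): Promise<void> {
        this.store.delete(sessionId)
    }
}
```

Keeping the interface async from day one means swapping in a networked backend later changes only the implementation, not every call site.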