Airi’s memory system manages conversation history, context retrieval, and persistent storage. It stores data in the browser via IndexedDB (through unstorage) and DuckDB WASM, and provides configurable retention and retrieval policies.

Memory Architecture

Airi’s memory system has multiple layers:
┌─────────────────────────────────────────┐
│         Conversation Layer              │
│   (Active context in working memory)    │
└─────────────────────────────────────────┘

┌─────────────────────────────────────────┐
│         Session Storage Layer           │
│  (IndexedDB via unstorage + DuckDB)     │
└─────────────────────────────────────────┘

┌─────────────────────────────────────────┐
│         Context Providers Layer         │
│   (Dynamic context from modules)        │
└─────────────────────────────────────────┘

Session Management

Conversations are organized into sessions:
interface ChatSessionMeta {
  sessionId: string
  title: string
  createdAt: number
  updatedAt: number
  userId: string
  characterId: string
  pinned: boolean
  archived: boolean
  tags: string[]
}

interface ChatHistoryItem {
  id: string
  role: 'system' | 'user' | 'assistant' | 'tool'
  content: string | CommonContentPart[]
  createdAt: number
  
  // For assistant messages
  slices?: ChatSlices[]
  tool_results?: ToolResult[]
  categorization?: {
    speech: string
    reasoning?: string
  }
}
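
For example, a plain user message under these types looks like this (nanoid, also used by the persistence examples below, generates the id):
import { nanoid } from 'nanoid'

const userMessage: ChatHistoryItem = {
  id: nanoid(),
  role: 'user',
  content: 'Hello! What can you do?',
  createdAt: Date.now()
}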

Database Configuration

Storage Backend

Airi uses multiple storage backends:
// packages/stage-ui/src/database/storage.ts
import { createStorage } from 'unstorage'
import indexedDbDriver from 'unstorage/drivers/indexedb'
import memoryDriver from 'unstorage/drivers/memory'

export const storage = createStorage({
  driver: memoryDriver()
})

// Persistent storage
storage.mount('local', indexedDbDriver({ 
  base: 'airi-local' 
}))

// Sync queue (for cloud sync)
storage.mount('outbox', indexedDbDriver({ 
  base: 'airi-sync-queue' 
}))
Storage Mounts:
  • local: Persistent IndexedDB storage
  • outbox: Queue for cloud synchronization
  • memory: In-memory cache (default)
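
Mounted backends are addressed by key prefix; unprefixed keys fall through to the default in-memory driver. A sketch with illustrative key names:
// Persist session metadata in IndexedDB
await storage.setItem(`local:sessions/${sessionId}`, meta)

// Read it back
const restored = await storage.getItem(`local:sessions/${sessionId}`)

// Queue an operation for cloud sync
await storage.setItem(`outbox:${operationId}`, operation)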

DuckDB Integration

For advanced querying and analytics:
import { DuckDB } from '@proj-airi/duckdb-wasm'
import { drizzle } from '@proj-airi/drizzle-duckdb-wasm'
import { desc, eq } from 'drizzle-orm'
import { chatSessions } from './schema' // app-defined Drizzle table (path illustrative)

// Initialize DuckDB in browser
const db = await DuckDB.create()
const orm = drizzle(db)

// Query conversations
const recentChats = await orm
  .select()
  .from(chatSessions)
  .where(eq(chatSessions.userId, currentUser))
  .orderBy(desc(chatSessions.updatedAt))
  .limit(10)
DuckDB Features:
  • SQL queries on conversation history
  • Aggregations and analytics
  • Full-text search (future)
  • Vector similarity search (future)
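
As a sketch of the analytics this enables (assuming a chatMessages Drizzle table mirroring ChatHistoryItem), counting messages per role:
import { count } from 'drizzle-orm'

const perRole = await orm
  .select({ role: chatMessages.role, total: count() })
  .from(chatMessages)
  .groupBy(chatMessages.role)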

Context Management

Active Context

Context is dynamically provided by modules:
interface ContextMessage {
  sourceType: string           // "module:discord", "sensor:time"
  sourceId: string
  content: Record<string, unknown>
  strategy: ContextUpdateStrategy
  priority?: number
  expiresAt?: number
}

enum ContextUpdateStrategy {
  ReplaceSelf = 'replace-self',   // Replace previous context from this source
  AppendSelf = 'append-self'      // Append to contexts from this source
}
Built-in Context Providers:
  • Datetime: Current date/time
  • User Profile: User preferences and info
  • Module State: State from active modules (Discord, Minecraft, etc.)
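
A module publishes context by handing a ContextMessage to the context store. A minimal sketch (upsertContext is a placeholder name; see the context store listed under Code Reference for the actual API):
chatContext.upsertContext({
  sourceType: 'sensor:time',
  sourceId: 'datetime',
  content: {
    timestamp: Date.now(),
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone
  },
  strategy: ContextUpdateStrategy.ReplaceSelf
})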

Context Injection

Context is injected before each LLM call:
// From chat orchestrator (packages/stage-ui/src/stores/chat.ts)

const contextsSnapshot = chatContext.getContextsSnapshot()
if (Object.keys(contextsSnapshot).length > 0) {
  const contextMessage = {
    role: 'user',
    content: [
      {
        type: 'text',
        text: Object.entries(contextsSnapshot)
          .map(([key, value]) => 
            `Module ${key}: ${JSON.stringify(value)}`
          )
          .join('\n')
      }
    ]
  }
  
  // Insert after system prompt, before conversation
  messages = [
    systemMessage,
    contextMessage,
    ...conversationMessages
  ]
}
Example Context:
{
  "module:datetime": {
    "timestamp": 1735689600000,
    "formatted": "2026-03-04T18:00:00Z",
    "timezone": "UTC"
  },
  "module:discord": {
    "activeChannel": "general",
    "recentMentions": 3,
    "voiceConnected": false
  }
}

Message History Management

Session Creation

import { useChatSessionStore } from '@proj-airi/stage-ui'

const sessionStore = useChatSessionStore()

// Create new session
const sessionId = await sessionStore.createSession({
  title: 'New Conversation',
  characterId: 'default'
})

// Switch to session
sessionStore.activeSessionId = sessionId

Message Persistence

Messages are automatically persisted after each turn:
// Automatic persistence
sessionMessagesForSend.push({
  role: 'user',
  content: userMessage,
  createdAt: Date.now(),
  id: nanoid()
})

chatSession.persistSessionMessages(sessionId)

// After assistant response
sessionMessagesForSend.push(assistantMessage)
chatSession.persistSessionMessages(sessionId)
Persistence is queued to avoid blocking:
function enqueuePersist(task: () => Promise<void>) {
  persistQueue = persistQueue.then(task, task)
  return persistQueue
}
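
Chaining with .then(task, task) runs each persist after the previous one settles, succeed or fail, so writes stay ordered and one failure never stalls the queue:
enqueuePersist(() => chatSession.persistSessionMessages(sessionId))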

Context Window Limits

Messages are automatically pruned to fit model context:
// Approximate token counting; content may be a string or content-part array
function contentToText(content: string | CommonContentPart[]): string {
  return typeof content === 'string'
    ? content
    : content.map(part => ('text' in part ? part.text : '')).join(' ')
}

function estimateTokens(content: string | CommonContentPart[]): number {
  // Rough estimate: 1 token ≈ 4 characters
  return Math.ceil(contentToText(content).length / 4)
}

function pruneMessages(
  messages: ChatHistoryItem[],
  maxTokens: number
): ChatHistoryItem[] {
  // Always keep system prompt
  const systemMsg = messages[0]
  let remainingMessages = messages.slice(1)
  
  let totalTokens = estimateTokens(systemMsg.content)
  const kept: ChatHistoryItem[] = [systemMsg]
  
  // Keep recent messages, remove oldest first
  for (let i = remainingMessages.length - 1; i >= 0; i--) {
    const msg = remainingMessages[i]
    const tokens = estimateTokens(msg.content)
    
    if (totalTokens + tokens > maxTokens) break
    
    totalTokens += tokens
    kept.unshift(msg)
  }
  
  return kept
}
Default Limits:
  • GPT-4o: 120k tokens (reserve 8k for the response)
  • Claude 3.5: 180k tokens (reserve 20k for the response)
  • Local models: varies (typically 4k-8k)
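
Putting the pieces together (numbers mirror the GPT-4o default above):
const MODEL_CONTEXT = 120_000
const RESPONSE_RESERVE = 8_000

const pruned = pruneMessages(history, MODEL_CONTEXT - RESPONSE_RESERVE)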

Retention Policies

Session Retention

Configure how long to keep sessions:
interface RetentionPolicy {
  maxSessions: number         // Max total sessions to keep
  maxAge: number              // Max age in milliseconds
  archiveAfter: number        // Archive inactive after this long
  deleteArchived: boolean     // Delete archived sessions
}

const defaultPolicy: RetentionPolicy = {
  maxSessions: 1000,
  maxAge: 90 * 24 * 60 * 60 * 1000,  // 90 days
  archiveAfter: 30 * 24 * 60 * 60 * 1000,  // 30 days
  deleteArchived: false
}
Example implementation (custom; not built into Airi):
async function applyRetentionPolicy(policy: RetentionPolicy) {
  const sessions = await sessionStore.getAllSessions()
  const now = Date.now()
  
  // Archive old sessions
  for (const session of sessions) {
    const age = now - session.updatedAt
    if (age > policy.archiveAfter && !session.archived) {
      await sessionStore.archiveSession(session.sessionId)
    }
  }
  
  // Delete expired
  if (policy.deleteArchived) {
    for (const session of sessions) {
      const age = now - session.createdAt
      if (session.archived && age > policy.maxAge) {
        await sessionStore.deleteSession(session.sessionId)
      }
    }
  }
  
  // Limit total count
  const active = sessions
    .filter(s => !s.archived)
    .sort((a, b) => b.updatedAt - a.updatedAt)
  
  if (active.length > policy.maxSessions) {
    const toArchive = active.slice(policy.maxSessions)
    for (const session of toArchive) {
      await sessionStore.archiveSession(session.sessionId)
    }
  }
}

// Run periodically
setInterval(() => applyRetentionPolicy(defaultPolicy), 3600000) // 1 hour

Message-Level Retention

For fine-grained control:
interface MessageRetentionPolicy {
  keepSystemMessages: boolean     // Always keep system prompts
  maxMessagesPerSession: number   // Max messages per session
  summarizeOld: boolean           // Summarize old messages
  summaryThreshold: number        // Messages before summarizing
}

const messagePolicy: MessageRetentionPolicy = {
  keepSystemMessages: true,
  maxMessagesPerSession: 200,
  summarizeOld: true,
  summaryThreshold: 50
}
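
A sketch of enforcing the per-session cap (getSessionMessages and replaceSessionMessages are hypothetical store helpers):
async function applyMessagePolicy(
  sessionId: string,
  policy: MessageRetentionPolicy
) {
  const messages = await sessionStore.getSessionMessages(sessionId)
  if (messages.length <= policy.maxMessagesPerSession) return

  // Keep system prompts regardless of the cap, if configured
  const system = policy.keepSystemMessages
    ? messages.filter(m => m.role === 'system')
    : []
  const rest = messages.filter(m => m.role !== 'system')
  const budget = Math.max(0, policy.maxMessagesPerSession - system.length)
  const kept = budget > 0 ? rest.slice(-budget) : []

  await sessionStore.replaceSessionMessages(sessionId, [...system, ...kept])
}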

Context Retrieval

Semantic Search (Future)

Planned feature using embeddings:
// Future API
interface EmbeddingConfig {
  provider: string              // "openai", "local"
  model: string                 // "text-embedding-3-small"
  dimensions: number            // 1536
}

interface SearchQuery {
  query: string
  limit: number
  threshold: number             // Similarity threshold (0-1)
  filters?: {
    sessionIds?: string[]
    characterIds?: string[]
    dateRange?: [number, number]
    roles?: Array<'user' | 'assistant'>
  }
}

// Search conversation history
const results = await memorySystem.search({
  query: "how to configure providers",
  limit: 5,
  threshold: 0.75,
  filters: {
    roles: ['assistant']
  }
})
Until then, a basic substring search over loaded sessions:
function searchMessages(
  sessions: Record<string, ChatHistoryItem[]>,
  query: string
): Array<{ sessionId: string, message: ChatHistoryItem }> {
  const results: Array<{ sessionId: string, message: ChatHistoryItem }> = []
  const queryLower = query.toLowerCase()
  
  for (const [sessionId, messages] of Object.entries(sessions)) {
    for (const message of messages) {
      // extractMessageContent flattens string | CommonContentPart[] to text
      const content = extractMessageContent(message)
      if (content.toLowerCase().includes(queryLower)) {
        results.push({ sessionId, message })
      }
    }
  }
  
  return results
}

Memory Retrieval Settings

Retrieval Strategy

Configure how context is retrieved:
interface RetrievalConfig {
  strategy: 'recent' | 'relevant' | 'hybrid'
  recentCount: number           // For 'recent' strategy
  relevanceThreshold: number    // For 'relevant' strategy (0-1)
  hybridWeights: {
    recency: number              // 0-1
    relevance: number            // 0-1
  }
}

const retrievalConfig: RetrievalConfig = {
  strategy: 'hybrid',
  recentCount: 10,
  relevanceThreshold: 0.7,
  hybridWeights: {
    recency: 0.6,
    relevance: 0.4
  }
}
Strategies:
  • Recent: Simply take last N messages
  • Relevant: Semantic search for related messages (requires embeddings)
  • Hybrid: Combine recency and relevance
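
A sketch of hybrid scoring under this config (the one-day recency decay is an illustrative choice; relevance would come from the embedding search above):
function hybridScore(
  message: ChatHistoryItem,
  relevance: number, // 0-1, e.g. cosine similarity from an embedding search
  config: RetrievalConfig
): number {
  const ageMs = Date.now() - message.createdAt
  const recency = Math.exp(-ageMs / (24 * 60 * 60 * 1000)) // decays over ~1 day
  return config.hybridWeights.recency * recency
    + config.hybridWeights.relevance * relevance
}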

Context Summarization

For long conversations:
async function summarizeOldContext(
  messages: ChatHistoryItem[],
  threshold: number
): Promise<ChatHistoryItem[]> {
  if (messages.length <= threshold) return messages
  
  const system = messages[0]
  const toSummarize = messages.slice(1, -threshold)
  const recent = messages.slice(-threshold)
  
  // Generate summary using LLM
  const summary = await llm.generate({
    prompt: `Summarize this conversation concisely:\n${toSummarize.map(m => `${m.role}: ${contentToText(m.content)}`).join('\n')}`,
    model: 'gpt-4o-mini'
  })
  
  const summaryMessage: ChatHistoryItem = {
    role: 'system',
    content: `Previous conversation summary: ${summary}`,
    createdAt: Date.now(),
    id: nanoid()
  }
  
  return [system, summaryMessage, ...recent]
}
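
In the send path this would typically run before pruning, gated on the message policy above:
if (messagePolicy.summarizeOld)
  messages = await summarizeOldContext(messages, messagePolicy.summaryThreshold)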

Performance Tuning

Database Optimization

IndexedDB Settings:
const dbConfig = {
  name: 'airi-local',
  version: 1,
  stores: {
    chatSessions: {
      keyPath: 'sessionId',
      indexes: [
        { name: 'userId', keyPath: 'userId' },
        { name: 'characterId', keyPath: 'characterId' },
        { name: 'updatedAt', keyPath: 'updatedAt' },
        { name: 'createdAt', keyPath: 'createdAt' }
      ]
    },
    chatMessages: {
      keyPath: 'id',
      indexes: [
        { name: 'sessionId', keyPath: 'sessionId' },
        { name: 'createdAt', keyPath: 'createdAt' },
        { name: 'role', keyPath: 'role' }
      ]
    }
  }
}
Query Optimization:
// Use indexes for filtering (illustrative query-builder API; the real
// queries live in the repos under packages/stage-ui/src/database/repos/)
const sessions = await db
  .getAll('chatSessions')
  .where('userId', userId)
  .where('archived', false)
  .orderBy('updatedAt', 'desc')
  .limit(50)

// Avoid loading all messages at once
const recentMessages = await db
  .getAll('chatMessages')
  .where('sessionId', sessionId)
  .orderBy('createdAt', 'desc')
  .limit(50)

Memory Usage

Lazy Loading:
// Don't load all sessions into memory
const sessionList = await sessionStore.listSessions({
  limit: 20,
  offset: 0
})

// Load messages on demand
const messages = await sessionStore.loadSessionMessages(sessionId)
Message Streaming:
// Stream large message sets
async function* streamMessages(
  sessionId: string
): AsyncGenerator<ChatHistoryItem> {
  const batchSize = 50
  let offset = 0
  
  while (true) {
    const batch = await db
      .getAll('chatMessages')
      .where('sessionId', sessionId)
      .orderBy('createdAt', 'asc')
      .limit(batchSize)
      .offset(offset)
    
    if (batch.length === 0) break
    
    for (const message of batch) {
      yield message
    }
    
    if (batch.length < batchSize) break
    offset += batchSize
  }
}

Caching Strategy

interface CacheConfig {
  maxSize: number               // Max items in cache
  ttl: number                   // Time to live (ms)
  strategy: 'lru' | 'lfu'       // Eviction strategy
}

const cacheConfig: CacheConfig = {
  maxSize: 100,
  ttl: 3600000,                 // 1 hour
  strategy: 'lru'
}

import { LRUCache } from 'lru-cache'

// LRU cache for session data
const sessionCache = new LRUCache<string, ChatHistoryItem[]>({
  max: cacheConfig.maxSize,
  ttl: cacheConfig.ttl
})

function getCachedSession(sessionId: string): ChatHistoryItem[] | undefined {
  return sessionCache.get(sessionId)
}

function cacheSession(sessionId: string, messages: ChatHistoryItem[]) {
  sessionCache.set(sessionId, messages)
}
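
A typical read-through pattern on top of these helpers (loadSessionMessages is the loader from the lazy-loading example):
async function getSessionMessagesCached(sessionId: string): Promise<ChatHistoryItem[]> {
  const cached = getCachedSession(sessionId)
  if (cached) return cached

  const messages = await sessionStore.loadSessionMessages(sessionId)
  cacheSession(sessionId, messages)
  return messages
}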

Session Forking

Create alternate conversation branches:
// Fork at specific message
const forkedSessionId = await sessionStore.forkSession({
  fromSessionId: 'original-session',
  atIndex: 10,  // Fork from message #10
  reason: 'trying-different-approach'
})

// Continue conversation in fork
await chatOrchestrator.ingest(
  'What if we tried X instead?',
  options,
  forkedSessionId
)

Export & Backup

Export Sessions

interface ChatSessionsExport {
  version: string
  exportedAt: number
  sessions: Array<{
    meta: ChatSessionMeta
    messages: ChatHistoryItem[]
  }>
}

async function exportSessions(
  sessionIds?: string[]
): Promise<ChatSessionsExport> {
  const sessions = sessionIds
    ? await Promise.all(sessionIds.map(id => sessionStore.getSession(id)))
    : await sessionStore.getAllSessions()
  
  const data: ChatSessionsExport = {
    version: '1.0.0',
    exportedAt: Date.now(),
    sessions: await Promise.all(
      sessions.map(async meta => ({
        meta,
        messages: await sessionStore.getSessionMessages(meta.sessionId)
      }))
    )
  }
  
  return data
}

// Export to JSON
const exportData = await exportSessions()
const blob = new Blob(
  [JSON.stringify(exportData, null, 2)],
  { type: 'application/json' }
)
downloadBlob(blob, 'airi-conversations-export.json')
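
If downloadBlob isn't already available in your app, a minimal browser implementation:
function downloadBlob(blob: Blob, filename: string) {
  const url = URL.createObjectURL(blob)
  const anchor = document.createElement('a')
  anchor.href = url
  anchor.download = filename
  anchor.click()
  URL.revokeObjectURL(url)
}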

Import Sessions

async function importSessions(data: ChatSessionsExport) {
  for (const { meta, messages } of data.sessions) {
    // Create session
    const newSessionId = await sessionStore.createSession({
      title: meta.title,
      characterId: meta.characterId
    })
    
    // Import messages
    for (const message of messages) {
      await sessionStore.addMessage(newSessionId, message)
    }
  }
}

Cloud Sync (Future)

Planned cloud synchronization:
interface SyncConfig {
  enabled: boolean
  endpoint: string
  apiKey: string
  autoSync: boolean
  syncInterval: number          // ms
}

// Sync queue
interface SyncOperation {
  id: string
  type: 'create' | 'update' | 'delete'
  resource: 'session' | 'message'
  data: any
  timestamp: number
  retries: number
}

// Enqueue operation
async function enqueueSyncOperation(op: SyncOperation) {
  await storage.setItem(`outbox:${op.id}`, op)
}

// Process queue
async function processSyncQueue() {
  const keys = await storage.getKeys('outbox')
  
  for (const key of keys) {
    const op = await storage.getItem<SyncOperation>(key)
    if (!op) continue
    
    try {
      await syncToCloud(op)
      await storage.removeItem(key)
    } catch (error) {
      op.retries++
      if (op.retries > 3) {
        // Move to a dead-letter area (mount a 'failed:' driver to persist these)
        await storage.setItem(`failed:${op.id}`, op)
        await storage.removeItem(key)
      } else {
        await storage.setItem(key, op)
      }
    }
  }
}
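
A simple trigger is to flush on reconnect plus a periodic timer (the interval is illustrative):
window.addEventListener('online', () => { processSyncQueue() })
setInterval(() => { processSyncQueue() }, 60_000) // every minute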

Troubleshooting

Storage Quota Exceeded

Symptoms: “QuotaExceededError” when saving messages
Solutions:
  1. Clear old sessions:
    await sessionStore.deleteOldSessions({ olderThan: 30 * 24 * 60 * 60 * 1000 })
    
  2. Tighten the retention policy (lower maxAge or maxSessions)
  3. Export and then delete archived sessions
  4. Check browser storage quota:
    if (navigator.storage && navigator.storage.estimate) {
      const estimate = await navigator.storage.estimate()
      console.log(`Used: ${estimate.usage} / ${estimate.quota}`)
    }
    

Slow Query Performance

Solutions:
  1. Add database indexes (see optimization section)
  2. Reduce message load count
  3. Use pagination for session lists
  4. Enable caching

Context Too Long

Error: Model context window exceeded
Solutions:
  1. Enable automatic message pruning
  2. Reduce maxMessagesPerSession
  3. Enable message summarization
  4. Switch to model with larger context (Claude 3.5, Gemini 1.5)

Code Reference

Memory system implementation:
  • Session store: packages/stage-ui/src/stores/chat/session-store.ts
  • Context store: packages/stage-ui/src/stores/chat/context-store.ts
  • Chat orchestrator: packages/stage-ui/src/stores/chat.ts
  • Database repos: packages/stage-ui/src/database/repos/
  • Storage config: packages/stage-ui/src/database/storage.ts
