Memory services enable agents to store and retrieve relevant information from past conversations. Unlike sessions, which track active conversations, memory provides long-term context that persists indefinitely.

MemoryService

The main MemoryService class orchestrates three key components:

Storage

Where and how memories are persisted (in-memory, vector DB, files)

Summarization

How sessions are transformed into memorable content

Embeddings

How semantic search is enabled via vector embeddings

Architecture

// packages/adk/src/memory/memory-service.ts:44
export class MemoryService {
  private readonly storage: MemoryStorageProvider;
  private readonly summaryProvider?: MemorySummaryProvider;
  private readonly embeddingProvider?: EmbeddingProvider;
  
  constructor(config: MemoryServiceConfig) {
    this.storage = config.storage;
    this.summaryProvider = config.summaryProvider;
    this.embeddingProvider = config.embeddingProvider;
  }
}
The service follows a pipeline: when a session is added, it is first summarized (if a summaryProvider is configured), then embedded (if an embeddingProvider is configured), and finally persisted by the storage provider.
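Conceptually, that flow can be sketched as follows. This is an illustrative sketch with simplified stand-in types, not the actual MemoryService internals:

```typescript
// Illustrative sketch of the add-to-memory pipeline; the real
// MemoryService internals may differ. Types are simplified stand-ins.
type Session = { id: string; userId: string; appName: string; text: string };
type MemoryRecord = {
  id: string;
  sessionId: string;
  content: { summary?: string; rawText?: string };
  embedding?: number[];
};

async function addSessionToMemoryPipeline(
  session: Session,
  summarize?: (text: string) => Promise<string>,
  embed?: (text: string) => Promise<number[]>,
  store?: (record: MemoryRecord) => Promise<void>,
): Promise<MemoryRecord> {
  // 1. Summarization: fall back to raw text when no provider is set
  const content = summarize
    ? { summary: await summarize(session.text) }
    : { rawText: session.text };

  // 2. Embedding: only computed when an embedding provider is configured
  const embedding = embed
    ? await embed(content.summary ?? content.rawText ?? '')
    : undefined;

  // 3. Storage: persist the finished record
  const record: MemoryRecord = {
    id: `mem-${session.id}`,
    sessionId: session.id,
    content,
    embedding,
  };
  await store?.(record);
  return record;
}
```

Each stage is optional except storage, which is why only storage is required in MemoryServiceConfig.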

Configuration

import { MemoryService, InMemoryStorageProvider } from '@iqai/adk';

// Minimal configuration - no summarization or embeddings
const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider(),
});
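A fuller configuration wires in all three components. The sketch below uses the providers introduced later on this page; treat it as an assembly example rather than a prescribed setup:

```typescript
import {
  MemoryService,
  InMemoryStorageProvider,
  OpenAIEmbeddingProvider,
  LlmSummaryProvider,
} from '@iqai/adk';

// Full configuration: required storage plus optional
// summarization and embeddings
const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider(),
  summaryProvider: new LlmSummaryProvider({ model: 'gpt-4o-mini' }),
  embeddingProvider: new OpenAIEmbeddingProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'text-embedding-3-small',
  }),
  searchLimit: 10, // default number of search results
});
```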

Storing Memories

Add a completed session to long-term memory:
1. End the Session

Complete the conversation and get the final session state:
const session = await sessionService.endSession(
  appName,
  userId,
  sessionId
);
2. Add to Memory

Store the session in long-term memory:
// packages/adk/src/memory/memory-service.ts:75
const memoryRecord = await memoryService.addSessionToMemory(session, {
  appName: 'my-app',  // Optional: override session.appName
  userId: 'user-123', // Optional: override session.userId
});

console.log('Memory ID:', memoryRecord.id);
console.log('Content:', memoryRecord.content);

What Gets Stored

The memory record contains:
interface MemoryRecord {
  id: string;              // Unique identifier
  sessionId: string;       // Original session ID
  userId: string;          // User who owns this memory
  appName: string;         // Application name
  timestamp: string;       // When memory was created
  content: MemoryContent;  // Summarized or raw content
  embedding?: number[];    // Vector for semantic search
}
If no summaryProvider is configured, the service stores minimal content with raw text extracted from the session events.
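For example, with only a storage provider configured, a stored record might look roughly like this (all values are illustrative):

```typescript
// Illustrative record shape when no summaryProvider is configured:
// content carries only the raw text extracted from session events.
const record = {
  id: 'mem-abc123',
  sessionId: 'session-456',
  userId: 'user-123',
  appName: 'my-app',
  timestamp: '2024-06-01T12:00:00Z',
  content: {
    rawText: 'user: I love vegetarian pizza\nassistant: Noted!',
  },
  // no embedding: no embeddingProvider configured
};
```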

Searching Memories

Retrieve relevant memories using semantic or keyword search:
// packages/adk/src/memory/memory-service.ts:127
const results = await memoryService.search({
  query: 'What were my pizza preferences?',
  userId: 'user-123',
  appName: 'my-app',  // Optional
  limit: 5,            // Optional, defaults to configured searchLimit
});

for (const result of results) {
  console.log('Score:', result.score);
  console.log('Memory:', result.memory.content.summary);
  console.log('From session:', result.memory.sessionId);
}

Search Behavior

When embeddingProvider is configured:
  1. Query is converted to an embedding vector
  2. Storage performs vector similarity search
  3. Results ranked by cosine similarity
  4. Returns most semantically relevant memories
const results = await memoryService.search({
  query: 'vegetarian pizza with extra cheese',
  userId: 'user-123',
});
// Finds memories about dietary preferences, cheese, pizza
// even if exact words differ
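Without an embeddingProvider, search falls back to keyword matching (this is how the in-memory provider works). A minimal sketch of keyword-overlap scoring, illustrative only and not the library's actual implementation:

```typescript
// Minimal keyword-overlap scoring, illustrative of how a keyword
// search path can rank memories; not the library's implementation.
function keywordScore(query: string, text: string): number {
  const queryTerms = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const textTerms = new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
  let hits = 0;
  for (const term of queryTerms) {
    if (textTerms.has(term)) hits++;
  }
  return queryTerms.size === 0 ? 0 : hits / queryTerms.size;
}
```

Keyword matching is fast and dependency-free, but it misses paraphrases ("dietary preferences" will not match "vegetarian") — which is exactly the gap semantic search closes.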

Deleting Memories

Remove memories using various filters:
// Delete all memories for a user
const deleted = await memoryService.delete({
  userId: 'user-123',
});

// Delete memories from a specific session
await memoryService.delete({
  userId: 'user-123',
  sessionId: 'session-456',
});

// Delete old memories (before a date)
await memoryService.delete({
  userId: 'user-123',
  before: '2024-01-01T00:00:00Z',
});

// Delete specific memory IDs
await memoryService.delete({
  ids: ['memory-1', 'memory-2'],
});
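Deletion pairs naturally with checking how many memories remain. The sketch below assumes the count() filter shape mirrors the delete() filters above:

```typescript
// Check memory usage before and after cleanup; the count() filter
// shape here is assumed to mirror the delete() filters above.
const before = await memoryService.count({ userId: 'user-123' });

await memoryService.delete({
  userId: 'user-123',
  before: '2024-01-01T00:00:00Z',
});

const after = await memoryService.count({ userId: 'user-123' });
console.log(`Pruned ${before - after} old memories`);
```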

Storage Providers

Implement custom storage or use built-in providers.

InMemoryStorageProvider

Simple in-memory storage for development:
// packages/adk/src/memory/storage/in-memory-storage-provider.ts:16
import { InMemoryStorageProvider } from '@iqai/adk';

const storage = new InMemoryStorageProvider();

const memory = new MemoryService({ storage });
Features:
  • No persistence (lost on restart)
  • Keyword-based search
  • Fast and simple
  • Perfect for testing
All memories are lost when the process exits. Use for development only.
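For other backends you can implement the storage interface yourself. Below is a minimal map-backed sketch with simplified stand-in types; the real MemoryStorageProvider interface in the package may declare additional methods and different signatures:

```typescript
// Simplified stand-in types; the package's real interfaces may differ.
type MemoryRecord = { id: string; userId: string; content: { rawText?: string } };
type SearchResult = { score: number; memory: MemoryRecord };

// A map-backed provider, illustrative of the operations used
// elsewhere on this page (store, search, delete).
class MapStorageProvider {
  private records = new Map<string, MemoryRecord>();

  async store(record: MemoryRecord): Promise<void> {
    this.records.set(record.id, record);
  }

  async search(query: string, userId: string, limit: number): Promise<SearchResult[]> {
    const q = query.toLowerCase();
    return [...this.records.values()]
      .filter((r) => r.userId === userId)
      .map((r) => ({
        score: (r.content.rawText ?? '').toLowerCase().includes(q) ? 1 : 0,
        memory: r,
      }))
      .filter((res) => res.score > 0)
      .slice(0, limit);
  }

  async delete(ids: string[]): Promise<number> {
    let removed = 0;
    for (const id of ids) {
      if (this.records.delete(id)) removed++;
    }
    return removed;
  }
}
```

The same shape extends to any backend — swap the Map for a database client or vector store and keep the method contract.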

Embedding Providers

Enable semantic search with vector embeddings:
import { OpenAIEmbeddingProvider } from '@iqai/adk';

const embeddings = new OpenAIEmbeddingProvider({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'text-embedding-3-small', // or text-embedding-3-large
});

console.log('Dimensions:', embeddings.dimensions); // 1536

Summary Providers

Transform sessions into memorable content:
Use an LLM to generate structured summaries:
import { LlmSummaryProvider } from '@iqai/adk';

const summary = new LlmSummaryProvider({
  model: 'gpt-4o-mini',
  // Custom prompt to guide summarization
  prompt: `Summarize this conversation focusing on:
    - User preferences and interests
    - Important decisions or commitments
    - Key facts to remember
    
    Format as structured JSON with summary, keyFacts, and entities.`,
});

Memory Content Structure

Flexible schema for storing memory content:
type MemoryContent = {
  // Human-readable summary
  summary?: string;
  
  // Topic segments for granular search
  segments?: TopicSegment[];
  
  // Named entities mentioned
  entities?: Entity[];
  
  // Key facts to remember
  keyFacts?: string[];
  
  // Raw text (if no summarization)
  rawText?: string;
  
  // Custom fields
  [key: string]: unknown;
};

interface TopicSegment {
  topic: string;          // "Pizza preferences"
  summary: string;        // "User prefers vegetarian with extra cheese"
  relevance?: 'high' | 'medium' | 'low';
}

interface Entity {
  name: string;           // "Mario's Pizza"
  type: 'person' | 'place' | 'organization' | 'thing' | 'other';
  relation?: string;      // "favorite restaurant"
}
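Using the pizza example from above, a populated content value matching these types might look like this (values are illustrative):

```typescript
// An example MemoryContent value matching the types above;
// the values are illustrative.
const content = {
  summary: 'User discussed pizza preferences and a favorite restaurant.',
  segments: [
    {
      topic: 'Pizza preferences',
      summary: 'User prefers vegetarian with extra cheese',
      relevance: 'high' as const,
    },
  ],
  entities: [
    { name: "Mario's Pizza", type: 'place' as const, relation: 'favorite restaurant' },
  ],
  keyFacts: ['Prefers vegetarian pizza', 'Likes extra cheese'],
};
```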

Complete Example

import {
  AgentBuilder,
  MemoryService,
  VectorStorageProvider,
  QdrantVectorStore,
  OpenAIEmbeddingProvider,
  LlmSummaryProvider,
  createDatabaseSessionService,
} from '@iqai/adk';

// Configure memory service
const memoryService = new MemoryService({
  storage: new VectorStorageProvider({
    vectorStore: new QdrantVectorStore({
      url: process.env.QDRANT_URL,
      collectionName: 'agent-memories',
    }),
  }),
  embeddingProvider: new OpenAIEmbeddingProvider(),
  summaryProvider: new LlmSummaryProvider({
    model: 'gpt-4o-mini',
  }),
  searchLimit: 10,
});

// Configure session service
const sessionService = createDatabaseSessionService(
  process.env.DATABASE_URL
);

// Create agent with memory
const { runner } = await AgentBuilder
  .withModel('gpt-4')
  .withInstruction('You are a helpful assistant with long-term memory.')
  .withSessionService(sessionService)
  .withMemoryService(memoryService)
  .build();

// Have a conversation
const session1 = await runner.ask({
  prompt: 'I love vegetarian pizza with extra cheese',
  userId: 'user-123',
});

// End session and save to memory
const finalSession = await sessionService.endSession(
  session1.appName,
  session1.userId,
  session1.id
);

await memoryService.addSessionToMemory(finalSession);

// Later, in a new conversation
const session2 = await runner.ask({
  prompt: 'What kind of pizza do I like?',
  userId: 'user-123',
});

// The agent can search its memory
const memories = await memoryService.search({
  query: 'pizza preferences',
  userId: 'user-123',
});

console.log('Agent remembers:', memories[0]?.memory.content.summary);

Best Practices

When to store:
  • After meaningful conversations (not every message)
  • When the user shares important information
  • At natural conversation boundaries
  • When a session explicitly ends

Don't store:
  • Partial or incomplete conversations
  • Test or development interactions
  • Sensitive information without consent

Searching effectively:
  • Use semantic search (embeddings) for natural-language queries
  • Combine with filters for precision
  • Adjust searchLimit based on context needs
  • Consider recency in ranking (recent memories may be more relevant)

Managing storage:
  • Implement retention policies (delete old memories)
  • Monitor storage size and costs
  • Use count() to check memory usage
  • Respect user privacy and data deletion rights

Performance:
  • Use batch operations when possible
  • Cache frequently accessed memories
  • Implement pagination for large result sets
  • Choose an appropriate vector store for your scale
Memory services are optional. Many agents work well with just session management. Add memory when you need cross-session context.
