
Memory

Memory services enable agents to store, retrieve, and utilize knowledge beyond individual conversations. This allows agents to build domain expertise, remember user preferences, and provide contextually relevant responses based on past interactions.

Memory Architecture

MemoryService

The MemoryService orchestrates storage, summarization, and search:
import { MemoryService, InMemoryStorageProvider } from "@iqai/adk";

const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider(),
  searchLimit: 5, // Default number of results
});

Core Interface

class MemoryService {
  /**
   * Add a session to memory
   */
  async addSessionToMemory(
    session: Session,
    options?: {
      appName?: string;
      userId?: string;
    },
  ): Promise<MemoryRecord>;

  /**
   * Search memories for a user
   */
  async search(
    query: {
      query: string;
      userId: string;
      appName: string;
      limit?: number;
    },
  ): Promise<MemorySearchResult[]>;

  /**
   * Delete memories matching filter
   */
  async delete(filter: MemoryDeleteFilter): Promise<number>;

  /**
   * Count memories matching filter
   */
  async count(filter: MemoryDeleteFilter): Promise<number | undefined>;
}

Memory Record

A memory record represents stored knowledge:
interface MemoryRecord {
  /** Unique memory ID */
  id: string;
  
  /** Source session ID */
  sessionId: string;
  
  /** User ID */
  userId: string;
  
  /** App name */
  appName: string;
  
  /** Timestamp */
  timestamp: string;
  
  /** Memory content */
  content: MemoryContent;
  
  /** Embedding vector (if configured) */
  embedding?: number[];
}

interface MemoryContent {
  /** Overall summary */
  summary?: string;
  
  /** Segmented summaries */
  segments?: {
    topic: string;
    summary: string;
  }[];
  
  /** Key facts extracted */
  keyFacts?: string[];
  
  /** Entities mentioned */
  entities?: {
    name: string;
    type: string;
    relation?: string;
  }[];
  
  /** Raw conversation text */
  rawText?: string;
}
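To make the shape concrete, here is a hypothetical `MemoryContent` value of the kind a summary provider might produce (the interface is redeclared locally so the snippet is self-contained; the field values are illustrative, not from the ADK):

```typescript
// Local mirror of the MemoryContent interface above, so this runs standalone.
interface MemoryContent {
  summary?: string;
  segments?: { topic: string; summary: string }[];
  keyFacts?: string[];
  entities?: { name: string; type: string; relation?: string }[];
  rawText?: string;
}

const content: MemoryContent = {
  summary: "User asked about premium pricing; agent explained the annual discount.",
  segments: [
    { topic: "pricing", summary: "Premium is billed monthly or annually." },
  ],
  keyFacts: ["User is on the free tier", "User prefers annual billing"],
  entities: [{ name: "Premium Plan", type: "product", relation: "asked_about" }],
};

console.log(content.keyFacts?.length); // 2
```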

Storage Providers

InMemoryStorageProvider

For development and testing:
import { InMemoryStorageProvider } from "@iqai/adk";

const storage = new InMemoryStorageProvider();

const memoryService = new MemoryService({
  storage,
});
InMemoryStorageProvider loses all memories when the process restarts. Use persistent storage for production.

Custom Storage Providers

Implement MemoryStorageProvider for your database:
import type {
  MemoryStorageProvider,
  MemoryRecord,
  MemorySearchQuery,
  MemorySearchResult,
  MemoryDeleteFilter,
} from "@iqai/adk";

export class PostgresStorageProvider implements MemoryStorageProvider {
  async store(record: MemoryRecord): Promise<void> {
    await this.db.query(
      `INSERT INTO memories (id, session_id, user_id, app_name, 
       timestamp, content, embedding) VALUES ($1, $2, $3, $4, $5, $6, $7)`,
      [
        record.id,
        record.sessionId,
        record.userId,
        record.appName,
        record.timestamp,
        JSON.stringify(record.content),
        record.embedding,
      ]
    );
  }

  async search(
    query: MemorySearchQuery,
  ): Promise<MemorySearchResult[]> {
    if (query.queryEmbedding) {
      // Vector similarity search
      return await this.vectorSearch(query);
    } else {
      // Full-text search
      return await this.textSearch(query);
    }
  }

  async delete(filter: MemoryDeleteFilter): Promise<number> {
    // Scope the delete to a single session when sessionId is provided,
    // so session-level deletes don't wipe all of a user's memories
    const conditions = ["user_id = $1", "app_name = $2"];
    const params: unknown[] = [filter.userId, filter.appName];
    if (filter.sessionId) {
      conditions.push("session_id = $3");
      params.push(filter.sessionId);
    }
    const result = await this.db.query(
      `DELETE FROM memories WHERE ${conditions.join(" AND ")}`,
      params
    );
    return result.rowCount ?? 0;
  }

  async count(filter: MemoryDeleteFilter): Promise<number> {
    const result = await this.db.query(
      `SELECT COUNT(*) FROM memories WHERE user_id = $1 AND app_name = $2`,
      [filter.userId, filter.appName]
    );
    return parseInt(result.rows[0].count);
  }
}
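The `textSearch` branch above can be as simple as keyword overlap when your database lacks full-text indexing. The sketch below shows one possible scoring function (`textScore` is a hypothetical helper, not part of `@iqai/adk`):

```typescript
// Score a memory's text against a query as the fraction of query terms
// that appear in the text (0-1, higher is more relevant).
function textScore(query: string, text: string): number {
  const tokenize = (s: string) =>
    s.toLowerCase().split(/\W+/).filter((t) => t.length > 2);
  const queryTokens = new Set(tokenize(query));
  if (queryTokens.size === 0) return 0;
  const textTokens = new Set(tokenize(text));
  let hits = 0;
  for (const t of queryTokens) if (textTokens.has(t)) hits++;
  return hits / queryTokens.size;
}

console.log(textScore("premium pricing plans", "We discussed premium pricing today")); // ≈ 0.67
```

In practice you would prefer your database's native full-text ranking (e.g. Postgres `ts_rank`); this is only a fallback for stores without one.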

Summary Providers

Summary providers transform sessions into structured memory content:
import type { MemorySummaryProvider, Session, MemoryContent } from "@iqai/adk";
import { LlmAgent } from "@iqai/adk";

export class LlmSummaryProvider implements MemorySummaryProvider {
  private agent: LlmAgent;

  constructor(config: { model: string }) {
    this.agent = new LlmAgent({
      name: "summarizer",
      model: config.model,
      instruction: `Summarize the conversation and extract:
        1. Main summary
        2. Key facts
        3. Important entities (people, places, concepts)
        Return as JSON.`,
    });
  }

  async summarize(session: Session): Promise<MemoryContent> {
    const conversation = session.events
      .map((e) => `${e.author}: ${e.text || ""}`)
      .join("\n");

    const summary = await this.agent.ask(conversation);
    
    try {
      return JSON.parse(summary);
    } catch {
      // Fallback to raw text
      return { rawText: conversation };
    }
  }
}
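The bare `JSON.parse` in the provider above will fail when the model wraps its JSON in markdown fences, which is common. A small pre-parsing step makes the fallback rarer (`parseJsonResponse` is a hypothetical helper, not part of the ADK):

```typescript
// Strip optional markdown code fences from an LLM response before parsing.
// Returns null instead of throwing so callers can fall back gracefully.
function parseJsonResponse(raw: string): unknown | null {
  const stripped = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  try {
    return JSON.parse(stripped);
  } catch {
    return null;
  }
}

const wrapped = '```json\n{"summary": "Discussed pricing."}\n```';
console.log(parseJsonResponse(wrapped)); // parsed object
```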

Using Summary Providers

const memoryService = new MemoryService({
  storage: new PostgresStorageProvider(),
  summaryProvider: new LlmSummaryProvider({ model: "gpt-4o-mini" }),
});

// Sessions are automatically summarized when added
await memoryService.addSessionToMemory(session);

Embedding Providers

Embedding providers enable semantic search:
import type { EmbeddingProvider } from "@iqai/adk";
import { OpenAI } from "openai";

export class OpenAIEmbeddingProvider implements EmbeddingProvider {
  private openai: OpenAI;

  constructor(apiKey?: string) {
    this.openai = new OpenAI({ apiKey });
  }

  async embed(text: string): Promise<number[]> {
    const response = await this.openai.embeddings.create({
      model: "text-embedding-3-small",
      input: text,
    });
    
    return response.data[0].embedding;
  }
}
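Under the hood, vector search ranks stored embeddings by similarity to the query embedding, most commonly cosine similarity. A from-scratch sketch for illustration (real vector stores compute this server-side):

```typescript
// Cosine similarity between two embedding vectors: 1 = identical direction,
// 0 = orthogonal (unrelated), -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
```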

Using Embedding Providers

const memoryService = new MemoryService({
  storage: new VectorStorageProvider(),
  embeddingProvider: new OpenAIEmbeddingProvider(),
});

// Searches use semantic similarity
const results = await memoryService.search({
  query: "What did we discuss about pricing?",
  userId: "user-123",
  appName: "my-app",
});
Combining summary and embedding providers gives you both structured knowledge extraction and semantic search capabilities.

Using Memory with Agents

Memory is automatically integrated into agent workflows:
import { LlmAgent, MemoryService } from "@iqai/adk";

const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider(),
});

const agent = new LlmAgent({
  name: "assistant",
  model: "gpt-4o",
  memoryService, // Agent automatically queries memory during preprocessing
});

// Memory is searched before each response
// Relevant memories are injected into the context

Manual Memory Access

Access memory directly in tools or callbacks:
class ContextualTool extends BaseTool {
  async runAsync(args: Record<string, any>, context: ToolContext) {
    if (context.memoryService) {
      // Search memories
      const memories = await context.memoryService.search({
        query: args.searchQuery,
        userId: context.session.userId,
        appName: context.session.appName,
        limit: 3,
      });

      // Use memory results
      const relevantInfo = memories
        .map((m) => m.record.content.summary)
        .join("\n");

      return { context: relevantInfo };
    }
  }
}

Storing Memories

Add memories after important conversations:
import { Runner } from "@iqai/adk";

const runner = new Runner({
  appName: "my-app",
  agent,
  sessionService,
});

// Run conversation
for await (const event of runner.runAsync({
  userId: "user-123",
  sessionId: session.id,
  newMessage: { parts: [{ text: "Tell me about your premium plan" }] },
})) {
  // Process events
}

// Get updated session
const updatedSession = await sessionService.getSession(
  "my-app",
  "user-123",
  session.id
);

// Store in memory
await memoryService.addSessionToMemory(updatedSession);

Searching Memories

Retrieve relevant memories:
const results = await memoryService.search({
  query: "pricing and subscription plans",
  userId: "user-123",
  appName: "my-app",
  limit: 5,
});

for (const result of results) {
  console.log(`Score: ${result.score}`);
  console.log(`Summary: ${result.record.content.summary}`);
  console.log(`Facts: ${result.record.content.keyFacts?.join(", ")}`);
}
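Before injecting results into a prompt, you typically filter by score and flatten them into a context string. A minimal sketch, assuming the result shapes above (`formatMemories` and the trimmed-down local types are hypothetical, not part of `@iqai/adk`):

```typescript
// Trimmed-down local mirror of MemorySearchResult for a standalone example.
interface MemorySearchResultLite {
  score: number;
  record: { content: { summary?: string; keyFacts?: string[] } };
}

// Drop low-relevance results and render the rest as prompt-ready lines.
function formatMemories(results: MemorySearchResultLite[], minScore = 0.5): string {
  return results
    .filter((r) => r.score >= minScore)
    .map((r, i) => {
      const facts = r.record.content.keyFacts?.join("; ") ?? "";
      const summary = r.record.content.summary ?? "";
      return `[Memory ${i + 1}] ${summary}${facts ? ` (Facts: ${facts})` : ""}`;
    })
    .join("\n");
}
```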

Search Results

interface MemorySearchResult {
  /** The memory record */
  record: MemoryRecord;
  
  /** Relevance score (0-1, higher is better) */
  score: number;
}

Managing Memories

Delete Memories

// Delete all memories for a user
await memoryService.delete({
  userId: "user-123",
  appName: "my-app",
});

// Delete specific session memories
await memoryService.delete({
  userId: "user-123",
  appName: "my-app",
  sessionId: "session-456",
});

Count Memories

const count = await memoryService.count({
  userId: "user-123",
  appName: "my-app",
});

console.log(`User has ${count} memories`);

Memory Lifecycle

A memory typically flows through four stages: the session is summarized by the summary provider, optionally embedded by the embedding provider, stored via the storage provider, and later retrieved by search and injected into agent context.

Best Practices

  1. Selective Storage: Don’t store every session; focus on meaningful conversations.
  2. Quality Summaries: Invest in good summary providers to extract actionable knowledge.
  3. Vector Search: Use embeddings for semantic search to find contextually relevant memories.
  4. Privacy: Implement user data deletion to comply with privacy regulations.
  5. Deduplication: Avoid storing redundant information; merge similar memories.
  6. Metadata: Use memory metadata to filter and organize (e.g., by topic, importance).
  7. Performance: Index your storage by (userId, appName) for fast queries.
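The deduplication practice above can be approximated by comparing key-fact overlap between a new memory and existing ones. A sketch using Jaccard similarity (`jaccard` and `isDuplicate` are hypothetical helpers, not part of the ADK):

```typescript
// Jaccard overlap of two key-fact lists: |A ∩ B| / |A ∪ B|, in [0, 1].
function jaccard(a: string[], b: string[]): number {
  const setA = new Set(a.map((s) => s.toLowerCase()));
  const setB = new Set(b.map((s) => s.toLowerCase()));
  if (setA.size === 0 && setB.size === 0) return 1;
  let intersection = 0;
  for (const item of setA) if (setB.has(item)) intersection++;
  const union = setA.size + setB.size - intersection;
  return intersection / union;
}

// Treat two memories as near-duplicates when their facts mostly overlap;
// callers might then merge them instead of storing both.
function isDuplicate(factsA: string[], factsB: string[], threshold = 0.8): boolean {
  return jaccard(factsA, factsB) >= threshold;
}
```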

Production Example

Complete production setup:
import {
  MemoryService,
  LlmAgent,
  Runner,
} from "@iqai/adk";
import { PineconeStorageProvider } from "./storage/pinecone";
import { LlmSummaryProvider } from "./providers/summary";
import { OpenAIEmbeddingProvider } from "./providers/embeddings";

// Configure memory service
const memoryService = new MemoryService({
  storage: new PineconeStorageProvider({
    apiKey: process.env.PINECONE_API_KEY!,
    indexName: "agent-memory",
  }),
  summaryProvider: new LlmSummaryProvider({
    model: "gpt-4o-mini",
  }),
  embeddingProvider: new OpenAIEmbeddingProvider(
    process.env.OPENAI_API_KEY
  ),
  searchLimit: 5,
});

// Create agent with memory
const agent = new LlmAgent({
  name: "knowledgeable_assistant",
  model: "gpt-4o",
  memoryService,
  instruction: `You have access to conversation history via memory. 
    Use it to provide contextual, personalized responses.`,
});

// Run with automatic memory integration
const runner = new Runner({
  appName: "customer-support",
  agent,
  sessionService,
});
Related

  • Sessions - Understand session management
  • Agents - Learn how agents use memory
  • Tools - Access memory from tools
