Overview
Memory systems allow agents to recall information from past conversations using semantic search. This example demonstrates:
- Storing session data to long-term memory
- Using vector embeddings for semantic search
- Querying memories across different sessions
- Automatically recalling relevant context
Unlike session state (which is tied to a single conversation), memory systems enable agents to remember information across multiple sessions and users.
Personal Assistant Example
We’ll build an agent that:
- Session 1: Learns information about a user
- Stores memories: Converts the conversation to searchable vectors
- Session 2: Recalls information using semantic search
Complete Example
```typescript
import { join } from "node:path";
import {
  AgentBuilder,
  FileVectorStore,
  InMemorySessionService,
  LlmSummaryProvider,
  MemoryService,
  OpenAIEmbeddingProvider,
  RecallMemoryTool,
  VectorStorageProvider,
} from "@iqai/adk";

export async function getRootAgent() {
  const sessionService = new InMemorySessionService();

  // Configure vector storage for memories
  const vectorStore = new FileVectorStore({
    basePath: join(process.cwd(), "data", "memories"),
    writeSummaries: true,
    format: "json",
  });

  // Set up memory service with embeddings
  const memoryService = new MemoryService({
    storage: new VectorStorageProvider({
      vectorStore,
      searchMode: "vector",
    }),
    summaryProvider: new LlmSummaryProvider({
      model: "gpt-4o-mini",
    }),
    embeddingProvider: new OpenAIEmbeddingProvider({
      model: "text-embedding-3-small",
    }),
  });

  return AgentBuilder.withModel(process.env.LLM_MODEL || "gemini-2.5-flash")
    .withInstruction(
      "You are a helpful assistant. When asked about previous conversations or user preferences, use the recall_memory tool to search your memory. Only use tools that are provided to you.",
    )
    .withTools(new RecallMemoryTool())
    .withSessionService(sessionService)
    .withMemory(memoryService)
    .build();
}
```
The RecallMemoryTool is automatically provided when you configure a memory service; the agent learns to use it when it needs to recall past information.

```typescript
import { ask } from "../utils";
import { getRootAgent } from "./agents/agent";

const SESSION_1_MESSAGES = [
  "Hi! My name is Alex and I'm a software engineer at a startup.",
  "I have a pet African Grey parrot named Einstein. He can say over 50 words!",
  "I'm allergic to shellfish, so I avoid seafood restaurants.",
  "My favorite programming language is TypeScript, I love the type safety.",
  "I'm planning a trip to Japan next spring to see the cherry blossoms.",
];

async function main() {
  console.log("\n🧠 Memory System Example\n");
  const { runner, sessionService, memoryService } = await getRootAgent();

  // Session 1: Share information
  console.log("── Session 1: Sharing Information ──\n");
  for (const message of SESSION_1_MESSAGES) {
    await ask(runner, message);
  }

  // Store to memory
  const session1 = runner.getSession();
  console.log("\n💾 Storing session to memory...");
  const currentSession = await sessionService.getSession(
    session1.appName,
    session1.userId,
    session1.id,
  );
  if (currentSession) {
    await memoryService!.addSessionToMemory(currentSession);
  }

  // ... continue to Session 2
}
```
```typescript
const SESSION_2_QUESTIONS = [
  "What's my name and what do I do for work?",
  "Do I have any pets? What kind?",
  "Are there any foods I need to avoid?",
  "What language do I prefer coding in?",
  "Do you remember any travel plans I mentioned?",
];

async function main() {
  // ... Session 1 code above ...

  // Session 2: Test recall
  const session2 = await sessionService.createSession(
    session1.appName,
    session1.userId,
  );
  runner.setSession(session2);

  console.log("\n── Session 2: Testing Recall ──\n");
  for (const question of SESSION_2_QUESTIONS) {
    await ask(runner, question);
  }
}

main().catch(console.error);
```
Expected Output
Key Concepts
Vector Embeddings
Conversations are converted to numerical vectors (embeddings) that capture semantic meaning, so texts about similar ideas end up close together in vector space.
Memory Storage
Multiple storage backends are supported; this example uses FileVectorStore, which persists vectors as JSON files on disk.
Semantic Search
The RecallMemoryTool performs semantic search over stored memories automatically.
Memory Summarization
Conversations are summarized (here via LlmSummaryProvider) before storage to reduce token usage.
Advanced Patterns
Search Modes
Configure how memories are retrieved:
- vector: Semantic similarity search (default)
- keyword: Exact text matching
- hybrid: Combination of both
Custom Embedding Models
Use a different embedding provider by swapping the embeddingProvider passed to MemoryService.
Memory Filtering
Filter memories by user, app, or custom criteria.
Production Considerations
Privacy & Data Management
Cost Optimization
Performance
Use Cases
Personal Assistants
Remember user preferences, history, and context
Customer Support
Recall past issues, preferences, and solutions
Knowledge Bases
Search documentation, FAQs, and guides
Research Agents
Accumulate and query research findings
Next Steps
Database Sessions
Combine memory with persistent sessions
Multi-Agent Systems
Share memories across multiple agents