
The Problem

AI coding agents are stateless. Every session starts from scratch — no memory of previous debugging sessions, architectural decisions, or bugfix context. Teams work around this with CLAUDE.md files and manual notes, but these are disconnected from the code graph and cannot be automatically surfaced when relevant. Development Memories solve this by storing structured knowledge directly in the symbol graph. When an agent builds a slice touching authenticate(), it automatically sees the memory: “Fixed race condition here — added mutex on session map.”
Agent Session 1                              Agent Session 2
─────────────                                ─────────────
"Fixed race condition in                     sdl.memory.surface
 authenticate() — added mutex"                    │
      │                                      ┌────┴─────┐
      ▼                                      │ Relevant  │
 sdl.memory.store                            │ memories  │
      │                                      │ surfaced  │
      ├──▶ Graph DB (Memory node)            └────┬─────┘
      │     ├── MEMORY_OF ──▶ authenticate()      │
      │     └── HAS_MEMORY ◀── Repo               ▼
      │                                      "Previous fix: race condition
      └──▶ .sdl-memory/bugfixes/a1b2c3.md     in authenticate() — mutex added"
           (YAML frontmatter + markdown)

How Memories Work

Every memory exists in two places simultaneously:
  1. Graph Database — a Memory node in LadybugDB with edges to Repo, Symbol, and File nodes. Enables fast querying, ranking, and automatic surfacing inside slices.
  2. Markdown Files: .sdl-memory/<type>/<memoryId>.md files with YAML frontmatter. These files can be committed to version control, shared across team members, and survive database rebuilds.
The graph is the primary store. Files are a durable backup and collaboration mechanism. During sdl.index.refresh, any .sdl-memory/ files on disk are imported into the graph automatically.

Graph Edges

| Edge | From | To | Purpose |
| --- | --- | --- | --- |
| HAS_MEMORY | Repo | Memory | Repository owns this memory |
| MEMORY_OF | Memory | Symbol | Memory is about this symbol |
| MEMORY_OF_FILE | Memory | File | Memory relates to this file |

Memory Types

| Type | Directory | Use Case |
| --- | --- | --- |
| decision | .sdl-memory/decisions/ | Architectural decisions, design choices, "why we did it this way" |
| bugfix | .sdl-memory/bugfixes/ | Bug context, root cause analysis, regression notes |
| task_context | .sdl-memory/task_context/ | In-progress work context, handoff notes between sessions |

Memory File Format

Each memory is stored as a markdown file with YAML frontmatter:
---
memoryId: a1b2c3d4e5f6g7h8
type: bugfix
title: Race condition in authenticate() session map
tags: [auth, concurrency, mutex]
confidence: 0.9
symbols: [sym_abc123, sym_def456]
files: [src/auth/authenticate.ts, src/auth/session-store.ts]
createdAt: 2026-03-15T10:30:00.000Z
deleted: false
---

The `authenticate()` function was reading and writing to the session map
without synchronization. Under concurrent requests, two sessions could
overwrite each other's tokens.

**Fix:** Added a mutex lock around the session map read-write block.
The lock is acquired before reading the current session state and released
after writing the new token. See commit abc1234.

**Root cause:** The session store was designed for single-threaded use
but started receiving concurrent calls after the connection pool was
introduced in v0.7.

Directory Structure

<repo-root>/
└── .sdl-memory/
    ├── decisions/
    │   └── a1b2c3d4e5f6g7h8.md
    ├── bugfixes/
    │   └── d4e5f6g7h8i9j0k1.md
    └── task_context/
        └── l2m3n4o5p6q7r8s9.md
Commit .sdl-memory/ to Git. When teammates pull and run sdl.index.refresh, memories are automatically imported into their local graph.

MCP Tools

sdl.memory.store

Store or update a memory with optional symbol and file links.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| repoId | string | yes | Repository ID. |
| type | string | yes | Memory type: "decision", "bugfix", or "task_context". |
| title | string | yes | Short title (1–120 characters). |
| content | string | yes | Full memory content (1–50,000 characters). |
| tags | string[] | no | Up to 20 tags for filtering and cross-cutting queries. |
| confidence | number | no | Confidence score from 0.0–1.0 (default: 0.8). Use 0.9+ for verified facts, 0.5–0.7 for hypotheses. |
| symbolIds | string[] | no | Link to up to 100 symbols. Linked memories surface automatically when those symbols appear in slices. |
| fileRelPaths | string[] | no | Link to up to 100 files by relative path. |
| memoryId | string | no | If provided, updates the existing memory in place and clears the stale flag. |
// Store an architectural decision
{
  "repoId": "my-repo",
  "type": "decision",
  "title": "Use mutex for session store concurrency",
  "content": "After investigating the race condition in authenticate(), decided to use a simple mutex rather than a concurrent map. The mutex approach is simpler and the session store throughput doesn't justify lock-free data structures.\n\nAlternatives considered:\n- ConcurrentHashMap: overhead not justified for <100 concurrent sessions\n- Read-write lock: writes are frequent enough to negate read lock benefits",
  "tags": ["auth", "concurrency", "architecture"],
  "confidence": 0.95,
  "symbolIds": ["sym_authenticate_abc123", "sym_sessionStore_def456"]
}
Response:
{
  "ok": true,
  "memoryId": "a1b2c3d4e5f6g7h8",
  "created": true,
  "deduplicated": false
}
Deduplication: If a memory with the same content hash (SHA-256 of repoId + type + title + content) already exists, the call returns deduplicated: true with the existing memoryId rather than creating a duplicate.
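A minimal sketch of that dedup key, assuming the four fields are simply concatenated before hashing (the exact separator and field ordering used internally are not specified here):

```typescript
import { createHash } from "node:crypto";

// Sketch of the dedup key: SHA-256 over repoId + type + title + content.
// The newline separator is an assumption; the real tool may join differently.
function memoryContentHash(
  repoId: string,
  type: string,
  title: string,
  content: string
): string {
  return createHash("sha256")
    .update([repoId, type, title, content].join("\n"))
    .digest("hex");
}
```

Because the key includes the title, storing the same content under a new title would create a second memory rather than deduplicate.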

sdl.memory.query

Search and filter memories with flexible criteria.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| repoId | string | yes | Repository ID. |
| query | string | no | Text search matched against the combined title and content. When semantic.retrieval.mode: "hybrid" is enabled, memories are also retrievable via the FTS index. |
| types | string[] | no | Filter by memory types. |
| tags | string[] | no | Filter by tags (OR logic; any match qualifies). |
| symbolIds | string[] | no | Filter to memories linked to these symbols via MEMORY_OF edges. |
| staleOnly | boolean | no | Return only stale memories (useful after refactors). |
| limit | number | no | Maximum results, 1–100 (default: 20). |
| sortBy | string | no | "recency" (default) or "confidence". |
{
  "repoId": "my-repo",
  "query": "auth",
  "types": ["bugfix", "decision"],
  "sortBy": "confidence"
}

sdl.memory.remove

Soft-delete a memory from the graph with optional file cleanup.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| repoId | string | yes | Repository ID. |
| memoryId | string | yes | ID of the memory to remove. |
| deleteFile | boolean | no | Delete the .sdl-memory/*.md file from disk (default: true). When false, the file is kept but its deleted frontmatter field is set to true, preserving history in version control while marking it inactive. |
All edges (HAS_MEMORY, MEMORY_OF, MEMORY_OF_FILE) are removed and the Memory node is marked deleted: true (soft delete).
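The deleteFile: false path can be sketched as a small frontmatter rewrite. markFrontmatterDeleted is a hypothetical helper, not the tool's actual implementation:

```typescript
// Hypothetical sketch of the deleteFile: false behavior — flip the
// `deleted` field inside the frontmatter block and leave the body intact.
function markFrontmatterDeleted(raw: string): string {
  const end = raw.indexOf("\n---", 3); // closing delimiter of frontmatter
  if (end === -1) return raw;          // not a frontmatter file; leave as-is
  const head = raw.slice(0, end).replace(/^deleted:.*$/m, "deleted: true");
  return head + raw.slice(end);
}
```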

sdl.memory.surface

Explicitly surface relevant memories for a task context. Ranks memories by a composite score of confidence, recency, and symbol overlap.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| repoId | string | yes | Repository ID. |
| symbolIds | string[] | no | Up to 500 symbol IDs for context matching. |
| taskType | string | no | Filter by memory type ("decision", "bugfix", or "task_context"). |
| limit | number | no | Maximum results, 1–50 (default: 10). |
{
  "repoId": "my-repo",
  "symbolIds": ["sym_abc", "sym_def", "sym_ghi"],
  "limit": 5
}
Ranking algorithm:
score = confidence × recencyFactor × overlapFactor

recencyFactor = 1.0 / (1 + daysSinceCreation / 30)
    → 1.0 for today, 0.5 at 30 days, 0.25 at 90 days

overlapFactor = linkedSymbolCount / querySymbolCount
    → Fraction of query symbols this memory relates to
    → 1.0 when no symbolIds provided (repo-level memories always qualify)
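The ranking can be sketched as follows. The memory shape here is illustrative rather than the actual graph schema, and the overlap factor is interpreted as the count of query symbols the memory links to, divided by the query size:

```typescript
// Illustrative shape — the real Memory node carries more fields.
interface MemoryForRanking {
  confidence: number;
  daysSinceCreation: number;
  linkedSymbolIds: string[];
}

// Sketch of score = confidence × recencyFactor × overlapFactor.
function surfaceScore(m: MemoryForRanking, querySymbolIds: string[]): number {
  const recencyFactor = 1 / (1 + m.daysSinceCreation / 30);
  const overlapFactor =
    querySymbolIds.length === 0
      ? 1 // no symbolIds provided: repo-level memories always qualify
      : m.linkedSymbolIds.filter((id) => querySymbolIds.includes(id)).length /
        querySymbolIds.length;
  return m.confidence * recencyFactor * overlapFactor;
}
```

For example, a 30-day-old memory with confidence 0.9 linked to one of two query symbols scores 0.9 × 0.5 × 0.5 = 0.225.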

Automatic Surfacing in Slices

When sdl.slice.build is called, memories are automatically surfaced alongside the response — no extra tool call required. How it works:
  1. After the slice is built, the system collects all symbolIds from the slice cards
  2. It queries for memories linked to those symbols plus any repo-level memories
  3. Memories are ranked using the confidence × recency × overlap algorithm
  4. The top N memories (default: 5) are embedded in the slice response as memories[]
{
  "cards": [ ... ],
  "memories": [
    {
      "memoryId": "a1b2c3d4e5f6g7h8",
      "type": "bugfix",
      "title": "Race condition in authenticate()",
      "content": "...",
      "confidence": 0.9,
      "stale": false,
      "linkedSymbols": ["sym_abc123"],
      "tags": ["auth", "concurrency"]
    }
  ]
}
Control parameters on sdl.slice.build:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| includeMemories | boolean | true | Set to false to disable memory surfacing |
| memoryLimit | number | 5 | Max memories to include (0–20) |
Memory surfacing is non-critical. If it fails, the slice is still returned successfully (without memories) and a warning is logged.
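For example, a sdl.slice.build request that keeps surfacing on but caps the list at three memories might look like the following. Only includeMemories and memoryLimit are documented here; the other fields are illustrative placeholders:

```json
{
  "repoId": "my-repo",
  "includeMemories": true,
  "memoryLimit": 3
}
```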

Staleness Detection

When sdl.index.refresh runs, memories linked to changed symbols are automatically flagged as stale. How it works:
  1. After indexing, the system identifies all symbolIds in changed files
  2. It queries for memories linked to those symbols via MEMORY_OF edges
  3. Each matching memory gets stale: true and staleVersion set to the current version ID
  4. Stale memories are still surfaced, but include the stale: true flag as a signal to review
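The flagging pass above can be sketched as a pure function over memory nodes. The types are illustrative, not the actual graph schema:

```typescript
// Illustrative node shape for the staleness sketch.
interface MemoryNode {
  memoryId: string;
  linkedSymbolIds: string[];
  stale: boolean;
  staleVersion?: string;
}

// Flag every memory linked to a changed symbol as stale, recording the
// version at which it went stale. Unlinked memories pass through untouched.
function flagStaleMemories(
  memories: MemoryNode[],
  changedSymbolIds: Set<string>,
  versionId: string
): MemoryNode[] {
  return memories.map((m) =>
    m.linkedSymbolIds.some((id) => changedSymbolIds.has(id))
      ? { ...m, stale: true, staleVersion: versionId }
      : m
  );
}
```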
What to do with stale memories:
  • Review — the linked code changed; does the memory still apply?
  • Update — call sdl.memory.store with the existing memoryId to update content and clear the stale flag
  • Remove — call sdl.memory.remove if the memory is no longer relevant

Team Sharing via Version Control

During sdl.index.refresh, the indexer scans the .sdl-memory/ directory and imports any files found:
  1. scanMemoryFiles(repoRoot) recursively finds all .md files under .sdl-memory/
  2. Each file is parsed (YAML frontmatter + markdown body)
  3. Files marked deleted: true in frontmatter are skipped
  4. For each valid file, a contentHash is computed and the memory is upserted into the graph
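A sketch of the filter applied in steps 2 and 3 (parse the frontmatter, skip files soft-deleted via the deleted flag); the helper name is hypothetical:

```typescript
// Hypothetical sketch: decide whether a scanned .sdl-memory/ file should
// be imported. Files without frontmatter or marked deleted are skipped.
function shouldImportMemoryFile(raw: string): boolean {
  const fm = raw.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return false;                       // not a memory file
  return !/^deleted:\s*true\s*$/m.test(fm[1]); // skip soft-deleted files
}
```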
This means the knowledge-sharing workflow is:
  1. Store a memory: Call sdl.memory.store to record a debugging insight, architectural decision, or task note. The tool writes both a graph node and a .sdl-memory/ markdown file.
  2. Commit the file: Add .sdl-memory/ to your Git repository and commit the new file alongside your code changes.
  3. Teammates pull: Other developers pull the changes, including the new memory file.
  4. Next index refresh imports it: On their next sdl.index.refresh, the memory file is scanned and automatically upserted into their local graph.
File sync failures are non-critical — indexing continues even if memory import fails for individual files.

Best Practices

  1. Write memories when you learn something non-obvious — if a debugging session took 30 minutes, the root cause is worth a bugfix memory
  2. Link memories to specific symbols — unlinked memories only surface via repo-level queries; linked memories appear automatically in relevant slices
  3. Use tags consistently — tags enable cross-cutting queries like “all auth-related decisions”
  4. Review stale memories after refactors — query staleOnly: true and update or remove outdated knowledge
  5. Commit .sdl-memory/ to Git — shares knowledge across the team and survives database rebuilds
  6. Set confidence intentionally — 0.9+ for verified facts, 0.5–0.7 for hypotheses or temporary notes
