
Your coding agents forget everything after each session. Lerim remembers — across all of them.

Lerim is a continual learning layer that gives coding agents persistent, shared memory across sessions and platforms. Use Claude Code, Cursor, Codex, and OpenCode on the same project — Lerim unifies their knowledge into one memory store that every agent can query.

The problem

You spend 20 minutes explaining context to your coding agent. It writes great code. Next session? It’s forgotten everything. Every decision, every pattern, every “we tried X and it didn’t work” — gone. And if you use multiple agents — Claude Code at the terminal, Cursor in the IDE, Codex for reviews — none of them know what the others learned. Your project knowledge is scattered across isolated sessions with no shared memory. This is agent context amnesia, and it’s one of the biggest productivity drains in AI-assisted development.

The solution

Lerim solves this by:
  • Watching your agent sessions across Claude Code, Codex CLI, Cursor, and OpenCode
  • Extracting decisions and learnings automatically using LLM pipelines
  • Storing everything as plain markdown files in your repo (.lerim/)
  • Refining memories continuously — merging duplicates, archiving stale entries, and applying time-based decay
  • Unifying knowledge across all your agents — what Cursor learns, Claude Code can recall
  • Answering questions about past context: lerim ask "why did we choose Postgres?"
No proprietary format. No database lock-in. Just markdown files that both humans and agents can read. Memories get smarter over time, not stale.
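Because memories are plain markdown, they can be created and searched with ordinary tools. A minimal sketch of what a record in `.lerim/` might look like — the file name, front matter, and fields here are illustrative assumptions, not Lerim's actual schema:

```shell
# Illustrative only: Lerim's real record layout may differ.
mkdir -p .lerim/decisions

# A decision captured as a plain markdown file.
cat > .lerim/decisions/2024-05-01-use-postgres.md <<'EOF'
---
type: decision
created: 2024-05-01
---
# Use Postgres over SQLite

We chose Postgres because the app needs concurrent writers
and JSONB querying; SQLite hit lock contention in tests.
EOF

# No database needed: any human or agent can grep the store.
grep -ril "postgres" .lerim/decisions
```

The point of the sketch is the format, not the tooling: anything that reads files can read the memory store.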

Get started

Quickstart

Get from zero to first working command in under 5 minutes

Installation

Detailed installation instructions and prerequisites

CLI reference

Complete command-line interface documentation

Architecture

How Lerim works under the hood

Key features

Multi-agent support

Works with Claude Code, Cursor, Codex CLI, and OpenCode

Plain markdown storage

No proprietary formats — just .md files in .lerim/

Automatic extraction

LLM pipelines extract decisions and learnings from sessions

Continuous refinement

Merges duplicates, archives stale entries, applies time decay

Natural language queries

Ask questions about past context in plain English

Local-first

Runs entirely on your machine with Docker or standalone

Supported agents

Agent         Session format    Status
Claude Code   JSONL traces      Supported
Codex CLI     JSONL traces      Supported
Cursor        SQLite to JSONL   Supported
OpenCode      SQLite to JSONL   Supported
More agents coming soon — PRs welcome! See the contributing guide to add support for your favorite agent.
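All session formats normalize to JSONL: one JSON object per line, which keeps traces line-oriented and easy to process. A hedged sketch of what a trace line might contain — the field names below are assumptions for illustration, not the agents' real schemas:

```shell
# Hypothetical JSONL session trace (field names illustrative).
cat > session.jsonl <<'EOF'
{"role": "user", "content": "Why is the build failing?"}
{"role": "assistant", "content": "The lockfile pins an old toolchain."}
EOF

# One object per line means plain text tools work per message.
grep -c '"role": "assistant"' session.jsonl   # prints 1
```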

How it works

Lerim is file-first and primitive-first:
  • Primitive folders: decisions, learnings, summaries
  • Project memory first: <repo>/.lerim/
  • Global fallback memory: ~/.lerim/
  • Search default: files (no index required)
  • Orchestration runtime: pydantic-ai lead agent + read-only explorer subagent
  • Extraction/summarization: dspy.ChainOfThought with transcript windowing, role-configured models
This keeps memory readable by humans and easy for agents to traverse.
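The "project memory first, global fallback" rule can be sketched as a simple lookup — the two paths come from the list above, but the resolution logic here is an illustration, not Lerim's implementation:

```shell
# Sketch of the memory lookup order (logic illustrative).
lerim_memory_dir() {
  if [ -d "$PWD/.lerim" ]; then
    echo "$PWD/.lerim"    # project memory first
  else
    echo "$HOME/.lerim"   # global fallback
  fi
}

mkdir -p "$PWD/.lerim"
lerim_memory_dir > memdir.txt   # resolves to the project store
```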

Sync path

The sync path processes new agent sessions: reads transcript archives, extracts decision and learning candidates via DSPy, deduplicates against existing memories, and writes new primitives to the memory folder.
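The dedup step can be pictured as a check before writing: a candidate whose title already exists in the store is skipped. This is a toy sketch; how Lerim actually compares candidates against existing memories may be more sophisticated than an exact-title match:

```shell
# Toy sketch of sync-time dedup (Lerim's real check may be semantic).
mkdir -p .lerim/decisions
echo "# Use Postgres over SQLite" > .lerim/decisions/use-postgres.md

write_if_new() {  # $1 = candidate title, $2 = target file name
  if grep -rqx "# $1" .lerim/decisions; then
    echo "duplicate, skipping: $1"
  else
    echo "# $1" > ".lerim/decisions/$2"
  fi
}

write_if_new "Use Postgres over SQLite" "dup.md"   # skipped
write_if_new "Pin Node to 20.x" "pin-node.md"      # written
```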

Maintain path

The maintain path runs offline refinement over stored memories: merges duplicates, archives low-value entries, consolidates related memories, and applies time-based decay to keep the memory store clean and relevant.
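As a toy illustration of time-based decay, imagine archiving any memory untouched for 30 days. Lerim's real scoring is richer than file age alone, so treat this purely as a sketch of the idea (the 30-day threshold and GNU `touch -d` usage are assumptions):

```shell
# Toy sketch: archive memories not modified in 30+ days.
mkdir -p .lerim/learnings .lerim/archive
echo "# Old learning" > .lerim/learnings/old.md
echo "# Fresh learning" > .lerim/learnings/fresh.md
touch -d "45 days ago" .lerim/learnings/old.md   # backdate (GNU touch)

# Stale entries move to the archive; fresh ones stay active.
find .lerim/learnings -name '*.md' -mtime +30 \
  -exec mv {} .lerim/archive/ \;
ls .lerim/archive
```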

Dashboard

Lerim includes a local web UI for session analytics, memory browsing, and runtime status. Access it at http://localhost:8765 after running lerim up or lerim serve.

Dashboard features

  • Overview: High-level metrics and charts for sessions, messages, tools, errors, and tokens
  • Runs: Searchable session list with full-screen chat viewer
  • Memories: Library and editor for memory records with filters
  • Pipeline: Sync/maintain status and extraction queue state
  • Settings: Dashboard-editable config for server, model roles, and tracing

Next steps

1. Install Lerim: follow the installation guide to set up Python, Docker, and API keys
2. Quick start: complete the quickstart guide to get up and running in 5 minutes
3. Connect your agents: learn how to connect your coding agents in the connecting agents guide
4. Explore the CLI: master all commands in the CLI reference
If Lerim saves you from re-explaining context to your agent, give it a ⭐ on GitHub
