## The Problem: Context Loss Between AI Sessions

Every new AI coding session starts from scratch. Your assistant has no memory of:

- What you worked on yesterday
- Bugs you’ve already fixed
- Project conventions you’ve established
- Code architecture decisions you’ve made
## The Solution: LongMem

LongMem is a persistent memory daemon that runs locally on your machine. It records your coding activity in a private SQLite database, giving your AI assistant long-term memory across sessions, so your assistant always starts with context and you never have to repeat yourself.
## Key Features

### Local-First

All data stays on your machine by default. No cloud uploads unless you explicitly enable compression.

### Automatic Context

Your AI assistant automatically recalls past sessions without manual prompting.

### Smart Privacy

Built-in secret redaction protects sensitive data like API keys and credentials.

### Zero Manual Work

No manual note-taking. LongMem hooks into your AI assistant automatically.
## What LongMem Stores

Stored locally in `~/.longmem/memory.db`:
- User prompts to the AI
- Commands executed by the AI
- Tool outputs (after redaction)
- File references and changes
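Since the memory database is plain SQLite, it can be inspected with standard tools. The sketch below is illustrative only: the `events` table name and its columns are assumptions, not LongMem's documented schema.

```python
import sqlite3

# Hypothetical schema sketch; LongMem's real tables may differ.
db = sqlite3.connect(":memory:")  # in practice: ~/.longmem/memory.db
db.execute(
    """CREATE TABLE IF NOT EXISTS events (
           id INTEGER PRIMARY KEY,
           session_id TEXT,
           kind TEXT,     -- 'prompt', 'command', 'tool_output', 'file_change'
           content TEXT,  -- stored after redaction
           created_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)
db.execute(
    "INSERT INTO events (session_id, kind, content) VALUES (?, ?, ?)",
    ("s1", "prompt", "Fix the flaky login test"),
)

# Recall the most recent prompts, newest first.
rows = db.execute(
    "SELECT kind, content FROM events WHERE kind = 'prompt' ORDER BY id DESC"
).fetchall()
print(rows)  # → [('prompt', 'Fix the flaky login test')]
```

Because everything lives in one local file, backing up or exporting your memory is as simple as copying `memory.db`.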
## How It Works

LongMem integrates with Claude Code and OpenCode through hooks:

1. Your AI assistant runs commands, and LongMem hooks capture the activity.
2. Events are sent to the daemon, which runs locally on port 38741.
3. Data is stored in SQLite: encrypted at rest, searchable, and exportable.
4. The AI queries memory via MCP for automatic context retrieval in future sessions.
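Step 2 above amounts to a hook serializing each captured activity and POSTing it to the local daemon. The sketch below only constructs such a payload; the endpoint path and event fields are assumptions (only the port, 38741, comes from this page).

```python
import json

# Hypothetical endpoint; only the port number is documented.
DAEMON_URL = "http://127.0.0.1:38741/events"

def make_event(session_id: str, kind: str, content: str) -> str:
    """Serialize one captured activity as the JSON body a hook might POST."""
    return json.dumps({
        "session_id": session_id,
        "kind": kind,        # e.g. "prompt", "command", "tool_output"
        "content": content,  # redacted before it leaves the hook
    })

body = make_event("s1", "command", "pytest tests/test_login.py")
print(body)
# A hook would then POST `body` to DAEMON_URL, e.g. with urllib.request.
```

Keeping the daemon on localhost means events never cross a network boundary, which is what makes the local-first guarantee enforceable.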
## Privacy Modes

LongMem offers three privacy levels:

| Mode | Behavior |
|---|---|
| `safe` | Redacts common secrets and blocks sensitive files (recommended) |
| `flexible` | Safe mode plus custom regex patterns for domain-specific secrets |
| `none` | No redaction (for air-gapped or fully trusted environments) |
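To make the modes concrete, here is a minimal sketch of regex-based redaction. The patterns and function are illustrative assumptions, not LongMem's actual rule set; `flexible` mode's custom patterns are modeled as an extra argument.

```python
import re

# Illustrative "safe" mode patterns (assumptions, not LongMem's real list).
SAFE_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key IDs
    re.compile(r"(?i)(password|token)=\S+"),  # credentials in query strings
]

def redact(text: str, extra_patterns=()) -> str:
    """safe mode uses SAFE_PATTERNS; flexible mode appends custom regexes."""
    for pattern in (*SAFE_PATTERNS, *extra_patterns):
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("curl -H 'Authorization: sk-abcdefghijklmnopqrstuvwx'"))
# → curl -H 'Authorization: [REDACTED]'
```

Redaction happens before events reach the database, so secrets never touch disk in any mode except `none`.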
## Optional Compression

Compression uses an LLM to create short summaries of past sessions, improving recall and search relevance.

- Without compression: LongMem works fully, but may return raw, verbose logs.
- With compression: better semantic search; requires an API key (OpenRouter, OpenAI, Anthropic, or local Ollama).
## Next Steps

### Installation

Install LongMem on macOS or Linux in 2 minutes.

### Quickstart

Get LongMem running in 5 minutes.