1. Install LongMem

Run the interactive installer:
curl -fsSL https://github.com/clouitreee/LongMem/releases/latest/download/install.sh | bash
Accept the defaults (Safe privacy mode, auto-context enabled, no compression) or customize as needed.
See the Installation guide for detailed options and troubleshooting.
2. Start the Daemon

Launch the memory daemon:
longmem start
Expected output:
✓ Daemon started (PID: 12345)
✓ Listening on port 38741
The daemon runs in the background and persists across terminal sessions. On macOS/Linux with systemd or launchd, it will auto-start on boot.
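Liveness can also be checked programmatically. Below is a minimal sketch in Python that only assumes the daemon accepts TCP connections on its configured port (38741 by default); the helper itself is generic and mimics the port check that `longmem status` reports:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the default LongMem daemon port.
if port_open("127.0.0.1", 38741):
    print("daemon port reachable")
else:
    print("daemon port not reachable")
```

This only confirms something is listening on the port; use `longmem status` for the full health check.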
3. Verify It's Running

Check the daemon status:
longmem status
Expected output:
✓ Daemon is running (PID: 12345)
✓ Health check: OK
✓ Port 38741: accessible
✓ Memory database: 128 KB (23 observations)
4. Use With Your AI Assistant

LongMem integrates automatically with Claude Code and OpenCode; no manual configuration is needed. Start a new AI session:
# Claude Code
claude-code

# OR OpenCode
opencode
Your assistant now has access to three memory tools via MCP:
Tool            Description
────            ───────────
mem_search      Search past sessions by keyword or semantic meaning
mem_timeline    Get chronological context from recent work
mem_get         Retrieve full details of a specific observation
These tools are called automatically by your AI assistant. You don’t need to invoke them manually. Just ask natural questions like:
  • “What bug did I fix yesterday?”
  • “Show me the API changes from last week”
  • “What was the error message I got earlier?”
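Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. The sketch below shows what a `mem_search` call might look like on the wire; the argument names (`query`, `limit`) are illustrative assumptions, not documented parameters:

```python
import json

# Hypothetical MCP tools/call request for LongMem's mem_search tool.
# The "query" and "limit" argument names are guesses for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "mem_search",
        "arguments": {"query": "bug fixed yesterday", "limit": 5},
    },
}
print(json.dumps(request, indent=2))
```

Your assistant builds and sends requests like this for you; the sketch is only meant to demystify what "memory tools via MCP" means.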

Testing Memory

To verify memory is working:
1. Do some work

In your AI assistant, run a few commands:
You: Create a file called test.txt with "Hello LongMem"
AI: [creates the file]
2. Check memory stats

Exit the AI session and check stats:
longmem stats
Expected output:
Memory Statistics
─────────────────
Total observations: 3
Prompts:            1
Commands:           1
Tool outputs:       1

Database size:      4.2 KB
Oldest entry:       2026-03-04 10:23:15
Newest entry:       2026-03-04 10:23:47
3. Query memory in a new session

Start a fresh AI session and ask:
You: What file did I create in the last session?
AI: [queries mem_timeline] You created a file called test.txt

Where Is My Data Stored?

All LongMem data lives in ~/.longmem/:
~/.longmem/
├── memory.db       # SQLite database with all observations
├── settings.json   # Configuration (privacy, compression, etc.)
└── logs/           # Daemon logs
Your data never leaves your machine unless you explicitly enable compression with a cloud provider (OpenRouter, OpenAI, Anthropic).
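Because memory.db is plain SQLite, it can be inspected with standard tools. A read-only sketch in Python follows; the `observations` table name is inferred from the stats output and may differ in the actual schema. The demo builds a throwaway database with that assumed table; point `db_path` at `~/.longmem/memory.db` to inspect real data:

```python
import os
import sqlite3
import tempfile

def count_rows(db_path: str, table: str = "observations") -> int:
    """Count rows in a table, opening the database read-only."""
    uri = f"file:{db_path}?mode=ro"
    with sqlite3.connect(uri, uri=True) as conn:
        (n,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return n

# Demo on a throwaway database with the assumed (hypothetical) schema.
path = os.path.join(tempfile.mkdtemp(), "memory.db")
with sqlite3.connect(path) as conn:
    conn.execute(
        "CREATE TABLE observations (id INTEGER PRIMARY KEY, kind TEXT, body TEXT)"
    )
    conn.executemany(
        "INSERT INTO observations (kind, body) VALUES (?, ?)",
        [("prompt", "hi"), ("command", "ls"), ("tool_output", "ok")],
    )
print(count_rows(path))  # → 3
```

Opening the database read-only (`mode=ro`) avoids interfering with the daemon, which may hold a write lock.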

Useful Commands

longmem start     Start the memory daemon
longmem stop      Stop the daemon
longmem status    Check daemon health and port
longmem stats     Show memory statistics

Configuration

Edit ~/.longmem/settings.json to customize behavior:
{
  "daemon": { "port": 38741 },
  "privacy": {
    "mode": "safe",
    "redactSecrets": true,
    "customPatterns": []
  },
  "autoContext": { "enabled": true },
  "compression": {
    "enabled": false,
    "provider": "openrouter",
    "model": "meta-llama/llama-3.1-8b-instruct",
    "apiKey": ""
  }
}
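The file can also be edited programmatically, which avoids hand-editing mistakes in nested JSON. A minimal sketch that toggles compression on, using only the keys shown above; it writes a demo copy of the file, so point `cfg_path` at `~/.longmem/settings.json` for real use:

```python
import json
import os
import tempfile

# Default settings, mirroring the example above.
defaults = {
    "daemon": {"port": 38741},
    "privacy": {"mode": "safe", "redactSecrets": True, "customPatterns": []},
    "autoContext": {"enabled": True},
    "compression": {
        "enabled": False,
        "provider": "openrouter",
        "model": "meta-llama/llama-3.1-8b-instruct",
        "apiKey": "",
    },
}

cfg_path = os.path.join(tempfile.mkdtemp(), "settings.json")
with open(cfg_path, "w") as f:
    json.dump(defaults, f, indent=2)

# Enable compression while preserving every other setting.
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["compression"]["enabled"] = True
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(cfg_path) as f:
    print(json.load(f)["compression"]["enabled"])  # → True
```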
Restart the daemon after config changes:
longmem stop && longmem start

What Users Say

“I don’t have to repeat myself.”

“My assistant remembers yesterday’s bugs.”

“No cloud. My data stays local.”

Next Steps

Configuration

Customize privacy modes, compression, and advanced settings

Commands

Full CLI reference for all longmem commands
