This page documents every configuration option available in Lerim’s TOML files. See the configuration overview to understand how the layered config system works.

Data directories

[data]

Controls where Lerim stores its data.
dir
string
default:"~/.lerim"
Global data directory path. Used for:
  • User config (config.toml)
  • Global memory store
  • Session database
  • Platform configuration
Example:
[data]
dir = "~/.lerim"

Memory settings

[memory]

Controls memory scope and storage behavior.
scope
string
default:"project_fallback_global"
Memory read/write scope. Determines where Lerim looks for and stores memories. Options:
  • project_fallback_global — Read from project first, fall back to global. Write to project. (recommended)
  • project_only — Read and write only in <repo>/.lerim/
  • global_only — Read and write only in ~/.lerim/
project_dir_name
string
default:".lerim"
Name of the project memory directory inside repositories.
Example:
[memory]
scope = "project_fallback_global"
project_dir_name = ".lerim"
Use project_fallback_global to keep project-specific memories in the repo while still having access to global learnings. Use project_only for strict project isolation.
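The fallback behavior can be sketched as a small resolution function. This is illustrative only, under the default directory names; `memory_roots` is a hypothetical name, not part of Lerim's API:

```python
from pathlib import Path

def memory_roots(scope, repo, home=Path.home()):
    # Hypothetical sketch of scope resolution, using the default
    # directory names (".lerim" in the repo and in $HOME).
    project = Path(repo) / ".lerim"
    global_ = home / ".lerim"
    if scope == "project_only":
        return {"read": [project], "write": project}
    if scope == "global_only":
        return {"read": [global_], "write": global_}
    # project_fallback_global: read project first, then global;
    # writes always go to the project store.
    return {"read": [project, global_], "write": project}
```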

[memory.decay]

Controls automatic confidence decay for memories.
enabled
boolean
default:"true"
Enable time-based memory decay. When enabled, memories that haven’t been accessed lose confidence over time.
decay_days
integer
default:"180"
Number of days of no access before full decay. Memories decay gradually from last access to this threshold.
min_confidence_floor
float
default:"0.1"
Minimum confidence multiplier. Decay never drops confidence below this value (0.0-1.0).
archive_threshold
float
default:"0.2"
Effective confidence threshold for archiving. Memories with confidence below this value become archive candidates during maintain (0.0-1.0).
recent_access_grace_days
integer
default:"30"
Grace period in days. Memories accessed within this window skip archiving even if confidence is below threshold.
Example:
[memory.decay]
enabled = true
decay_days = 180
min_confidence_floor = 0.1
archive_threshold = 0.2
recent_access_grace_days = 30
Memory decay keeps your knowledge store relevant by automatically reducing confidence for unused memories:
  1. Each memory tracks a last_accessed timestamp
  2. Confidence decays linearly from last_accessed to last_accessed + decay_days
  3. Decay multiplier is clamped to min_confidence_floor (never goes below 10% by default)
  4. Effective confidence = base_confidence * decay_multiplier
  5. During maintain, memories with effective confidence below archive_threshold are archived
  6. Recently accessed memories (within recent_access_grace_days) are protected from archiving
This ensures that:
  • Frequently used knowledge stays strong
  • Stale information gradually fades
  • Critical decisions don’t disappear (floor prevents total decay)
  • Recent memories aren’t archived prematurely
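Under these rules, the decay multiplier is a linear ramp from 1.0 at last access, clamped at the floor. A minimal sketch with the default values (function names are illustrative, not Lerim's API):

```python
def decay_multiplier(days_since_access, decay_days=180, floor=0.1):
    # Linear decay: 1.0 at last access, falling toward 0 at decay_days,
    # clamped so it never drops below min_confidence_floor.
    if days_since_access <= 0:
        return 1.0
    return max(1.0 - days_since_access / decay_days, floor)

def effective_confidence(base, days_since_access):
    # Effective confidence = base_confidence * decay_multiplier.
    return base * decay_multiplier(days_since_access)
```

For example, a memory with base confidence 0.8 that was last accessed 90 days ago has effective confidence 0.8 × 0.5 = 0.4, still above the default archive_threshold of 0.2.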

Server settings

[server]

Controls the daemon server and sync/maintain intervals.
host
string
default:"127.0.0.1"
Server bind address. Use 127.0.0.1 for local-only access or 0.0.0.0 to allow network access.
port
integer
default:"8765"
Server port for HTTP API and dashboard.
sync_interval_minutes
integer
default:"10"
How often the sync (hot) path runs. Sync indexes new sessions and extracts memories.
maintain_interval_minutes
integer
default:"60"
How often the maintain (cold) path runs. Maintain merges duplicates, archives stale entries, and applies decay.
sync_window_days
integer
default:"7"
How many days back to scan for new sessions during sync.
sync_max_sessions
integer
default:"50"
Maximum number of sessions to process in a single sync run.
Example:
[server]
host = "127.0.0.1"
port = 8765
sync_interval_minutes = 10
maintain_interval_minutes = 60
sync_window_days = 7
sync_max_sessions = 50
The daemon runs two independent loops: sync (hot path, frequent) and maintain (cold path, less frequent). Adjust intervals based on how actively you use your coding agents.

Model roles

Lerim uses four model roles, each independently configurable. See the model roles guide for detailed explanation.

[roles.lead]

Orchestrates chat, sync, and maintain flows using PydanticAI.
provider
string
default:"openrouter"
Provider name: openrouter, openai, zai, anthropic, ollama
model
string
default:"x-ai/grok-4.1-fast"
Model identifier for the provider.
api_base
string
default:""
Custom API base URL. Overrides the provider default.
fallback_models
array
default:"[]"
List of fallback models to try if primary model fails.
timeout_seconds
integer
default:"300"
Request timeout in seconds.
max_iterations
integer
default:"10"
Maximum agent iterations per run.
openrouter_provider_order
array
default:"[]"
OpenRouter provider routing preference (e.g., ["Together", "Lepton"]).
Example:
[roles.lead]
provider = "openrouter"
model = "x-ai/grok-4.1-fast"
api_base = ""
fallback_models = []
timeout_seconds = 300
max_iterations = 10
openrouter_provider_order = []

[roles.explorer]

Read-only subagent for candidate gathering.
provider
string
default:"openrouter"
Provider name.
model
string
default:"x-ai/grok-4.1-fast"
Model identifier.
api_base
string
default:""
Custom API base URL.
fallback_models
array
default:"[]"
List of fallback models.
timeout_seconds
integer
default:"180"
Request timeout in seconds.
max_iterations
integer
default:"8"
Maximum agent iterations.
openrouter_provider_order
array
default:"[]"
OpenRouter provider routing preference.
Example:
[roles.explorer]
provider = "openrouter"
model = "x-ai/grok-4.1-fast"
api_base = ""
fallback_models = []
timeout_seconds = 180
max_iterations = 8
openrouter_provider_order = []

[roles.extract]

DSPy extraction pipeline for identifying decisions and learnings.
provider
string
default:"openrouter"
Provider name.
model
string
default:"openai/gpt-5-nano"
Model identifier.
api_base
string
default:""
Custom API base URL.
fallback_models
array
default:"[\"x-ai/grok-4.1-fast\"]"
List of fallback models.
timeout_seconds
integer
default:"180"
Request timeout in seconds.
max_window_tokens
integer
default:"300000"
Maximum tokens per transcript window. Increase for large-context models.
window_overlap_tokens
integer
default:"5000"
Token overlap between consecutive windows.
openrouter_provider_order
array
default:"[]"
OpenRouter provider routing preference.
Example:
[roles.extract]
provider = "openrouter"
model = "openai/gpt-5-nano"
api_base = ""
fallback_models = ["x-ai/grok-4.1-fast"]
timeout_seconds = 180
max_window_tokens = 300000
window_overlap_tokens = 5000
openrouter_provider_order = []
The max_window_tokens setting is critical for extraction quality. If your sessions are long, increase this value or use a model with a larger context window.
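The interaction of max_window_tokens and window_overlap_tokens can be pictured as follows. This is an illustrative sketch of overlapped windowing in general, not Lerim's actual chunking code:

```python
def windows(tokens, max_window=300_000, overlap=5_000):
    # Split a token sequence into windows of at most max_window tokens,
    # where each window re-reads the last `overlap` tokens of the
    # previous one so context at the seam is not lost.
    # Assumes max_window > overlap, otherwise the loop would not advance.
    step = max_window - overlap
    out = []
    start = 0
    while start < len(tokens):
        out.append(tokens[start:start + max_window])
        if start + max_window >= len(tokens):
            break
        start += step
    return out
```

With the defaults, a 600k-token transcript would be processed as roughly three windows, each sharing 5,000 tokens with its neighbor.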

[roles.summarize]

DSPy summarization pipeline for session summaries.
provider
string
default:"openrouter"
Provider name.
model
string
default:"openai/gpt-5-nano"
Model identifier.
api_base
string
default:""
Custom API base URL.
fallback_models
array
default:"[\"x-ai/grok-4.1-fast\"]"
List of fallback models.
timeout_seconds
integer
default:"180"
Request timeout in seconds.
max_window_tokens
integer
default:"300000"
Maximum tokens per transcript window.
window_overlap_tokens
integer
default:"5000"
Token overlap between windows.
openrouter_provider_order
array
default:"[]"
OpenRouter provider routing preference.
Example:
[roles.summarize]
provider = "openrouter"
model = "openai/gpt-5-nano"
api_base = ""
fallback_models = ["x-ai/grok-4.1-fast"]
timeout_seconds = 180
max_window_tokens = 300000
window_overlap_tokens = 5000
openrouter_provider_order = []

Provider API bases

[providers]

Default API base URLs per provider. Per-role api_base settings take precedence.
zai
string
default:"https://api.z.ai/api/paas/v4"
ZAI API base URL.
openai
string
default:"https://api.openai.com/v1"
OpenAI API base URL.
openrouter
string
default:"https://openrouter.ai/api/v1"
OpenRouter API base URL.
ollama
string
default:"http://127.0.0.1:11434"
Ollama API base URL.
Example:
[providers]
zai = "https://api.z.ai/api/paas/v4"
openai = "https://api.openai.com/v1"
openrouter = "https://openrouter.ai/api/v1"
ollama = "http://127.0.0.1:11434"
You can point all roles at a custom endpoint by changing the provider base URL here, or override per role with the api_base setting in the relevant [roles.*] section.
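For instance, the two override levels can be combined like this (host URLs are hypothetical):

```toml
# Every role that uses ollama goes through a remote host...
[providers]
ollama = "http://gpu-box.local:11434"

# ...except the lead role, whose api_base takes precedence.
[roles.lead]
provider = "ollama"
api_base = "http://127.0.0.1:8080"
```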

Tracing

[tracing]

OpenTelemetry tracing settings. See the tracing guide for setup instructions.
enabled
boolean
default:"false"
Enable OpenTelemetry tracing. Can also be enabled with LERIM_TRACING=1.
include_httpx
boolean
default:"false"
Capture raw HTTP request/response bodies in traces.
include_content
boolean
default:"true"
Include prompt and completion text in trace spans.
Example:
[tracing]
enabled = false
include_httpx = false
include_content = true

Agents and projects

[agents]

Connected coding agent platforms. Written by lerim init and lerim connect. Example:
[agents]
claude = "~/.claude/projects"
codex = "~/.codex/sessions"
cursor = "~/Library/Application Support/Cursor/User/globalStorage"
opencode = "~/.local/share/opencode"
Each key is the agent name, and the value is the path to its session storage directory.

[projects]

Registered project paths. Written by lerim project add. Example:
[projects]
my-app = "~/codes/my-app"
backend = "~/work/backend"
frontend = "~/codes/frontend"
Each key is a short project name, and the value is the absolute path to the repository.
When running in Docker (lerim up), these paths determine the volume mounts. Adding or removing a project restarts the container to update mounts.

Complete example config

Here’s a complete example showing all sections:
# Lerim user configuration

[data]
dir = "~/.lerim"

[memory]
scope = "project_fallback_global"
project_dir_name = ".lerim"

[memory.decay]
enabled = true
decay_days = 180
min_confidence_floor = 0.1
archive_threshold = 0.2
recent_access_grace_days = 30

[server]
host = "127.0.0.1"
port = 8765
sync_interval_minutes = 10
maintain_interval_minutes = 60
sync_window_days = 7
sync_max_sessions = 50

[roles.lead]
provider = "openrouter"
model = "x-ai/grok-4.1-fast"
api_base = ""
fallback_models = []
timeout_seconds = 300
max_iterations = 10
openrouter_provider_order = []

[roles.explorer]
provider = "openrouter"
model = "x-ai/grok-4.1-fast"
api_base = ""
fallback_models = []
timeout_seconds = 180
max_iterations = 8
openrouter_provider_order = []

[roles.extract]
provider = "openrouter"
model = "openai/gpt-5-nano"
api_base = ""
fallback_models = ["x-ai/grok-4.1-fast"]
timeout_seconds = 180
max_window_tokens = 300000
window_overlap_tokens = 5000
openrouter_provider_order = []

[roles.summarize]
provider = "openrouter"
model = "openai/gpt-5-nano"
api_base = ""
fallback_models = ["x-ai/grok-4.1-fast"]
timeout_seconds = 180
max_window_tokens = 300000
window_overlap_tokens = 5000
openrouter_provider_order = []

[providers]
zai = "https://api.z.ai/api/paas/v4"
openai = "https://api.openai.com/v1"
openrouter = "https://openrouter.ai/api/v1"
ollama = "http://127.0.0.1:11434"

[tracing]
enabled = false
include_httpx = false
include_content = true

[agents]
claude = "~/.claude/projects"
codex = "~/.codex/sessions"
opencode = "~/.local/share/opencode"

[projects]
my-app = "~/codes/my-app"
backend = "~/work/backend"
