## Salience
Every record has a `salience` field in the range [0.0, 1.0] (conceptually unbounded, but capped at 1.0 by reinforcement). Salience represents the current importance of a record; higher-salience records are ranked first in retrieval results.
- New records start with `salience = 1.0`.
- Salience decreases over time via the decay function.
- Salience increases when a record is reinforced.
- Records with `salience < 0.001` and `deletion_policy: auto_prune` are pruned automatically.
## Exponential decay
Membrane uses exponential decay with a configurable half-life, where `elapsed` is the number of seconds since `last_reinforced_at`.
The `MinSalience` field acts as a floor: salience never decays below this value, so records cannot reach zero unless intentionally retracted or penalized.
### DecayProfile fields
`HalfLifeSeconds` for new records is 86400 (1 day).
## Reinforce and Penalize
Two operations adjust salience explicitly:

### Reinforce
Boosts salience by `ReinforcementGain`, capped at 1.0. Updates `last_reinforced_at` and adds a `reinforce` audit entry.
### Penalize
Reduces salience by a specified amount, floored at `MinSalience`. Adds a `decay` audit entry.
## Deletion policies
The `deletion_policy` field on each record controls how it may be deleted:
| Policy | Value | Behavior |
|---|---|---|
| Auto-prune | `auto_prune` | Deleted automatically when salience reaches the floor (< 0.001) |
| Manual only | `manual_only` | Only deleted by explicit user action |
| Never | `never` | Deletion is prevented entirely |
The `pinned` field on `Lifecycle` provides an additional safeguard: pinned records are never decayed or pruned, regardless of their deletion policy.
## Background decay scheduler
The decay scheduler runs `ApplyDecayAll` at a configurable interval (default: 1h). After each decay sweep, it runs `Prune` to delete auto-prune records whose salience has reached the floor.
| Job | Default interval | Purpose |
|---|---|---|
| Decay | 1h | Applies time-based salience decay using the exponential curve |
| Pruning | With decay | Deletes records with the `auto_prune` policy whose salience has reached the floor (< 0.001) |
## Consolidation pipeline
Consolidation promotes raw episodic experience into durable knowledge. The pipeline runs every 6h by default and consists of four stages:
### Episodic compression
Reduces salience of episodic records that have exceeded their age threshold, making room for new experience.
### Structural semantic consolidation
Scans episodic records with successful outcomes. For each timeline event with a summary, creates a new semantic record (subject: event kind, predicate: `observed_in`, object: summary) or reinforces an existing one.

### LLM-backed semantic extraction (Tier 4 only)
When `llm_endpoint` is configured (Postgres + LLM tier), sends episodic summaries to an OpenAI-compatible chat completions API. The LLM extracts structured subject-predicate-object triples that are stored as semantic records.

### Competence extraction
Groups successful episodic records by their tool signature. Patterns that appear at least twice are promoted into competence records with a recipe derived from the tool sequence.
### ConsolidationResult fields
## LLM-backed semantic extraction
On the Postgres + LLM tier (Tier 4), episodic records can be converted into typed semantic facts asynchronously. The extractor sends episodic content to an OpenAI-compatible endpoint; records it creates are tagged with `created_by: "consolidation/semantic-extractor"`.
Configure the LLM endpoint in `config.yaml`:
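A minimal sketch of the relevant configuration. Only the `llm_endpoint` key is named in this document; the URL value is illustrative.

```yaml
# config.yaml — minimal sketch; the endpoint URL shown is an example,
# not a default.
llm_endpoint: "http://localhost:8080/v1/chat/completions"
```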
## Background consolidation scheduler
| Job | Default interval | Purpose |
|---|---|---|
| Consolidation | 6h | Runs all pipeline stages |
Consolidation is automatic and requires no user approval per RFC 15B. All promoted knowledge remains subject to decay and can be revised through explicit operations.