Overview
LimbicLayer is Layer 2 of the Pulse subsystem. It maintains a registry of per-module LSTM models that learn, online, whether a window of SignalEvents is relevant enough to wake a specific module.
Each module gets its own independent ClusterModel that is initialized with cold-start weight biasing derived from the module’s ModuleFingerprint.
Import
```python
from pulse.limbic import LimbicLayer, ClusterModel
```
LimbicLayer Class
Defined in pulse/limbic.py:63.
Constructor
```python
def __init__(self) -> None
```
Creates an empty registry. Modules must be registered via register() before scoring.
Methods
register
```python
def register(self, module_id: str, fingerprint: ModuleFingerprint) -> None
```
Create a ClusterModel for the module and apply cold-start weight biasing derived from fingerprint.slot_relevance_mask().
Relevant feature slots have their LSTM input weights scaled up; irrelevant slots have them scaled down so the model starts with a meaningful prior instead of random noise.
Parameters:
- module_id (str, required): Unique identifier for the module.
- fingerprint (ModuleFingerprint, required): Parsed fingerprint whose slot_relevance_mask() seeds the cold-start weight biasing.
Example:
```python
from pulse.limbic import LimbicLayer
from pulse.fingerprint import parse_fingerprint

limbic = LimbicLayer()

fingerprint_raw = {
    "module_id": "homework_watcher",
    "cluster": "homework",
    "version": "1.0.0",
    "question_template": "Check {location}?",
    "default_threshold": 0.7,
    "signal_priors": {
        "filesystem": {
            "watch_directories": ["~/Documents/Homework"],
            "relevant_extensions": [".pdf"],
            "irrelevant_extensions": [],
        },
    },
}
fingerprint = parse_fingerprint(fingerprint_raw)
limbic.register("homework_watcher", fingerprint)
```
score
```python
def score(self, module_id: str, window: list[SignalEvent]) -> float
```
Run inference on a window of SignalEvents and return a relevance score in [0.0, 1.0].
Returns 0.0 if the window is empty or the module is not registered.
Parameters:
- module_id (str, required): The module whose model will score the window.
- window (list[SignalEvent], required): Sequence of events to evaluate. Converted to a (1, T, 16) tensor internally.
Returns:
float: Relevance score in [0.0, 1.0]
Example:
```python
from pulse.retina import SignalEvent

event = SignalEvent(
    source="filesystem",
    location="/home/user/Documents/Homework/math.pdf",
    delta_type="created",
    magnitude=1.0,
    timestamp=1678123456.0,
    features={
        "extension": ".pdf",
        "size_bytes": 524288,
        "directory_depth": 4,
        "filename_tokens": ["math"],
    },
)

score = limbic.score("homework_watcher", [event])
print(f"Relevance score: {score:.2f}")
```
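The window-to-tensor conversion is internal to pulse.limbic, but its general shape can be sketched as follows. The slot assignments below are purely illustrative, not Pulse's actual 16-slot layout:

```python
import torch

FEATURE_DIM = 16

def encode_event(features: dict) -> torch.Tensor:
    """Encode one event's feature dict into a FEATURE_DIM vector.

    Illustrative only: the real slot layout lives inside pulse.limbic.
    """
    vec = torch.zeros(FEATURE_DIM)
    vec[0] = 1.0 if features.get("extension") == ".pdf" else 0.0
    vec[1] = features.get("size_bytes", 0) / 1e6  # size in MB
    vec[2] = float(features.get("directory_depth", 0))
    return vec

# Two events -> a (1, T=2, FEATURE_DIM) tensor, matching the documented shape
window = [
    {"extension": ".pdf", "size_bytes": 524288, "directory_depth": 4},
    {"extension": ".txt", "size_bytes": 1024, "directory_depth": 2},
]
x = torch.stack([encode_event(f) for f in window]).unsqueeze(0)
```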
update_weights
```python
def update_weights(
    self,
    module_id: str,
    window: list[SignalEvent],
    label: float,
) -> None
```
Perform a single online gradient step using BCELoss. This is how the model learns from feedback.
No-op if the window is empty or the module is not registered.
Parameters:
- module_id (str, required): The module whose model will be updated.
- window (list[SignalEvent], required): The same event window that was previously scored.
- label (float, required): Ground truth label in [0.0, 1.0], where 1.0 means the window was highly relevant.
Example:
```python
# After the agent confirms the event was relevant:
limbic.update_weights("homework_watcher", [event], label=1.0)

# After the agent confirms the event was NOT relevant:
limbic.update_weights("homework_watcher", [event], label=0.0)
```
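The single gradient step that update_weights performs can be sketched with a simple stand-in scorer (the mean-pooled linear model and the optimizer choice here are assumptions, not ClusterModel's internals):

```python
import torch
import torch.nn as nn

# Stand-in scorer: mean-pool the window, then Linear + Sigmoid
model = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)  # optimizer choice is assumed
loss_fn = nn.BCELoss()

window = torch.randn(1, 5, 16)   # (batch, T, FEATURE_DIM)
pooled = window.mean(dim=1)      # (1, 16)
label = torch.tensor([[1.0]])    # agent said "relevant"

# One online gradient step on a single labeled window
optimizer.zero_grad()
score = model(pooled)            # (1, 1), in [0.0, 1.0]
loss = loss_fn(score, label)
loss.backward()
optimizer.step()
```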
save
```python
def save(self, path: Path) -> None
```
Persist all model weights and optimizer states to disk as a PyTorch checkpoint.
Parameters:
- path (Path, required): File path where the checkpoint will be written.
Example:
```python
from pathlib import Path

limbic.save(Path("/var/pulse/limbic_models.pt"))
```
load
```python
def load(self, path: Path) -> None
```
Restore model weights and optimizer states from disk.
Modules present in the checkpoint but not yet registered are re-created as fresh ClusterModel instances with restored state.
Parameters:
- path (Path, required): File path to the checkpoint created by save().
Example:
```python
from pathlib import Path

limbic = LimbicLayer()
limbic.load(Path("/var/pulse/limbic_models.pt"))

# Models are now restored and ready for inference
score = limbic.score("homework_watcher", [event])
```
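A checkpoint layout consistent with the save()/load() behavior above can be sketched like this; the key names ("model", "optimizer") and the per-module nesting are assumptions, not the verified on-disk format:

```python
import tempfile
from pathlib import Path

import torch
import torch.nn as nn

# Stand-in for one registered module's model and optimizer
model = nn.LSTM(16, 64, batch_first=True)
optimizer = torch.optim.Adam(model.parameters())

# Hypothetical layout: one entry per module_id, holding both state dicts
checkpoint = {
    "homework_watcher": {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    },
}
path = Path(tempfile.gettempdir()) / "limbic_models.pt"
torch.save(checkpoint, path)

# Mirrors load(): a module present in the checkpoint but not yet
# registered gets a fresh model with its saved state restored.
restored = torch.load(path)
fresh = nn.LSTM(16, 64, batch_first=True)
fresh.load_state_dict(restored["homework_watcher"]["model"])
```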
ClusterModel Class
Defined in pulse/limbic.py:22. This is the internal LSTM model used for scoring.
Architecture
```python
class ClusterModel(nn.Module):
    HIDDEN_SIZE: int = 64
```
- Input: (batch=1, window_len, FEATURE_DIM) float32 tensor
- LSTM: 1 layer, hidden size 64
- Output head: Linear(64, 1) + Sigmoid
- Output: Scalar float32 relevance score in [0.0, 1.0]
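A minimal model matching the documented architecture might look like the sketch below. This is for orientation only, not the actual pulse/limbic.py source, and the last-timestep readout is an assumption:

```python
import torch
import torch.nn as nn

FEATURE_DIM = 16  # per the (1, window_len, 16) input shape above

class ClusterModelSketch(nn.Module):
    """Sketch of the documented architecture, not the real ClusterModel."""

    HIDDEN_SIZE: int = 64

    def __init__(self) -> None:
        super().__init__()
        self.lstm = nn.LSTM(FEATURE_DIM, self.HIDDEN_SIZE, num_layers=1, batch_first=True)
        self.head = nn.Linear(self.HIDDEN_SIZE, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                  # (1, T, 64)
        logit = self.head(out[:, -1, :])       # read out the last timestep -> (1, 1)
        return torch.sigmoid(logit).squeeze()  # scalar in [0.0, 1.0]

score = ClusterModelSketch()(torch.randn(1, 5, FEATURE_DIM))
```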
forward
```python
def forward(self, x: torch.Tensor) -> torch.Tensor
```
Run inference on a batch of event windows.
Parameters:
- x (torch.Tensor, required): Input tensor of shape (1, window_len, FEATURE_DIM).
Returns:
torch.Tensor: Scalar relevance score in [0.0, 1.0]
Example:
```python
import torch
from pulse.limbic import ClusterModel

model = ClusterModel()
x = torch.randn(1, 5, 16)  # batch of 1, window of 5 events, 16 features
score = model(x)
print(score.item())  # e.g., 0.73
```
Cold-Start Weight Biasing
When a module is registered, LimbicLayer applies a cold-start bias to the LSTM input-to-hidden weights using the formula:
scale[i] = 0.1 + 1.9 * mask[i]
Where mask[i] comes from fingerprint.slot_relevance_mask():
- mask = 0.0 → scale = 0.1 (nearly zeroed; irrelevant slot)
- mask = 0.5 → scale = 1.05 (neutral)
- mask = 1.0 → scale = 2.0 (doubled; highly relevant slot)
This ensures that on day one, the model already attends to the right features.
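The biasing can be sketched as a column-wise rescale of the LSTM's input-to-hidden weight matrix. The mask values below are illustrative, and direct access to the raw nn.LSTM is an assumption about how LimbicLayer applies the scale:

```python
import torch
import torch.nn as nn

FEATURE_DIM, HIDDEN_SIZE = 16, 64
lstm = nn.LSTM(FEATURE_DIM, HIDDEN_SIZE, num_layers=1, batch_first=True)

# Illustrative slot_relevance_mask() output: one value per feature slot
mask = torch.full((FEATURE_DIM,), 0.5)  # neutral by default
mask[0] = 1.0                           # a highly relevant slot
mask[1] = 0.0                           # an irrelevant slot

scale = 0.1 + 1.9 * mask                # maps [0, 1] -> [0.1, 2.0]

with torch.no_grad():
    # weight_ih_l0 has shape (4 * HIDDEN_SIZE, FEATURE_DIM); scaling
    # column i rescales every gate's sensitivity to feature slot i.
    lstm.weight_ih_l0 *= scale          # broadcasts across the rows
```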
Online Learning Loop
```python
from pulse.limbic import LimbicLayer
from pulse.retina import SignalEvent

limbic = LimbicLayer()
# ... register modules ...

# 1. Score an event window
event = SignalEvent(...)
score = limbic.score("homework_watcher", [event])

# 2. Present to agent (if score > threshold)
if score > 0.7:
    user_feedback = ask_agent("Is this relevant?")

    # 3. Update weights based on feedback
    label = 1.0 if user_feedback == "yes" else 0.0
    limbic.update_weights("homework_watcher", [event], label)

# 4. Model improves over time
```
See Also