# Hierarchical Signal Processing
Pulse’s three-layer architecture mirrors the hierarchical processing found in biological brains. Each layer has a specific responsibility and acts as a filter, ensuring that expensive computation only occurs when absolutely necessary. Each layer filters aggressively, so the vast majority of environmental changes never reach the agent.
## Layer 1: Retina
### Responsibility
The Retina is a deterministic change detector. It watches the environment for deltas and emits structured events when something changes. It does not interpret, reason, or filter for relevance — it only detects.

### What It Watches
- **File System**: New, modified, or deleted files in monitored directories, using the `watchdog` library (inotify on Linux)
- **Memory Namespaces**: Facts written or updated in monitored namespaces via internal event hooks
- **Time Signals**: Cyclical time features: hour of day, day of week, time since last activation (60-second tick)
- **Network (v2)**: Hash comparison on monitored HTTP endpoints (optional, future version)
### Signal Events
When the Retina detects a change, it emits a `SignalEvent` — a structured record containing:
- `source`: `filesystem`, `memory`, `time`, or `network`
- `location`: file path, namespace, or endpoint
- `delta_type`: `created`, `modified`, `deleted`, or `tick`
- `magnitude`: 0.0–1.0, normalized change size
- `timestamp`: Unix timestamp
- `features`: source-specific metadata
### Example: File System Event
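A sketch of what a filesystem `SignalEvent` might look like, using the fields listed above (the concrete values here are illustrative, not from a real run):

```python
# Illustrative SignalEvent for a newly created file. Field names follow
# the structure described above; all values are hypothetical examples.
filesystem_event = {
    "source": "filesystem",
    "location": "/home/user/Downloads/hw3.pdf",
    "delta_type": "created",
    "magnitude": 0.4,          # normalized change size, 0.0-1.0
    "timestamp": 1735689600,   # Unix timestamp
    "features": {"extension": ".pdf", "size_bytes": 284113},
}
```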
### Example: Time Tick Event
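A sketch of a time tick event under the same field structure; the feature names (`hour_sin`, `day_of_week`, etc.) are illustrative assumptions:

```python
import math

# Illustrative time-tick SignalEvent. Hour-of-day uses the cyclical
# sine/cosine encoding, so 23:00 maps to a point on the unit circle.
hour = 23
tick_event = {
    "source": "time",
    "location": "tick",
    "delta_type": "tick",
    "magnitude": 0.0,
    "timestamp": 1735689600,
    "features": {
        "hour_sin": math.sin(2 * math.pi * hour / 24),
        "hour_cos": math.cos(2 * math.pi * hour / 24),
        "day_of_week": 2,                      # 0 = Monday
        "seconds_since_last_activation": 1800,
    },
}
```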
Time features use cyclical sine/cosine encoding to correctly represent circular time. In this encoding, 23:00 and 01:00 are mathematically close, not far apart.
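One way to implement this encoding (a minimal sketch; `encode_hour` is a hypothetical helper, not a documented Pulse API):

```python
import math

def encode_hour(hour: float) -> tuple[float, float]:
    """Map an hour of day onto the unit circle so 23:00 and 01:00 are close."""
    angle = 2 * math.pi * hour / 24
    return math.sin(angle), math.cos(angle)

# Euclidean distance in the encoded space reflects circular closeness:
near = math.dist(encode_hour(23), encode_hour(1))   # 2 hours apart
far = math.dist(encode_hour(1), encode_hour(13))    # 12 hours apart
# near is much smaller than far, unlike raw hour values (|23 - 1| = 22).
```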
### Key Properties
- Stateless: The Retina has no memory. It doesn’t remember previous events — it only emits.
- Selective monitoring: Only watches directories declared in module fingerprints, not the entire filesystem.
- Threaded: Runs in its own thread, placing events onto a thread-safe queue for Layer 2.
- Always on: The time tick fires every 60 seconds regardless of other events. This is Pulse’s heartbeat.
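The properties above (threaded, queue-backed, always-on tick) can be sketched with only the standard library; the `watchdog` wiring is omitted, and the `Retina` class and `emit` method here are illustrative assumptions, not Pulse's actual interface:

```python
import queue
import threading
import time

class Retina:
    """Stateless emitter: detects deltas and pushes SignalEvents to a queue."""

    def __init__(self, tick_seconds: float = 60.0):
        self.events: "queue.Queue[dict]" = queue.Queue()  # consumed by Layer 2
        self.tick_seconds = tick_seconds
        self._stop = threading.Event()

    def emit(self, source: str, location: str, delta_type: str,
             magnitude: float = 0.0) -> None:
        # No state is kept about previous events -- emit and forget.
        self.events.put({
            "source": source, "location": location,
            "delta_type": delta_type, "magnitude": magnitude,
            "timestamp": time.time(),
        })

    def _tick_loop(self) -> None:
        # Heartbeat: fires on schedule regardless of other activity.
        while not self._stop.wait(self.tick_seconds):
            self.emit("time", "tick", "tick")

    def start(self) -> None:
        threading.Thread(target=self._tick_loop, daemon=True).start()
```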
## Layer 2: Limbic Filter
### Responsibility
The Limbic Filter is the learning component of Pulse. For each registered module, it maintains a small neural network that takes a window of recent `SignalEvent` objects and outputs a relevance score (0.0–1.0).
A score above the threshold triggers Layer 3.
### Architecture: Per-Module Models
Each module has its own independent LSTM (Long Short-Term Memory) neural network:

- Input: Sliding window of the last N `SignalEvent` feature vectors, flattened and padded
- Hidden size: 64 units
- Output: Single float (relevance score, 0.0–1.0) via sigmoid activation
- Parameters: ~50,000–200,000 (well under 1M)
- Inference time: Under 5 ms on CPU
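The input preparation, flattening a sliding window of feature vectors and padding to a fixed length, might look like this sketch; the window size and per-event feature count are assumed values, not Pulse's actual configuration:

```python
WINDOW = 8      # last N events kept in the window (assumed)
FEATURES = 6    # feature dimension per SignalEvent (assumed)

def make_input(events: list[list[float]]) -> list[float]:
    """Flatten the most recent WINDOW event vectors; zero-pad if fewer exist."""
    recent = events[-WINDOW:]
    flat = [x for vec in recent for x in vec]
    padding = [0.0] * (WINDOW * FEATURES - len(flat))
    return padding + flat  # pad at the front so the latest events stay last
```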
Why LSTM over transformers? LSTMs are designed for streaming time-series data, run efficiently on CPU, and handle variable-length sequences naturally. Transformers require fixed attention windows and are expensive for continuous inference.
### Cold Start: Module Fingerprints
The challenge: a neural network is useless before it has training data. Pulse solves this with module fingerprints — JSON structures that modules provide at registration, describing which signals are relevant to them. The fingerprint seeds the model’s initial weights:

- Relevant features (e.g., `.pdf` files in Downloads) → weights scaled up (2.0x)
- Irrelevant features (e.g., `.mp3` files) → weights scaled down (0.1x)
- Neutral features → weights unchanged (1.0x)
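A sketch of how a fingerprint could drive this seeding, using the scale factors above; the fingerprint's JSON shape and the `seed_scale` helper are illustrative assumptions:

```python
# Hypothetical fingerprint a module might provide at registration.
fingerprint = {
    "cluster": "academic",
    "relevant": [".pdf", ".docx"],    # e.g. homework file types
    "irrelevant": [".mp3"],
}

SCALE_UP = 2.0    # relevant features
SCALE_DOWN = 0.1  # irrelevant features

def seed_scale(feature: str, fp: dict) -> float:
    """Return the initial weight multiplier for one feature."""
    if feature in fp["relevant"]:
        return SCALE_UP
    if feature in fp["irrelevant"]:
        return SCALE_DOWN
    return 1.0  # neutral features are left unchanged
```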
### Online Learning
After each agent activation, Pulse receives a training label:

- Implicit positive: Agent was activated and took action (wrote memory, ran a tool)
- Implicit negative: Agent was activated but did nothing
- Explicit override: User responds to a “was this useful?” prompt (overrides the implicit label)
Over time, each module’s model learns patterns such as:

- A student’s homework files tend to appear in Downloads between 8pm and 11pm on weekdays
- `.pptx` files in Documents on Monday mornings are usually relevant
- Time ticks at 9am on Sundays often precede relevant memory namespace updates
### Cluster Assignment
Modules are assigned to clusters based on the `cluster` field declared in their fingerprint. Multiple modules can share a cluster:

- `homework-agent` and `notes-agent` both belong to cluster `academic`
- `email-agent` and `calendar-agent` both belong to cluster `communication`
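Grouping registered modules by their declared cluster is a one-pass operation; this sketch reuses the module names above, with an assumed registry shape:

```python
from collections import defaultdict

# Hypothetical registry: module name -> fingerprint fragment.
modules = {
    "homework-agent": {"cluster": "academic"},
    "notes-agent": {"cluster": "academic"},
    "email-agent": {"cluster": "communication"},
    "calendar-agent": {"cluster": "communication"},
}

clusters: dict[str, list[str]] = defaultdict(list)
for name, fp in modules.items():
    clusters[fp["cluster"]].append(name)
```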
## Layer 3: Prefrontal Filter
### Responsibility
The Prefrontal Filter takes a `RelevanceScore` above threshold and produces a `ScopedQuestion` — a specific, focused question for the agent.
This is the critical step that prevents the agent from waking with a blank slate.
### How It Works
1. Look up the cluster’s registered modules and their question templates
2. Identify which module’s fingerprint best matches the triggering events (rule-based matching)
3. Interpolate the question template with specific signal details
4. Emit a `ScopedQuestion` to the kernel signal bus
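Step 3 can be sketched with `str.format`; the `interpolate` helper is illustrative, and returning an empty string on failure matches the gating behavior described later in this section:

```python
def interpolate(template: str, **signal_details: str) -> str:
    """Fill a question template with signal details; empty string on failure."""
    try:
        return template.format(**signal_details)
    except (KeyError, IndexError):
        return ""  # missing placeholder data: the question will not escalate

question = interpolate(
    "A new file appeared at {location}. Is this file related to a course "
    "assignment or homework?",
    location="/home/user/Downloads/hw3.pdf",
)
```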
### Example Flow
**Template interpolation**

Template: `"A new file appeared at {location}. Is this file related to a course assignment or homework?"`

Result: `"A new file appeared at /home/user/Downloads/hw3.pdf. Is this file related to a course assignment or homework?"`

### Fallback: Ambiguous Cases
If Layer 3 cannot identify a single best-matching module (e.g., two modules in the cluster match equally well), it forms a broader question instead.

### Gating Logic
Layer 3 applies final gating before escalation. It returns `should_escalate=False` if:
- Score is below threshold
- Question template is missing or empty
- Template substitution produces an empty string
- Template substitution raises any exception
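The four conditions above can be collapsed into one function; every failure path returns `False` with no question, which is what makes the invariant below hold. Names and signature here are illustrative:

```python
def gate(score: float, template: "str | None", threshold: float,
         signal: dict) -> "tuple[bool, str | None]":
    """Final gating: returns (should_escalate, question)."""
    if score < threshold:
        return False, None          # score below threshold
    if not template:
        return False, None          # template missing or empty
    try:
        question = template.format(**signal)
    except Exception:
        return False, None          # substitution raised
    if not question:
        return False, None          # substitution produced an empty string
    return True, question
```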
Invariant: `should_escalate` is `True` only when `question` is a non-empty string.

## Why Three Layers?
The three-layer separation is deliberate:

- Retina (Layer 1) does the continuous, cheap work → always running
- Limbic (Layer 2) does the pattern recognition → only when deltas occur
- Prefrontal (Layer 3) does the question formation → only when patterns match
This is why Pulse can run continuously without waking the agent for:

- Filesystem events for irrelevant file types
- Time ticks that don’t match learned patterns
- Memory updates in namespaces it doesn’t care about
## Next Steps
- **Signal Perception**: Learn how signals flow through all three layers
- **Module Registration**: Write fingerprints and register your modules