What it does

Ingestion

Automatically ingests agent replies, tool calls, and observations into Membrane via after_agent_reply and after_tool_call hooks.

Memory search

Exposes the membrane_search tool so agents can query episodic memory with natural language.

Auto-context

Injects relevant memories into the agent’s context before each turn via the before_agent_start hook — no explicit tool calls required.

Status command

The /membrane command reports connection status and live memory stats.

Prerequisites

  • A running Membrane instance (membraned daemon)
  • OpenClaw v0.10+

Installation

1. Install from npm

In your OpenClaw extensions directory:
npm install @vainplex/openclaw-membrane
The brainplex init command auto-detects and configures all plugins.
2. Configure the plugin

Add the plugin entry to your openclaw.yaml under plugins.entries:
openclaw.yaml
plugins:
  entries:
    openclaw-membrane:
      enabled: true
      config:
        grpc_endpoint: "localhost:4222"
        default_sensitivity: "low"
        auto_context: true
        context_limit: 5
        min_salience: 0.3
        context_types: ["episodic", "semantic", "competence"]
3. Start the Membrane daemon

Make sure membraned is running and reachable at the configured grpc_endpoint:
./bin/membraned

Configuration reference

All options live under plugins.entries.openclaw-membrane.config in openclaw.yaml.
grpc_endpoint (string)
Membrane gRPC address. Defaults to "localhost:4222".

default_sensitivity (string)
Sensitivity level applied to all ingested events. One of "public", "low", "medium", "high", "hyper". Defaults to "low".

auto_context (boolean)
When true, injects relevant memories into the agent's context before each turn via the before_agent_start hook. Defaults to true.

context_limit (integer)
Maximum number of memories to inject as context. Minimum 1. Defaults to 5.

min_salience (number)
Minimum salience score (0–1) for retrieval during context injection and search. Defaults to 0.3.

context_types (array)
Memory types to include in context injection. Valid values: "episodic", "working", "semantic", "competence", "plan_graph". Defaults to ["episodic", "semantic", "competence"].

Configuration summary table

| Option | Default | Description |
| --- | --- | --- |
| grpc_endpoint | localhost:4222 | Membrane gRPC address |
| default_sensitivity | low | Sensitivity for ingested events: public, low, medium, high, hyper |
| auto_context | true | Auto-inject memories before each agent turn |
| context_limit | 5 | Max memories to inject |
| min_salience | 0.3 | Minimum salience score for retrieval |
| context_types | ["episodic", "semantic", "competence"] | Memory types: episodic, working, semantic, competence, plan_graph |
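The defaults and constraints above can be sketched as a small config resolver. This is an illustrative sketch, not the plugin's actual internals: the `MembraneConfig` type and `resolveConfig` helper are assumptions; only the option names, defaults, and documented ranges come from this page.

```typescript
// Hypothetical sketch: merge user config over documented defaults and
// enforce the documented constraints (context_limit >= 1, min_salience 0-1).
type Sensitivity = "public" | "low" | "medium" | "high" | "hyper";
type MemoryType = "episodic" | "working" | "semantic" | "competence" | "plan_graph";

interface MembraneConfig {
  grpc_endpoint: string;
  default_sensitivity: Sensitivity;
  auto_context: boolean;
  context_limit: number;
  min_salience: number;
  context_types: MemoryType[];
}

const DEFAULTS: MembraneConfig = {
  grpc_endpoint: "localhost:4222",
  default_sensitivity: "low",
  auto_context: true,
  context_limit: 5,
  min_salience: 0.3,
  context_types: ["episodic", "semantic", "competence"],
};

function resolveConfig(partial: Partial<MembraneConfig>): MembraneConfig {
  const cfg = { ...DEFAULTS, ...partial };
  if (!Number.isInteger(cfg.context_limit) || cfg.context_limit < 1) {
    throw new Error("context_limit must be an integer >= 1");
  }
  if (cfg.min_salience < 0 || cfg.min_salience > 1) {
    throw new Error("min_salience must be between 0 and 1");
  }
  return cfg;
}
```

Omitted options fall back to the documented defaults, so a minimal config only needs the values you want to override.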

membrane_search tool

The plugin registers the membrane_search tool, which agents can call to query episodic memory:
membrane_search("what happened in yesterday's meeting", { limit: 10 })

Parameters

query (string, required)
Natural language query to search memories.

limit (integer)
Maximum results to return. Defaults to the configured context_limit (5).

memory_types (array)
Filter by memory type: "episodic", "working", "semantic", "competence", "plan_graph".

min_salience (number)
Minimum salience score (0–1). Defaults to the configured min_salience (0.3).
// Search with type filter
membrane_search("auth middleware patterns", {
  memory_types: ["competence", "semantic"],
  limit: 5
})

// Search with salience filter
membrane_search("recent deploy failures", {
  memory_types: ["episodic"],
  min_salience: 0.6,
  limit: 10
})

Auto-context

When auto_context: true (the default), the plugin hooks into before_agent_start to retrieve and inject relevant memories before each agent turn. Agents get awareness of past interactions without explicit tool calls. The injected context looks like:
Episodic memory from Membrane:
1. [episodic] Agent reply: Refactored the auth middleware to use...
2. [semantic] User prefers TypeScript for new services
3. [competence] To fix linker cache error: clear cache, rebuild with flags
Context injection uses the context_types and min_salience config values. Set auto_context: false to disable.
Increase context_limit for long-running sessions with deep history, or lower min_salience to surface less-reinforced memories.
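The filtering and formatting behavior described above can be sketched as follows. The `Memory` shape and `formatContext` helper are assumptions for illustration; only the numbered `[type] content` output format, the salience threshold, and the limit come from this page.

```typescript
// Illustrative sketch: render retrieved memories into an injected context
// block like the sample above. Not the plugin's actual implementation.
interface Memory {
  type: string;      // e.g. "episodic", "semantic", "competence"
  content: string;
  salience: number;  // 0..1
}

function formatContext(memories: Memory[], minSalience: number, limit: number): string {
  const lines = memories
    .filter((m) => m.salience >= minSalience)   // drop below min_salience
    .sort((a, b) => b.salience - a.salience)    // most salient first (assumed ordering)
    .slice(0, limit)                            // cap at context_limit
    .map((m, i) => `${i + 1}. [${m.type}] ${m.content}`);
  return ["Episodic memory from Membrane:", ...lines].join("\n");
}
```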

/membrane command

Check connection status and memory stats at any time:
/membrane
→ Membrane: connected (localhost:4222) | 1,247 records | 3 memory types
If the daemon is unreachable, the command reports the disconnected state without crashing the agent.

Ingestion behavior

The plugin maps OpenClaw hooks to Membrane ingestion methods:
| Hook | Membrane method | Condition |
| --- | --- | --- |
| after_tool_call | ingestToolOutput | When toolName is present |
| after_agent_reply | ingestEvent | Always |
| Other hooks | ingestObservation | Fallback |
Tags are automatically built from the event: hook:<name>, agent:<id>, tool:<name>, session:<key>. Ingestion failures are logged as warnings and do not interrupt the agent.
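The tag-building rule above can be sketched as a small helper. The event shape (`HookEvent` and its field names) is an assumption for illustration; only the `key:<value>` tag format and the conditional `tool:` tag come from this page.

```typescript
// Hypothetical sketch of the documented tag rule: hook:<name>, agent:<id>,
// session:<key> always; tool:<name> only when a tool name is present.
interface HookEvent {
  hook: string;
  agentId: string;
  sessionKey: string;
  toolName?: string;
}

function buildTags(ev: HookEvent): string[] {
  const tags = [`hook:${ev.hook}`, `agent:${ev.agentId}`, `session:${ev.sessionKey}`];
  if (ev.toolName) tags.push(`tool:${ev.toolName}`);
  return tags;
}
```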

Architecture

OpenClaw Agent

     ├── after_agent_reply ──→ ingestEvent()
     ├── after_tool_call ────→ ingestToolOutput()
     ├── before_agent_start ─→ retrieve() → inject context

     └── membrane_search ───→ retrieve() → return results


                          Membrane (gRPC)
                          ┌──────────────┐
                          │  membraned   │
                          │  SQLCipher   │
                          │  Embeddings  │
                          └──────────────┘

Plugin metadata

| Field | Value |
| --- | --- |
| Package | @vainplex/openclaw-membrane |
| Plugin ID | openclaw-membrane |
| Version | 0.4.0 |
| Kind | memory |
| Hooks | after_agent_reply, after_tool_call, before_agent_start |
| Tools | membrane_search |
| Commands | /membrane |