The Core Problem
Most AI agent frameworks run everything in a single session. One LLM thread handles:
- Conversation with the user
- Thinking and planning
- Tool execution
- Memory retrieval
- Context compaction
Five Process Types
Process Comparison
| Process | Type | Tools | Context | Lifecycle |
|---|---|---|---|---|
| Channel | LLM | reply, branch, spawn_worker, route, cancel, skip, react | Conversation history + compaction summaries + status block | Persistent |
| Branch | LLM | memory_recall, memory_save, channel_recall, spawn_worker | Clone of channel’s history at fork time | Short-lived |
| Worker | Pluggable | shell, file, exec, browser, set_status | Fresh prompt + task description | Fire-and-forget or interactive |
| Compactor | Programmatic | Monitor context, trigger workers | N/A | Persistent |
| Cortex | LLM + Programmatic | memory_recall, memory_save, system monitoring | Entire agent scope | Persistent |
Message Flow
Here’s how a typical user message flows through the system:

User sends message
The message arrives via a messaging adapter (Discord, Slack, Telegram, etc.) and routes to the channel.
Channel receives it
The channel decides what to do. It doesn’t execute tasks directly or search memories itself.
Channel branches to think
The channel creates a branch: a fork of its full conversation context. The branch operates independently; the channel can respond to other messages while the branch thinks.
Branch recalls memories
The branch uses memory_recall to search the memory graph. Hybrid search combines:
- Vector similarity (embeddings via HNSW)
- Full-text search (Tantivy)
- Reciprocal Rank Fusion (RRF) for merging
- Graph traversal for related memories
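To make the merging step concrete, here is a minimal Reciprocal Rank Fusion sketch in plain Rust. The constant k = 60 comes from the original RRF paper; whether spacebot uses the same value, or this exact shape, is an assumption:

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: each document scores the sum of
/// 1 / (k + rank) across every ranked list it appears in.
fn rrf_merge(lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in lists {
        for (rank, id) in list.iter().enumerate() {
            // Ranks are 1-based in the RRF formula.
            *scores.entry(id.to_string()).or_insert(0.0) += 1.0 / (k + (rank + 1) as f64);
        }
    }
    let mut merged: Vec<_> = scores.into_iter().collect();
    merged.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    merged
}

fn main() {
    // One ranked list from vector search, one from full-text search.
    let vector = vec!["m1", "m2", "m3"];
    let fts = vec!["m2", "m4"];
    // "m2" wins: it appears near the top of both lists.
    println!("{:?}", rrf_merge(&[vector, fts], 60.0));
}
```

RRF needs no score calibration between the two retrievers, which is why it is a common choice for fusing vector and full-text results.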
Branch might spawn a worker
If the user’s request needs heavy lifting (code a feature, research a topic, browse the web), the branch spawns a worker. Workers get:
- A fresh prompt
- A specific task description
- Task-appropriate tools (shell, file, exec, browser)
- No channel context, no soul, no personality
Branch returns conclusion
The branch synthesizes its findings into a clean conclusion and returns it to the channel. The branch is then deleted.
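The fork-then-discard lifecycle can be sketched like this; the type and function names are illustrative, not spacebot's actual API:

```rust
#[derive(Clone, Debug)]
struct Message {
    role: String,
    content: String,
}

/// Fork the channel's history, think in private, return only a
/// conclusion. The scratch context never reaches the channel.
fn run_branch(channel_history: &[Message], think: impl Fn(&mut Vec<Message>) -> String) -> String {
    // Fork: the branch works on a snapshot, so the channel stays free.
    let mut forked = channel_history.to_vec();
    let conclusion = think(&mut forked);
    // `forked` is dropped here: the branch's intermediate reasoning is
    // deleted, and only the synthesized conclusion returns.
    conclusion
}

fn main() {
    let history = vec![Message { role: "user".into(), content: "refactor the parser".into() }];
    let conclusion = run_branch(&history, |ctx| {
        ctx.push(Message { role: "assistant".into(), content: "exploring options...".into() });
        format!("plan derived from {} messages", ctx.len())
    });
    println!("{conclusion}");
}
```

The channel's own history is untouched by anything the branch does, which is what lets the channel keep serving other messages concurrently.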
Concurrent Operations
Channels never block. Multiple processes run simultaneously.

Process Construction
Every LLM process is a RigAgent<SpacebotModel, SpacebotHook>. They differ in system prompt, tools, history, and hooks.
Channel
Vec<Message> stored in SQLite and passed on each call.
Branch
ToolServer with memory_save and memory_recall. This keeps memory tools off the channel’s tool list entirely.
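One way to picture the per-process tool separation, using the tool names from the comparison table above (the routing function itself is hypothetical; the real code registers Rig tools on a ToolServer):

```rust
/// Each process type gets its own tool list; the channel never
/// sees memory tools, so it cannot bypass the branch.
fn tools_for(process: &str) -> Vec<&'static str> {
    match process {
        "channel" => vec!["reply", "branch", "spawn_worker", "route", "cancel", "skip", "react"],
        "branch" => vec!["memory_recall", "memory_save", "channel_recall", "spawn_worker"],
        "worker" => vec!["shell", "file", "exec", "browser", "set_status"],
        _ => vec![],
    }
}

fn main() {
    // Memory tools are reachable only from a branch.
    assert!(!tools_for("channel").contains(&"memory_recall"));
    assert!(tools_for("branch").contains(&"memory_recall"));
    println!("channel tool list excludes memory tools");
}
```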
Worker
Workers come in two flavors:
- Fire-and-forget: does a job and returns a result. Summarization, file operations, one-shot tasks.
- Interactive: long-running; accepts follow-up input from the channel. Coding sessions, multi-step tasks.

Status Injection
Every turn, the channel gets a live status block injected into its context. Workers report their progress through the set_status tool.
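A rough sketch of rendering that block; the field names and format here are assumptions, not spacebot's actual output:

```rust
/// One entry per live worker, holding its latest set_status value.
struct WorkerStatus {
    id: u32,
    status: String,
}

/// Render the status block that gets prepended to the channel's
/// context on every turn.
fn render_status_block(workers: &[WorkerStatus]) -> String {
    let mut block = String::from("[active processes]\n");
    for w in workers {
        block.push_str(&format!("worker {}: {}\n", w.id, w.status));
    }
    block
}

fn main() {
    let workers = vec![WorkerStatus { id: 7, status: "cloning repo".into() }];
    print!("{}", render_status_block(&workers));
}
```

Because the block is rebuilt each turn, the channel always sees current worker state without holding any long-lived references to the workers themselves.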
Context Management
The compactor watches each channel’s context size and triggers compaction before the channel fills up. See Context Compaction for details.

Memory Bulletin
The cortex generates a memory bulletin: a periodically refreshed, LLM-curated summary of the agent’s knowledge. Every channel reads this on every turn via ArcSwap. See Cortex System for details.
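The read-mostly sharing pattern can be sketched with std's RwLock standing in for the arc-swap crate (the real code uses ArcSwap, which makes the read side lock-free; the shape is the same):

```rust
use std::sync::{Arc, RwLock};

/// Cortex-owned slot holding the latest bulletin; channels only read.
struct MemoryBulletin {
    latest: RwLock<Arc<String>>,
}

impl MemoryBulletin {
    fn new(text: &str) -> Self {
        Self { latest: RwLock::new(Arc::new(text.to_string())) }
    }

    /// Cortex side: swap in a freshly curated summary.
    fn publish(&self, text: &str) {
        *self.latest.write().unwrap() = Arc::new(text.to_string());
    }

    /// Channel side: called every turn; cloning the Arc is cheap and
    /// the returned snapshot stays valid even if a publish races it.
    fn read(&self) -> Arc<String> {
        Arc::clone(&self.latest.read().unwrap())
    }
}

fn main() {
    let bulletin = MemoryBulletin::new("no knowledge yet");
    bulletin.publish("user prefers Rust; project uses Tokio");
    println!("{}", bulletin.read());
}
```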
Process Communication
Processes communicate via a broadcast channel.

Tech Stack
| Layer | Technology |
|---|---|
| Language | Rust (edition 2024) |
| Async runtime | Tokio |
| LLM framework | Rig v0.30.0 |
| Relational data | SQLite (sqlx) |
| Vector + FTS | LanceDB |
| Key-value | redb |
| Embeddings | FastEmbed |
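The broadcast channel from Process Communication can be sketched with std's mpsc channels standing in for tokio::sync::broadcast; the event variants here are hypothetical, but the shape is the same: every subscriber sees every event.

```rust
use std::sync::mpsc;

/// Hypothetical event type; spacebot's actual variants will differ.
#[derive(Clone, Debug, PartialEq)]
enum Event {
    WorkerFinished { worker_id: u32 },
    CompactionNeeded { channel_id: u32 },
}

/// Fan-out bus: one sender per subscriber, cloned on each broadcast.
struct Bus {
    subscribers: Vec<mpsc::Sender<Event>>,
}

impl Bus {
    fn new() -> Self {
        Self { subscribers: Vec::new() }
    }

    fn subscribe(&mut self) -> mpsc::Receiver<Event> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.push(tx);
        rx
    }

    fn broadcast(&self, event: Event) {
        for tx in &self.subscribers {
            // Ignore subscribers that have gone away.
            let _ = tx.send(event.clone());
        }
    }
}

fn main() {
    let mut bus = Bus::new();
    let channel_rx = bus.subscribe();
    let compactor_rx = bus.subscribe();
    bus.broadcast(Event::WorkerFinished { worker_id: 3 });
    // Both processes observe the same event independently.
    assert_eq!(channel_rx.recv().unwrap(), compactor_rx.recv().unwrap());
    println!("both subscribers saw the event");
}
```

tokio's broadcast channel adds bounded buffering and lag detection on top of this basic shape, which matters when a slow subscriber falls behind.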