User-facing conversation processes that delegate everything
Channels are the user-facing LLM processes. One channel per conversation (Discord thread, Slack channel, Telegram DM, etc.). Channels talk to users. They delegate everything else.
When a channel needs to think, search memories, or make a decision, it branches:
```rust
// From src/agent/channel.rs
let branch_history = channel_history.clone();
let branch = Branch::new(
    channel_id.clone(),
    description,
    deps.clone(),
    system_prompt,
    branch_history,
    tool_server,
    max_turns,
);
tokio::spawn(async move {
    let conclusion = branch.run(prompt).await?;
    // Send conclusion back to channel via event
});
```
The branch gets a clone of the channel’s full conversation history. Same context, same understanding. It operates independently — the channel can respond to other messages while the branch thinks.
Creating a branch is literally channel_history.clone(). Branches are git branches for conversations.
Multiple branches can run concurrently per channel:
```
User A: "what do you know about X?" → Channel branches (branch-1)
User B: "hey, can you help with Y?" → Channel branches (branch-2)
User C: "how's it going?" → Channel responds directly:
    "Going well! Thinking about X and Y for A and B."
Branch-1 resolves: "Here's what I found about X..." → Channel incorporates result
Branch-2 resolves: "For Y, you should..."           → Channel incorporates result
Next user message triggers channel to synthesize both results
```
First done, first incorporated. Configurable limit (default 3 concurrent branches).
Long-running tasks hand off to workers, and the channel routes follow-up messages to a worker while it's active:

```
User: "refactor the auth module"
    → Branch spawns interactive coding worker
    → Branch returns: "Started a coding session for the auth refactor"
User: "actually, update the tests too"
    → Channel routes message to active worker
    → Worker receives follow-up, continues with its existing context
```
In fast-moving channels (Discord servers, Slack workspaces), messages arrive in rapid-fire bursts. Spacebot coalesces them:
```
User A: "hey"
User B: "what's up"
User C: "anyone know about X?"
[all within 500ms]
→ Channel receives one turn with all three messages
→ LLM picks the most interesting thing to engage with
→ Or stays quiet if there's nothing to add
```
Channels handle errors gracefully:

- **LLM errors**: logged; the user sees a friendly error message
- **Branch errors**: the branch returns a partial result or error description, and the channel incorporates it
- **Worker errors**: the worker's status updates to "failed", and the result includes error details
- **Context overflow**: the compactor has already handled this (it should never reach the channel)