Channels are the user-facing LLM processes, one per conversation (Discord thread, Slack channel, Telegram DM, etc.). Channels talk to users; they delegate everything else.

What Channels Do

A channel:
  • Receives messages from users
  • Maintains conversation personality and context
  • Responds to simple questions directly
  • Delegates complex work to branches and workers
  • Routes follow-up messages to active workers
  • Displays live status of ongoing work
A channel does NOT:
  • Execute tasks directly
  • Search memories itself
  • Do heavy tool work
  • Wait for branches or workers to complete
  • Block on compaction
The channel is always responsive — never blocked by work, never frozen by compaction.

Tools Available to Channels

Channels have a focused set of tools for managing user interaction:
  • reply — Send a message to the user
  • branch — Fork context and think independently
  • spawn_worker — Create a worker to do a task
  • route — Send a follow-up to an active worker
  • cancel — Cancel a worker or branch
  • skip — Opt out of responding
  • react — Add an emoji reaction to a message
Channels do not have:
  • memory_recall — delegated to branches
  • memory_save — delegated to branches
  • shell, file, exec, browser — delegated to workers
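Each of these tools is invoked as a name plus structured input, in the same JSON shape used for spawn_worker and route later in this page. As an illustration, a reply call might look like this (the exact parameter name is an assumption, not confirmed by the source):

```json
{
  "name": "reply",
  "input": {
    "message": "On it! Give me a moment to look into that."
  }
}
```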

Channel Context

Every turn, the channel’s context includes:
1. System prompt

Loaded from prompts/CHANNEL.md. Contains personality, identity, and behavioral guidelines.
2. Identity files

  • SOUL.md — agent personality and core values
  • IDENTITY.md — agent capabilities and knowledge domains
  • USER.md — information about the user
3. Memory bulletin

A periodically refreshed summary of the agent’s knowledge, generated by the cortex.
// From src/agent/cortex.rs
let bulletin = runtime_config.memory_bulletin.load();
Every channel reads this via ArcSwap — no database queries on the hot path.
4. Status block

Live status of active branches, workers, and recently completed work.
// From src/agent/status.rs
pub struct StatusBlock {
    pub active_branches: Vec<BranchStatus>,
    pub active_workers: Vec<WorkerStatus>,
    pub completed_items: Vec<CompletedItem>,
}
5. Conversation history

Persistent message history stored in SQLite, loaded and passed to Rig on each turn.
// From src/agent/channel.rs
let response = agent.prompt(&user_message)
    .with_history(&mut history)
    .await?;
6. Compaction summaries

When context fills up, old messages are summarized and replaced with summaries. These stack chronologically at the top of history.
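The memory bulletin read described above can be approximated with the standard library alone. This is a sketch of the ArcSwap pattern, not the actual spacebot API: the real code uses the `arc_swap` crate, and the names here are illustrative. The point is that readers pay for a pointer clone, never a database query.

```rust
use std::sync::{Arc, RwLock};

// Stdlib approximation of the ArcSwap pattern used for the memory bulletin.
// Readers grab a cheap pointer clone; the cortex publishes a fresh snapshot
// without blocking readers for more than the lock handoff.
pub struct RuntimeConfig {
    memory_bulletin: RwLock<Arc<String>>,
}

impl RuntimeConfig {
    pub fn new(initial: &str) -> Self {
        Self {
            memory_bulletin: RwLock::new(Arc::new(initial.to_string())),
        }
    }

    // Hot path: no database query, just a pointer clone.
    pub fn load_bulletin(&self) -> Arc<String> {
        Arc::clone(&self.memory_bulletin.read().unwrap())
    }

    // Cortex path: swap in a freshly generated bulletin.
    pub fn publish_bulletin(&self, fresh: String) {
        *self.memory_bulletin.write().unwrap() = Arc::new(fresh);
    }
}

fn main() {
    let config = RuntimeConfig::new("initial bulletin");
    println!("{}", config.load_bulletin());
    config.publish_bulletin("refreshed bulletin".to_string());
    println!("{}", config.load_bulletin());
}
```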

Branching for Thinking

When a channel needs to think, search memories, or make a decision, it branches:
// From src/agent/channel.rs
let branch_history = channel_history.clone();

let branch = Branch::new(
    channel_id.clone(),
    description,
    deps.clone(),
    system_prompt,
    branch_history,
    tool_server,
    max_turns,
);

tokio::spawn(async move {
    match branch.run(prompt).await {
        Ok(conclusion) => {
            // Send conclusion back to channel via event
        }
        Err(err) => {
            // Report the branch failure to the channel
        }
    }
});
The branch gets a clone of the channel’s full conversation history. Same context, same understanding. It operates independently — the channel can respond to other messages while the branch thinks.
Creating a branch is literally channel_history.clone(). Branches are git branches for conversations.

Concurrent Branches

Multiple branches can run concurrently per channel:
User A: "what do you know about X?"
    → Channel branches (branch-1)

User B: "hey, can you help with Y?"
    → Channel branches (branch-2)

User C: "how's it going?"
    → Channel responds directly: "Going well! Thinking about X and Y for A and B."

Branch-1 resolves: "Here's what I found about X..."
    → Channel incorporates result

Branch-2 resolves: "For Y, you should..."
    → Channel incorporates result

Next user message triggers channel to synthesize both results
First done, first incorporated. Configurable limit (default 3 concurrent branches).
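The concurrency cap and out-of-order completion above can be sketched in a few lines. The struct and method names here are hypothetical, not the actual spacebot API:

```rust
// Hypothetical sketch of the concurrent-branch cap. A new branch starts only
// while under the configured limit; completions free a slot in whatever
// order they resolve ("first done, first incorporated").
pub struct BranchTracker {
    active: Vec<String>,   // ids of in-flight branches
    max_concurrent: usize, // default 3, configurable
}

impl BranchTracker {
    pub fn new(max_concurrent: usize) -> Self {
        Self { active: Vec::new(), max_concurrent }
    }

    // Returns false when the channel is already at its branch limit.
    pub fn try_start(&mut self, id: &str) -> bool {
        if self.active.len() < self.max_concurrent {
            self.active.push(id.to_string());
            true
        } else {
            false
        }
    }

    // Completed branches free a slot immediately, regardless of start order.
    pub fn finish(&mut self, id: &str) {
        self.active.retain(|b| b != id);
    }
}

fn main() {
    let mut tracker = BranchTracker::new(3);
    assert!(tracker.try_start("branch-1"));
    assert!(tracker.try_start("branch-2"));
    assert!(tracker.try_start("branch-3"));
    assert!(!tracker.try_start("branch-4")); // over the limit
    tracker.finish("branch-2");              // frees a slot out of order
    assert!(tracker.try_start("branch-4"));
}
```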

Spawning Workers

When a channel (or branch) needs heavy lifting done, it spawns a worker:
{
  "name": "spawn_worker",
  "input": {
    "task": "Refactor the authentication module to use dependency injection",
    "worker_type": "interactive",
    "notify_on_complete": true
  }
}
Workers get:
  • A fresh prompt (no channel context)
  • Task-appropriate tools (shell, file, exec, browser)
  • Model routing based on task type (coding workers get stronger models)
See Multi-Agent System for worker details.

Routing to Workers

Interactive workers accept follow-up input:
User: "refactor the auth module"
    → Branch spawns interactive coding worker
    → Branch returns: "Started a coding session for the auth refactor"

User: "actually, update the tests too"
    → Channel routes message to active worker
    → Worker receives follow-up, continues with its existing context
The channel uses the route tool:
{
  "name": "route",
  "input": {
    "worker_id": "550e8400-e29b-41d4-a716-446655440000",
    "message": "actually, update the tests too"
  }
}

Status Awareness

Channels see live status updates from all their branches and workers:
// From src/agent/channel.rs
impl Channel {
    fn update_status_block(&mut self, event: &ProcessEvent) {
        self.status_block.update(event);
    }
}
Workers update their status via the set_status tool:
{
  "name": "set_status",
  "input": {
    "status": "analyzing codebase structure (45% complete)"
  }
}
The channel’s next turn includes this in its context, so it can tell the user what’s happening.

Message Coalescing

In fast-moving channels (Discord servers, Slack workspaces), messages arrive in rapid-fire bursts. Spacebot coalesces them:
User A: "hey"
User B: "what's up"
User C: "anyone know about X?"
[all within 500ms]

→ Channel receives one turn with all three messages
→ LLM picks the most interesting thing to engage with
→ Or stays quiet if there's nothing to add
Configurable debounce timing. DMs bypass coalescing automatically.
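The batching behavior can be sketched as a pure function over timestamped messages. This is an illustration of the coalescing idea, not the actual spacebot implementation; timestamps are milliseconds, and the 500ms window matches the example above:

```rust
// Illustrative coalescing: messages whose gap from the previous message is
// within the debounce window are batched into a single channel turn.
pub fn coalesce(messages: &[(u64, &str)], debounce_ms: u64) -> Vec<Vec<String>> {
    let mut turns: Vec<Vec<String>> = Vec::new();
    let mut last_at: Option<u64> = None;
    for &(at, text) in messages {
        match last_at {
            // Within the window: append to the current turn.
            Some(prev) if at.saturating_sub(prev) <= debounce_ms => {
                turns.last_mut().unwrap().push(text.to_string());
            }
            // First message, or window expired: start a new turn.
            _ => turns.push(vec![text.to_string()]),
        }
        last_at = Some(at);
    }
    turns
}

fn main() {
    let turns = coalesce(
        &[(0, "hey"), (200, "what's up"), (400, "anyone know about X?"), (5_000, "later")],
        500,
    );
    assert_eq!(turns.len(), 2); // one burst of three, then a separate turn
    assert_eq!(turns[0].len(), 3);
}
```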

Retrigger Debouncing

When a branch or worker completes, the channel gets retriggered to incorporate the result. Rapid completions are debounced:
// From src/agent/channel.rs
const RETRIGGER_DEBOUNCE_MS: u64 = 500;
const MAX_RETRIGGERS_PER_TURN: usize = 3;
This prevents retrigger cascades where each retrigger spawns more work, which completes and retriggers again.
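A minimal gate combining the two constants might look like the following. The function name and exact check are assumptions; the real logic in spacebot may differ:

```rust
const RETRIGGER_DEBOUNCE_MS: u64 = 500;
const MAX_RETRIGGERS_PER_TURN: usize = 3;

// Illustrative gate: a completion event retriggers the channel only if the
// debounce window has elapsed and the per-turn budget is not exhausted.
fn should_retrigger(ms_since_last: u64, retriggers_this_turn: usize) -> bool {
    ms_since_last >= RETRIGGER_DEBOUNCE_MS
        && retriggers_this_turn < MAX_RETRIGGERS_PER_TURN
}

fn main() {
    assert!(should_retrigger(600, 0));  // window elapsed, budget free
    assert!(!should_retrigger(100, 0)); // still inside the debounce window
    assert!(!should_retrigger(600, 3)); // per-turn budget exhausted
}
```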

Temporal Context

Channels inject temporal context on every turn:
// From src/agent/channel.rs
struct TemporalContext {
    now_utc: DateTime<Utc>,
    timezone: TemporalTimezone,
}

impl TemporalContext {
    fn current_time_line(&self) -> String {
        format!(
            "{}; UTC {}",
            self.format_timestamp(self.now_utc),
            self.now_utc.format("%Y-%m-%d %H:%M:%S UTC")
        )
    }
}
Resolves timezone from:
  1. User’s configured timezone
  2. Agent’s cron timezone
  3. System local time
This powers scheduling, time-aware responses, and cron job coordination.
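The three-level fallback above is a simple first-match chain. As a sketch (with IANA timezone names standing in for the real TemporalTimezone type):

```rust
// Hypothetical fallback chain mirroring the resolution order:
// user timezone, then cron timezone, then system local time.
fn resolve_timezone(
    user_tz: Option<&str>,  // 1. user's configured timezone
    cron_tz: Option<&str>,  // 2. agent's cron timezone
    system_tz: &str,        // 3. system local time
) -> String {
    user_tz.or(cron_tz).unwrap_or(system_tz).to_string()
}

fn main() {
    assert_eq!(
        resolve_timezone(Some("America/New_York"), Some("UTC"), "Europe/London"),
        "America/New_York"
    );
    assert_eq!(resolve_timezone(None, Some("UTC"), "Europe/London"), "UTC");
    assert_eq!(resolve_timezone(None, None, "Europe/London"), "Europe/London");
}
```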

Lifecycle

Channels are persistent. They exist as long as the conversation exists:
1. Channel creation

When the first message arrives for a conversation, a channel spawns.
2. Message handling

Each user message triggers one channel turn (unless coalesced).
3. Branch/worker management

The channel spawns branches and workers as needed, tracks their state, and incorporates results.
4. Context compaction

The compactor monitors context size and triggers background compaction. The channel never blocks.
5. Conversation history

Every turn’s messages are persisted to SQLite asynchronously (tokio::spawn).
Channels shut down gracefully on agent stop or when the messaging platform disconnects.

Error Handling

Channels handle errors gracefully:
  • LLM errors — logged; the user sees a friendly error message
  • Branch errors — the branch returns a partial result or an error description, which the channel incorporates
  • Worker errors — the worker's status updates to “failed” and the result includes error details
  • Context overflow — the compactor has already handled this (it should never reach the channel)

Configuration

Channels use process-type defaults from routing config:
[defaults.routing]
channel = "anthropic/claude-sonnet-4"

[defaults.routing.fallbacks]
"anthropic/claude-sonnet-4" = ["anthropic/claude-haiku-4.5"]
See Model Routing for details.

Max Turns

Channels run for a small number of turns per user message:
// From src/agent/channel.rs
agent.prompt(&user_message)
    .with_history(&mut history)
    .max_turns(5)  // Typically 1-3 turns in practice
    .await?
Most channel turns complete in 1-3 iterations. If the channel hits max turns, it likely delegated work and is waiting for results.
Channels should never need many turns. If a channel is hitting max_turns frequently, it’s trying to do work directly instead of delegating.

Next Steps

Branches

Learn how branches fork context and think independently

Workers

Understand how workers execute tasks

Status Block

See how channels track active work

Compaction

Explore how context is managed automatically
