Spacebot provides specialized tools to different process types. The tools themselves are organized by function rather than by consumer, but each agent process receives exactly the set it needs for its role.

Tool Server Topology

You configure which tools go to which processes via ToolServer factory functions. Each process type has its own tool set:

  • Channel Tools: User-facing conversation management
  • Branch Tools: Memory and research operations
  • Worker Tools: Task execution and file operations
  • Cortex Tools: System-level memory consolidation
Channel Tools

Channels get conversation management tools. These hold per-turn state and are added dynamically at the start of each conversation turn.

reply

Send a message to the user.
  • text (string, required): The message text to send
src/tools/reply.rs
pub struct ReplyTool {
    response_tx: mpsc::Sender<OutboundResponse>,
    conversation_id: String,
    conversation_logger: ConversationLogger,
    channel_id: ChannelId,
    replied_flag: RepliedFlag,
    agent_display_name: String,
}
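
A minimal reply call; the message text is illustrative:

```json
{
  "text": "Deploy finished. All checks passed."
}
```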

branch

Fork the channel’s context to think or research independently.
  • task (string, required): What you need the branch to figure out
  • max_turns (integer): Maximum thinking iterations (default: 10)
src/agent/branch.rs
// Branch gets full channel history at fork time
let branch_history = channel_history.clone();
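
A sketch of a branch call; the task text and turn count are illustrative:

```json
{
  "task": "Review the recent conversation and summarize open questions",
  "max_turns": 5
}
```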

spawn_worker

Create an independent worker to execute a task.
  • task (string, required): The work to delegate
  • mode (string): Either fire_and_forget (default) or interactive
  • timeout_seconds (integer): Maximum execution time (1-3600 seconds)
{
  "task": "Run the test suite and report failures",
  "mode": "fire_and_forget",
  "timeout_seconds": 300
}
Worker returns a result and terminates.

route

Send follow-up input to an active interactive worker.
  • worker_id (string, required): The worker UUID
  • message (string, required): Follow-up instruction or question
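
An illustrative route call; the worker_id here is a placeholder, not a real UUID:

```json
{
  "worker_id": "0b84c6e1-...",
  "message": "Also run the integration tests before reporting back"
}
```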

cancel

Stop a running worker or branch.
  • process_id (string, required): Worker ID or branch ID to cancel
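
For example, cancelling a running process by ID (placeholder value):

```json
{
  "process_id": "0b84c6e1-..."
}
```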

skip

Opt out of responding this turn (silent action).
{
  "reason": "User is thinking out loud, no response needed"
}

react

Add an emoji reaction instead of a text reply.
  • emoji (string, required): Emoji character or name (e.g., “👍” or “thumbs_up”)
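
A react call might look like:

```json
{
  "emoji": "thumbs_up"
}
```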

Branch Tools

Branches get memory and research tools. No channel context, just focused investigation.

memory_save

Write a structured memory to the graph.
  • content (string, required): The memory content
  • memory_type (string, required): One of: fact, preference, decision, identity, event, observation
  • importance (number): Decay resistance score (0.0-1.0, default: 0.5)
  • associations (array): Related memory IDs with relationship types
{
  "content": "User prefers dark mode for all applications",
  "memory_type": "preference",
  "importance": 0.8
}

memory_recall

Search the memory graph using hybrid vector + full-text search.
  • query (string, required): Search query
  • limit (integer): Max results to return (default: 20)
  • memory_types (array): Filter by types, e.g. ["fact", "decision"]
  • min_importance (number): Minimum importance threshold
src/memory/search.rs
// Hybrid search with RRF (Reciprocal Rank Fusion)
let results = memory_search.search(query, limit).await?;
// Vector similarity + full-text + graph traversal
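
An illustrative recall query combining a type filter with an importance floor:

```json
{
  "query": "user UI preferences",
  "limit": 10,
  "memory_types": ["preference"],
  "min_importance": 0.5
}
```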

memory_delete

Remove memories by ID or content match.
  • memory_ids (array): Specific memory IDs to delete
  • content_match (string): Delete memories containing this text
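
For example, deleting by content match (the match text is illustrative):

```json
{
  "content_match": "dark mode"
}
```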

channel_recall

Retrieve conversation history from other channels.
  • channel_id (string): Specific channel ID
  • limit (integer): Number of recent messages (default: 50)
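
An illustrative call; the channel ID is a placeholder:

```json
{
  "channel_id": "channel-42",
  "limit": 20
}
```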

task_create

Create a structured task with optional subtasks.
  • title (string, required): Short task title
  • description (string): Detailed description
  • priority (string): One of: low, medium, high, urgent
  • subtasks (array): Checklist items as strings
  • status (string): Initial status: backlog, todo, in_progress, done, cancelled
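
A sketch of a task_create call with subtasks (all values illustrative):

```json
{
  "title": "Ship v2 API docs",
  "description": "Write and publish reference docs for the v2 endpoints",
  "priority": "high",
  "subtasks": ["Draft endpoint reference", "Review examples", "Publish"],
  "status": "todo"
}
```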

task_list

Query tasks with filters.
  • status (string): Filter by status
  • priority (string): Filter by priority
  • limit (integer): Max results (default: 50)
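
For example, listing urgent work still in flight:

```json
{
  "status": "in_progress",
  "priority": "urgent",
  "limit": 10
}
```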

task_update

Update task fields or subtasks.
  • task_number (integer, required): Task number to update
  • status (string): New status
  • priority (string): New priority
  • subtask_updates (array): Mark subtasks as completed
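
An illustrative update; the shape of subtask_updates entries shown here is an assumption, not confirmed by the source:

```json
{
  "task_number": 7,
  "status": "in_progress",
  "subtask_updates": [{"index": 0, "completed": true}]
}
```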

Worker Tools

Workers get file and execution tools. Fresh prompt per worker—no channel history.

shell

Execute shell commands in a sandboxed environment.
  • command (string, required): Shell command to run (via sh -c on Unix, cmd /C on Windows)
  • working_dir (string): Working directory (must be within workspace)
  • timeout_seconds (integer): Command timeout (1-300 seconds, default: 60)
{
  "command": "cargo test --all",
  "working_dir": "projects/api",
  "timeout_seconds": 180
}
Output includes exit code, stdout, stderr, and a formatted summary.

file

Read, write, or list files. All paths restricted to workspace.
{
  "operation": "read",
  "path": "src/main.rs"
}
Returns file content, truncated if over 50KB.
Security: Symlinks are blocked. Paths outside the workspace return ACCESS DENIED with instructions—never fabricate file contents.
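
The write and list operations follow the same shape as the read example; the content field name here is an assumption inferred from that example:

```json
{
  "operation": "write",
  "path": "notes/summary.md",
  "content": "# Findings\n..."
}
```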

exec

Run subprocesses with full argument and environment control.
  • program (string, required): Binary name (e.g., cargo, python, node)
  • args (array): Command arguments
  • working_dir (string): Working directory
  • env (array): Environment variables as {"key": "NAME", "value": "VALUE"}
  • timeout_seconds (integer): Execution timeout (1-300 seconds, default: 60)
{
  "program": "python",
  "args": ["train.py", "--epochs", "10"],
  "working_dir": "ml",
  "env": [{"key": "CUDA_VISIBLE_DEVICES", "value": "0"}],
  "timeout_seconds": 600
}
Dangerous env vars (LD_PRELOAD, PYTHONPATH, NODE_OPTIONS, etc.) are blocked for security.

set_status

Update the worker’s visible status.
  • status (string, required): Status message (e.g., “running tests”, “compiling”, “analyzing logs”)
{"status": "installing dependencies"}
Status appears in the channel’s status block and worker inspection output.

MCP Tools

Model Context Protocol tools are dynamically loaded from configured MCP servers. Available to workers only.
agent.toml
[[mcp]]
name = "filesystem"
enabled = true

[mcp.transport]
type = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
MCP tools appear in the worker’s tool list with the server name prefix. See MCP Integration for details.

Tool Output Limits

Tool outputs are truncated to 50KB to fit within LLM context windows. If you hit truncation:
  • Use head/tail for specific sections
  • Pipe commands through filters
  • Read files with offset/limit parameters
Truncated output includes a notice with the original size and instructions for accessing the rest.
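
For instance, a shell call that filters output down before it hits the 50KB limit (command is illustrative):

```json
{
  "command": "cargo test --all 2>&1 | tail -n 100"
}
```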

Error-as-Result Pattern

Tool errors return as structured results, not exceptions. The LLM sees the error and can recover:
{
  "success": false,
  "error": "File not found: config.json",
  "path": "config.json"
}
This lets the agent try alternative approaches instead of crashing the process.

Adding Custom Tools

Implement the rig::tool::Tool trait:
use rig::tool::Tool;
use rig::completion::ToolDefinition;

#[derive(Debug, Clone)]
pub struct CustomTool {
    // your fields
}

impl Tool for CustomTool {
    const NAME: &'static str = "custom";
    type Error = CustomError;
    type Args = CustomArgs;
    type Output = CustomOutput;

    async fn definition(&self, _prompt: String) -> ToolDefinition {
        // JSON Schema for args
    }

    async fn call(&self, args: Self::Args) -> Result<Self::Output, Self::Error> {
        // implementation
    }
}
Register in the appropriate factory function (create_worker_tool_server, create_branch_tool_server, etc.).
