Spacebot
An AI agent for teams, communities, and multi-user environments. Thinks, executes, and responds — concurrently, not sequentially. Never blocks. Never forgets.

The Problem
Most AI agent frameworks run everything in a single session. One LLM thread handles conversation, thinking, tool execution, memory retrieval, and context compaction — all in one loop. When it’s doing work, it can’t talk to you. When it’s compacting, it goes dark. When it retrieves memories, raw results pollute the context with noise. Spacebot splits the monolith into specialized processes that each do one thing and delegate everything else.

Built for Teams and Communities
Most AI agents are built for one person in one conversation. Spacebot is built for many people working together — a Discord community with hundreds of active members, a Slack workspace with teams running parallel workstreams, a Telegram group coordinating across time zones.

A single-threaded agent breaks the moment two people talk at once. Spacebot’s delegation model means it can think about User A’s question, execute a task for User B, and respond to User C’s small talk — all at the same time, without any of them waiting on each other.
Deploy Your Way
spacebot.sh
One-click hosted deploy. Connect your platforms, configure your agent, done.
Self-hosted
Single Rust binary. No Docker, no server dependencies, no microservices. Clone, build, run.
Docker
Container image with everything included. Mount a volume for persistent data.
How It Works
Five process types. Each does one job.

Channels
The user-facing LLM process — the ambassador to the human. One per conversation (Discord thread, Slack channel, Telegram DM, etc.). Has soul, identity, and personality. Talks to the user. Delegates everything else. A channel does not execute tasks directly, search memories itself, or do any heavy tool work. It is always responsive — never blocked by work, never frozen by compaction. When it needs to think, it branches. When it needs work done, it spawns a worker.

Branches
A fork of the channel’s context that goes off to think. Has the channel’s full conversation history — same context, same memories, same understanding. Operates independently. The channel never sees the working, only the conclusion.

Workers
Independent processes that do jobs. Get a specific task, a focused system prompt, and task-appropriate tools. No channel context, no soul, no personality.
- Fire-and-forget — do a job and return a result. Summarization, file operations, one-shot tasks.
- Interactive — long-running, accept follow-up input from the channel. Coding sessions, multi-step tasks.

The Compactor
Not an LLM process. A programmatic monitor per channel that watches context size and triggers compaction before the channel fills up.

| Threshold | Action |
|---|---|
| >80% | Background compaction (summarize oldest 30%) |
| >85% | Aggressive compaction (summarize oldest 50%) |
| >95% | Emergency truncation (hard drop, no LLM) |
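The threshold ladder above can be expressed as a pure function over context usage. This is a simplified sketch — the enum and function names are illustrative, not Spacebot's actual API:

```rust
/// Hypothetical sketch of the compactor's threshold ladder.
#[derive(Debug, PartialEq)]
enum CompactionAction {
    None,
    Background { summarize_oldest_pct: u8 }, // summarize oldest 30%
    Aggressive { summarize_oldest_pct: u8 }, // summarize oldest 50%
    EmergencyTruncate,                       // hard drop, no LLM call
}

fn compaction_action(used_tokens: usize, max_tokens: usize) -> CompactionAction {
    let usage = used_tokens as f64 / max_tokens as f64;
    if usage > 0.95 {
        CompactionAction::EmergencyTruncate
    } else if usage > 0.85 {
        CompactionAction::Aggressive { summarize_oldest_pct: 50 }
    } else if usage > 0.80 {
        CompactionAction::Background { summarize_oldest_pct: 30 }
    } else {
        CompactionAction::None
    }
}

fn main() {
    assert_eq!(compaction_action(70_000, 100_000), CompactionAction::None);
    assert_eq!(
        compaction_action(82_000, 100_000),
        CompactionAction::Background { summarize_oldest_pct: 30 }
    );
    assert_eq!(compaction_action(96_000, 100_000), CompactionAction::EmergencyTruncate);
    println!("ok");
}
```

Because the monitor is programmatic rather than an LLM, this check is cheap enough to run on every turn.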
The Cortex
The agent’s inner monologue. The only process that sees across all channels, workers, and branches simultaneously. Generates a memory bulletin — a periodically refreshed, LLM-curated briefing of the agent’s knowledge injected into every conversation. Supervises running processes (kills hanging workers, cleans up stale branches). Maintains the memory graph (decay, pruning, merging near-duplicates, cross-channel consolidation).

Capabilities
Task Execution
Workers come loaded with tools for real work:
- Shell — run arbitrary commands with configurable timeouts
- File — read, write, and list files with auto-created directories
- Exec — run specific programs with arguments and environment variables
- OpenCode — spawn a full coding agent as a persistent worker
- Browser — headless Chrome automation with accessibility-tree navigation
- Brave Search — web search with freshness filters and localization
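The shell tool's "arbitrary commands with configurable timeouts" behavior can be sketched with the standard library alone (Spacebot itself runs on Tokio; this is an illustrative stand-in, and the function name is hypothetical):

```rust
use std::process::Command;
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Hypothetical sketch: run a shell command, giving up after `timeout`.
/// Note: a timed-out child is abandoned here, not killed — a real tool
/// would also terminate the process.
fn run_with_timeout(cmd: &str, timeout: Duration) -> Result<String, String> {
    let cmd = cmd.to_string();
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let out = Command::new("sh").arg("-c").arg(&cmd).output();
        let _ = tx.send(out);
    });
    match rx.recv_timeout(timeout) {
        Ok(Ok(out)) => Ok(String::from_utf8_lossy(&out.stdout).trim().to_string()),
        Ok(Err(e)) => Err(e.to_string()),
        Err(_) => Err("timed out".to_string()),
    }
}

fn main() {
    let out = run_with_timeout("echo hello", Duration::from_secs(5)).unwrap();
    assert_eq!(out, "hello");
    assert!(run_with_timeout("sleep 10", Duration::from_millis(200)).is_err());
    println!("ok");
}
```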
Messaging
Native adapters for Discord, Slack, Telegram, Twitch, and Webchat:
- Message coalescing — rapid-fire messages are batched into a single LLM turn
- File attachments — send and receive files, images, and documents
- Rich messages — embeds/cards, interactive buttons, select menus
- Threading — automatic thread creation for long conversations
- Reactions — emoji reactions on messages
- Per-channel permissions — guild, channel, and DM-level access control
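Message coalescing can be illustrated with a simple gap-based batching rule: start a new LLM turn whenever the gap since the previous message exceeds a window. The window value and function name below are illustrative assumptions, not Spacebot's actual implementation:

```rust
use std::time::Duration;

/// Hypothetical sketch: batch rapid-fire messages into single LLM turns.
/// Messages are (arrival-time, text) pairs; a new batch starts whenever
/// the gap since the previous message exceeds the coalescing window.
fn coalesce(messages: &[(Duration, &str)], window: Duration) -> Vec<Vec<String>> {
    let mut batches: Vec<Vec<String>> = Vec::new();
    let mut last: Option<Duration> = None;
    for &(at, text) in messages {
        let new_batch = match last {
            Some(prev) => at.saturating_sub(prev) > window,
            None => true,
        };
        if new_batch {
            batches.push(Vec::new());
        }
        batches.last_mut().unwrap().push(text.to_string());
        last = Some(at);
    }
    batches
}

fn main() {
    let msgs = [
        (Duration::from_millis(0), "hey"),
        (Duration::from_millis(300), "can you"),
        (Duration::from_millis(600), "check the build?"),
        (Duration::from_secs(10), "unrelated question"),
    ];
    let batches = coalesce(&msgs, Duration::from_secs(2));
    assert_eq!(batches.len(), 2); // first three messages coalesce into one turn
    assert_eq!(batches[0].len(), 3);
    println!("ok");
}
```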
Memory System
Not markdown files. Not unstructured blocks in a vector database. Spacebot’s memory is a typed, graph-connected knowledge system:
- Eight memory types — Fact, Preference, Decision, Identity, Event, Observation, Goal, Todo
- Graph edges — RelatedTo, Updates, Contradicts, CausedBy, PartOf
- Hybrid recall — vector similarity + full-text search merged via Reciprocal Rank Fusion
- Memory import — dump files into the ingest/ folder for automatic extraction
- Cross-channel recall — branches can read transcripts from other conversations
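The typed graph and hybrid recall can be sketched together. The enums below mirror the lists above, and the merge step is standard Reciprocal Rank Fusion; the function name and the `k = 60` constant are illustrative assumptions (60 is the value commonly used in the RRF literature), not Spacebot's actual code:

```rust
use std::collections::HashMap;

// The eight memory types and five edge kinds listed above.
#[allow(dead_code)]
enum MemoryType { Fact, Preference, Decision, Identity, Event, Observation, Goal, Todo }
#[allow(dead_code)]
enum MemoryEdge { RelatedTo, Updates, Contradicts, CausedBy, PartOf }

/// Reciprocal Rank Fusion: score(d) = Σ over result lists of 1 / (k + rank).
/// Documents that rank well in both the vector and full-text lists win.
fn rrf_merge(ranked_lists: &[Vec<&str>], k: f64) -> Vec<String> {
    let mut scores: HashMap<&str, f64> = HashMap::new();
    for list in ranked_lists {
        for (rank, id) in list.iter().enumerate() {
            *scores.entry(*id).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut merged: Vec<(&str, f64)> = scores.into_iter().collect();
    merged.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    merged.into_iter().map(|(id, _)| id.to_string()).collect()
}

fn main() {
    let vector_hits = vec!["m1", "m2", "m3"]; // vector-similarity ranking
    let fts_hits = vec!["m2", "m4", "m1"];    // full-text ranking
    let merged = rrf_merge(&[vector_hits, fts_hits], 60.0);
    // m1 and m2 appear in both lists, so they outrank m3 and m4.
    assert!(merged[..2].contains(&"m1".to_string()));
    assert!(merged[..2].contains(&"m2".to_string()));
    println!("ok");
}
```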
Model Routing
Four-level routing system that picks the right model for every LLM call:
- Process-type defaults — channels get the best conversational model
- Task-type overrides — coding workers upgrade to stronger models
- Prompt complexity scoring — lightweight scorer downgrades simple requests
- Fallback chains — 429/502 errors automatically fall through to the next model
- Per-agent routing profiles — eco, balanced, or premium presets
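The routing ladder can be sketched as a pair of pure functions. Everything here is an illustrative assumption — the model names, the word-count complexity proxy, and the function names are not Spacebot's actual configuration:

```rust
/// Hypothetical sketch of the routing levels.
struct Route {
    process_default: &'static str,       // level 1: per process type
    task_override: Option<&'static str>, // level 2: per task type
    fallbacks: Vec<&'static str>,        // tried in order on 429/502
}

/// Level 3: a lightweight complexity score downgrades trivial prompts.
/// Word count stands in for a real scorer.
fn pick_model(route: &Route, prompt: &str) -> &'static str {
    let complexity = prompt.split_whitespace().count();
    if complexity < 8 {
        return "small-model"; // illustrative cheap-model name
    }
    route.task_override.unwrap_or(route.process_default)
}

/// Level 4: walk the fallback chain past models that returned retryable errors.
fn with_fallback<'a>(primary: &'a str, fallbacks: &'a [&'a str], failed: &[&str]) -> Option<&'a str> {
    std::iter::once(primary)
        .chain(fallbacks.iter().copied())
        .find(|m| !failed.contains(m))
}

fn main() {
    let route = Route {
        process_default: "chat-model",
        task_override: Some("coding-model"),
        fallbacks: vec!["chat-model", "small-model"],
    };
    assert_eq!(pick_model(&route, "thanks!"), "small-model"); // simple → downgraded
    let long = "please refactor the worker supervision loop to restart crashed tasks cleanly";
    assert_eq!(pick_model(&route, long), "coding-model"); // task override wins
    // primary returned a 429 → the next model in the chain is tried
    assert_eq!(with_fallback("coding-model", &route.fallbacks, &["coding-model"]), Some("chat-model"));
    println!("ok");
}
```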
Tech Stack
| Layer | Technology |
|---|---|
| Language | Rust (edition 2024) |
| Async runtime | Tokio |
| LLM framework | Rig v0.30 — agentic loop, tool execution, hooks |
| Relational data | SQLite (sqlx) — conversations, memory graph, cron jobs |
| Vector + FTS | LanceDB — embeddings (HNSW), full-text (Tantivy), hybrid search (RRF) |
| Key-value | redb — settings, encrypted secrets |
| Embeddings | FastEmbed — local embedding generation |
| Crypto | AES-256-GCM — secret encryption at rest |
Get Started
Quickstart
Get Spacebot running locally in under 5 minutes
Docker Deployment
Run Spacebot in a container with slim or full image variants
Why Rust
Spacebot isn’t a chatbot — it’s an orchestration layer for autonomous AI processes running concurrently, sharing memory, and delegating to each other. That’s infrastructure, and infrastructure should be machine code. Rust’s strict type system and compiler mean there’s one correct way to express something. When multiple AI processes share mutable state and spawn tasks without human oversight, “the compiler won’t let you do that” is a feature. The result is a single binary with no runtime dependencies, no garbage-collector pauses, and predictable resource usage.

License: FSL-1.1-ALv2 — Functional Source License, converting to Apache 2.0 after two years.