Configuration Reference
Your `config.toml` file defines LLM providers, agent settings, messaging platform credentials, and bindings that route conversations to specific agents. Every section supports environment variable references via `env:VAR_NAME`.
File Location
Spacebot searches for `config.toml` in:

- Path specified by the `--config` flag
- Current working directory
- `~/.spacebot/config.toml`
- `$SPACEBOT_DIR/config.toml` (if `SPACEBOT_DIR` is set)

Set the `SPACEBOT_CONFIG_PATH` environment variable to specify the config path directly.
LLM Providers
Configure API keys for LLM providers. All keys are optional — configure only the providers you plan to use. Every key supports `env:VAR_NAME` references.

- Anthropic API key for Claude models.
- OpenAI API key for GPT models.
- OpenRouter API key for multi-provider access.
- Kilo Gateway API key.
- Z.ai (GLM) API key for GLM models.
- Groq API key for fast inference.
- Together AI API key.
- Fireworks AI API key.
- DeepSeek API key.
- xAI (Grok) API key.
- Mistral AI API key.
- Google Gemini API key.
- Optional API key for Ollama instances that require authentication.
- Base URL for the local Ollama instance. Defaults to `http://localhost:11434`.
- OpenCode Zen API key.
- OpenCode Go API key.
- NVIDIA API key for their model catalog.
- MiniMax API key (international endpoint).
- MiniMax API key (China endpoint).
- Moonshot AI (Kimi) API key.
- Z.ai Coding Plan API key for specialized coding models.

Custom Providers
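As a sketch of how this section might be laid out, the fragment below shows API keys resolved from the environment. The `[providers]` table name and the individual key names are assumptions for illustration, not confirmed names from this reference; only the `env:VAR_NAME` syntax and the Ollama default URL come from the descriptions above.

```toml
# Hypothetical [providers] layout — table and key names are assumptions.
[providers]
anthropic = "env:ANTHROPIC_API_KEY"   # resolved from the environment at load
openai = "env:OPENAI_API_KEY"
groq = "env:GROQ_API_KEY"
ollama_url = "http://localhost:11434" # documented default
```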
Add any OpenAI-compatible or Anthropic-compatible endpoint. Each custom provider defines:

- API compatibility mode. Options:
  - `openai_completions` — OpenAI `/v1/chat/completions` API
  - `openai_chat_completions` — OpenAI-compatible `/chat/completions` (no `/v1/` prefix)
  - `kilo_gateway` — Kilo Gateway API with required headers
  - `openai_responses` — OpenAI `/v1/responses` API
  - `anthropic` — Anthropic Messages API
  - `gemini` — Google Gemini API
- Base URL for the API endpoint (without trailing `/v1/chat/completions`).
- API key for authentication. Supports `env:VAR_NAME` references.
- Optional display name for the provider.
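A custom provider entry might look like the fragment below. The array name `[[providers.custom]]` and the key names (`kind`, `base_url`, `api_key`, `name`) are illustrative assumptions; the mode value comes from the options listed above.

```toml
# Hypothetical custom provider entry — table and key names are assumptions.
[[providers.custom]]
name = "my-gateway"                  # optional display name
kind = "openai_completions"          # one of the compatibility modes above
base_url = "https://llm.example.com" # no trailing /v1/chat/completions
api_key = "env:MY_GATEWAY_KEY"
```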
Defaults
Defaults are inherited by all agents unless overridden in the agent-specific configuration.

Routing
Model routing determines which LLM model handles each process type and task type.

- Default model for channel processes (user-facing conversations). Example: `anthropic/claude-sonnet-4`
- Default model for worker processes (task execution). Example: `anthropic/claude-haiku-4.5`
- Default model for branch processes (thinking/memory recall). Example: `anthropic/claude-sonnet-4`
- Default model for compaction workers (context summarization). Example: `anthropic/claude-haiku-4.5`
- Default model for cortex processes (memory bulletin, system observation). Example: `anthropic/claude-sonnet-4`
- Map task types to specific models. Supported task types:
  - `coding` — code writing and refactoring
  - `summarization` — context compaction
  - `memory_recall` — memory search and curation
  - `memory_save` — memory extraction and storage
  - `browser` — web browsing tasks
- Enable prompt complexity scoring to downgrade simple requests to cheaper models.
- Process types that use prompt complexity scoring. Example: `["channel", "branch"]`
- Fallback chains for model failures (429, 502). Map a primary model to an array of fallback models.
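Putting the routing fields above together, a sketch might look like this. The table paths and key names are assumptions; the model identifiers and task type names are taken from the examples above.

```toml
# Hypothetical routing layout — table and key names are assumptions.
[defaults.routing]
channel = "anthropic/claude-sonnet-4"
worker = "anthropic/claude-haiku-4.5"
branch = "anthropic/claude-sonnet-4"

# Per-task overrides using the documented task types.
[defaults.routing.tasks]
coding = "anthropic/claude-sonnet-4"
summarization = "anthropic/claude-haiku-4.5"

# Fallback chain: primary model mapped to an array of fallbacks.
[defaults.routing.fallbacks]
"anthropic/claude-sonnet-4" = ["anthropic/claude-haiku-4.5"]
```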
Concurrency
- Maximum number of branches that can run simultaneously per channel.
- Maximum number of workers that can run simultaneously per agent.
- Maximum LLM turns for channel processes before requiring user input.
- Maximum LLM turns for branch and worker processes.
- Context window size in tokens. Used for compaction threshold calculations.
Compaction
- Context utilization percentage that triggers background compaction (summarize oldest 30%).
- Context utilization percentage that triggers aggressive compaction (summarize oldest 50%).
- Context utilization percentage that triggers emergency truncation (hard drop, no LLM).
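The three compaction thresholds above form an escalation ladder, which a config fragment might express like this. Key names and the specific percentages are assumptions for illustration; only the three-tier behavior comes from the reference.

```toml
# Hypothetical compaction thresholds — key names and values are assumptions.
# Values are context-utilization percentages; each tier is more drastic.
[defaults.compaction]
background_threshold = 70  # summarize the oldest 30% in the background
aggressive_threshold = 85  # summarize the oldest 50%
emergency_threshold = 95   # hard truncation, no LLM call
```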
Memory Persistence
- Enable automatic memory persistence branches every N user messages.
- Number of user messages between automatic memory persistence branches.
Message Coalescing
- Enable message coalescing for rapid-fire messages.
- Initial debounce window after the first message (milliseconds).
- Maximum time to wait before flushing regardless (milliseconds).
- Minimum messages to trigger coalesce mode (1 = always debounce, 2 = only when a burst is detected).
- Apply only to multi-user conversations (skip for DMs).
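A coalescing block covering the fields above might look like the sketch below. All key names and values are assumptions; only the debounce/flush/burst semantics come from the descriptions.

```toml
# Hypothetical coalescing settings — key names and values are assumptions.
[defaults.coalesce]
enabled = true
debounce_ms = 1500     # initial window after the first message
max_wait_ms = 8000     # flush no matter what after this long
min_messages = 2       # only coalesce when a burst is detected
multi_user_only = true # skip coalescing in DMs
```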
Memory Ingestion
- Enable file-based memory ingestion from the agent workspace `ingest/` directory.
- How often to scan the ingest directory for new files (seconds).
- Target chunk size in characters when splitting ingested files.
Cortex
- Interval between cortex observation ticks (seconds).
- Timeout for worker processes before cortex kills them (seconds).
- Timeout for branch processes before cortex kills them (seconds).
- Number of consecutive failures before auto-disabling recurring tasks.
- Interval between memory bulletin refreshes (seconds).
- Target word count for the memory bulletin.
- Maximum LLM turns for bulletin generation.
- Interval between memory association passes (seconds).
- Minimum cosine similarity to create a RelatedTo edge between memories.
- Minimum cosine similarity to create an Updates edge (near-duplicate).
- Maximum associations to create per pass (rate limit).
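The cortex fields above might be grouped as follows. Every key name and value here is an assumption for illustration; the comments restate the documented behavior.

```toml
# Hypothetical cortex settings — key names and values are assumptions.
[defaults.cortex]
tick_interval_secs = 60        # observation tick cadence
worker_timeout_secs = 600      # kill stuck workers after this
branch_timeout_secs = 300      # kill stuck branches after this
max_consecutive_failures = 3   # auto-disable a recurring task
bulletin_refresh_secs = 900
bulletin_target_words = 300
association_interval_secs = 3600
related_similarity = 0.80      # minimum cosine similarity for a RelatedTo edge
updates_similarity = 0.95      # near-duplicate threshold for an Updates edge
max_associations_per_pass = 20 # rate limit
```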
Warmup
- Enable background warmup passes.
- Force-load the embedding model before first recall/write workloads.
- Interval between warmup refresh passes (seconds).
- Startup delay before the first warmup pass (seconds).
Browser
- Enable browser tools for workers.
- Run Chrome in headless mode.
- Allow JavaScript evaluation via the browser tool (security risk).
- Custom Chrome/Chromium executable path. If not set, uses system Chrome.
- Directory for storing screenshots. Defaults to `{data_dir}/screenshots`.

OpenCode
- Enable OpenCode workers for coding tasks.
- Path to the OpenCode binary. Supports `env:VAR_NAME` references. Defaults to `opencode` on `PATH`.
- Maximum concurrent OpenCode server processes.
- Timeout in seconds waiting for a server to become healthy.
- Maximum restart attempts before giving up on a server.
- Permission mode for OpenCode file edits: `allow`, `reject`, or `ask`.
- Permission mode for OpenCode shell commands: `allow`, `reject`, or `ask`.
- Permission mode for OpenCode web requests: `allow`, `reject`, or `ask`.

MCP Servers
- Unique name for this MCP server.
- Transport type: `stdio` (subprocess) or `http` (remote server).
- Command to run for stdio transport. Example: `npx`
- Command arguments for stdio transport. Example: `["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]`
- Environment variables for stdio transport. Example: `{ API_KEY = "env:MY_KEY" }`
- URL for http transport. Example: `https://mcp.sentry.io`
- HTTP headers for http transport. Example: `{ Authorization = "Bearer ${TOKEN}" }`
- Enable or disable this MCP server.
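Combining the fields above, one stdio and one http server might be declared like this. The array name `[[defaults.mcp]]` and key names are assumptions; the example values are the ones given in the field descriptions.

```toml
# Hypothetical MCP server entries — array and key names are assumptions.
[[defaults.mcp]]
name = "filesystem"
transport = "stdio"                # runs as a subprocess
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
env = { API_KEY = "env:MY_KEY" }
enabled = true

[[defaults.mcp]]
name = "sentry"
transport = "http"                 # remote server
url = "https://mcp.sentry.io"
headers = { Authorization = "Bearer ${TOKEN}" }
enabled = true
```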
Other Defaults
- Brave Search API key for the web search tool. Supports `env:VAR_NAME` references.
- Default timezone for cron active hours evaluation. Example: `America/Los_Angeles`
- Default timezone for channel/worker temporal context. Example: `America/Los_Angeles`
- Number of messages to fetch from the platform when a new channel is created.
- Worker log mode:
  - `errors_only` — only write logs on failure
  - `all_separate` — write a separate log file for each worker
  - `all_combined` — write all workers to a single log file
Agents
Define one or more agents. Each agent has its own workspace, databases, identity files, and messaging bindings.

- Unique agent identifier. Used in bindings and API requests. Must be lowercase alphanumeric with hyphens.
- Mark this agent as the default for unbound conversations.
- Human-readable agent name shown in the UI.
- Agent role description (e.g., "handles tier 1 support").
- Custom workspace path. Defaults to `{instance_dir}/agents/{id}/workspace`.
- Settings from `defaults` can be overridden per agent: routing, max_concurrent_branches, max_concurrent_workers, max_turns, branch_max_turns, context_window, compaction, memory_persistence, coalesce, ingestion, cortex, warmup, browser, mcp, brave_search_key, cron_timezone, user_timezone.
- Sandbox configuration for process containment. See Permissions for details.
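An agent definition covering the fields above might look like the following. The `[[agents]]` array name and key names are assumptions; the identifier format, default flag, role wording, and per-agent override behavior come from the descriptions.

```toml
# Hypothetical agent definition — array and key names are assumptions.
[[agents]]
id = "support-bot"   # lowercase alphanumeric with hyphens
default = true       # handles unbound conversations
name = "Support Bot"
role = "handles tier 1 support"

# Per-agent override of an inherited default.
[agents.routing]
channel = "anthropic/claude-haiku-4.5"
```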
Cron Jobs
Define scheduled tasks per agent:

- Unique cron job identifier within this agent.
- Prompt sent to the agent when the job fires.
- Cron expression (5-field format). Example: `0 9 * * *` for 9 AM daily. Takes precedence over `interval_secs`.
- Legacy interval in seconds. Used if `cron_expr` is not set.
- Delivery target in `adapter:target` format. Example: `discord:123456789`
- Optional active hours window `[start_hour, end_hour]` in 24h format. Example: `[9, 17]` for 9 AM to 5 PM.
- Enable or disable this cron job.
- Run once and then disable.
- Maximum wall-clock seconds to wait for the job to complete.
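A cron job entry might be written as below. `cron_expr` and `interval_secs` are named in this reference; the array name and the remaining key names are assumptions, and the example values reuse the ones documented above.

```toml
# Hypothetical cron job — cron_expr/interval_secs are documented names,
# everything else is an assumption.
[[agents]]
id = "support-bot"

[[agents.cron]]
id = "morning-briefing"
prompt = "Summarize overnight activity for the team."
cron_expr = "0 9 * * *"          # 9 AM daily; takes precedence over interval_secs
deliver_to = "discord:123456789" # adapter:target format
active_hours = [9, 17]           # 9 AM to 5 PM
enabled = true
```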
Links and Groups
Define visual topology for the agent graph UI:

- Source node (agent ID or human ID).
- Target node (agent ID or human ID).
- Link direction: `bidirectional` or `unidirectional`.
- Link type (e.g., `collaboration`, `delegation`).
- Group name shown in the topology UI.
- Array of agent IDs in this group.
- Optional color for the group in hex format. Example: `#FF6B6B`

Humans
Define org-level humans for the topology graph:

- Unique human identifier.
- Human-readable name.
- Role or title.
- Short biography or description.
Messaging
Configure messaging platform credentials and adapters.

Discord

- Enable the Discord adapter.
- Discord bot token. Supports `env:VAR_NAME` references.
- User IDs allowed to DM the bot. If empty, DMs are ignored entirely.
- Whether to process messages from other bots (self-messages are always ignored).
- Additional named Discord bot instances. Each has `name`, `enabled`, `token`, `dm_allowed_users`, and `allow_bot_messages`.

Slack
- Enable the Slack adapter.
- Slack bot token (starts with `xoxb-`). Supports `env:VAR_NAME` references.
- Slack app token (starts with `xapp-`). Supports `env:VAR_NAME` references.
- User IDs allowed to DM the bot. If empty, DMs are ignored entirely.
- Slash command definitions. Each has `command` (e.g., `/ask`), `agent_id`, and optional `description`.
- Additional named Slack app instances. Each has `name`, `enabled`, `bot_token`, `app_token`, `dm_allowed_users`, and `commands`.

Telegram
- Enable the Telegram adapter.
- Telegram bot token from BotFather. Supports `env:VAR_NAME` references.
- Additional named Telegram bot instances. Each has `name`, `enabled`, and `token`.

Twitch
- Enable the Twitch adapter.
- Twitch username for the bot. Supports `env:VAR_NAME` references.
- Twitch OAuth token (starts with `oauth:`). Supports `env:VAR_NAME` references.
- Prefix that triggers bot responses in chat.
- Additional named Twitch bot instances. Each has `name`, `enabled`, `username`, `oauth_token`, and `trigger_prefix`.

Email

- Enable the email adapter.
- IMAP server hostname. Example: `imap.gmail.com`
- IMAP server port.
- SMTP server hostname. Example: `smtp.gmail.com`
- SMTP server port.
- Email account username. Supports `env:VAR_NAME` references.
- Email account password or app-specific password. Supports `env:VAR_NAME` references.

Webhook
- Enable the webhook receiver.
- Port to bind the webhook HTTP server on.
- Address to bind the webhook HTTP server on.
Bindings
Bindings route messaging platform conversations to specific agents.

- Agent ID that handles messages matching this binding.
- Messaging platform: `discord`, `slack`, `telegram`, `twitch`, `email`, or `webchat`.
- Optional named adapter instance. If not set, uses the default adapter for this platform.
- Discord guild (server) ID. Required for Discord guild bindings.
- Slack workspace (team) ID. Required for Slack bindings.
- Telegram chat ID. Required for Telegram bindings.
- Channel IDs this binding applies to. If empty, all channels in the guild/workspace are allowed.
- Require explicit @mention (or reply-to-bot) for inbound messages. Discord only.
- User IDs allowed to DM the bot through this binding.
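A Discord binding using the fields above might look like the sketch below. The array name `[[bindings]]` and key names are assumptions; the ID values are placeholders, and the platform names are the ones documented.

```toml
# Hypothetical binding — array name, key names, and IDs are assumptions.
[[bindings]]
agent_id = "support-bot"
platform = "discord"
guild_id = "123456789012345678"
channels = ["111111111111111111"] # an empty array would allow every channel
require_mention = true            # Discord only
```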
API
- Enable the HTTP API server.
- Port to bind the HTTP API server on.
- Address to bind the HTTP API server on.
- Optional bearer token for API authentication. Supports `env:VAR_NAME` references.

Metrics
- Enable the Prometheus metrics endpoint.
- Port to bind the metrics HTTP server on.
- Address to bind the metrics HTTP server on.
Telemetry
- OTLP HTTP endpoint for OpenTelemetry traces. Falls back to the `OTEL_EXPORTER_OTLP_ENDPOINT` env var.
- Service name resource attribute sent with every span.
- Trace sample rate (0.0–1.0). Defaults to 1.0 (sample all).
Environment Variables
All string values in `config.toml` support `env:VAR_NAME` references. Spacebot also recognizes the following environment variables:

- `SPACEBOT_DIR` — instance directory (defaults to `~/.spacebot`)
- `SPACEBOT_CONFIG_PATH` — path to `config.toml`
- `SPACEBOT_CRON_TIMEZONE` — default cron timezone
- `SPACEBOT_USER_TIMEZONE` — default user timezone
- `OTEL_EXPORTER_OTLP_ENDPOINT` — OpenTelemetry endpoint
- `OTEL_EXPORTER_OTLP_HEADERS` — OpenTelemetry headers