
Configuration Reference

Your config.toml file defines LLM providers, agent settings, messaging platform credentials, and bindings that route conversations to specific agents. Every string value supports environment variable references via env:VAR_NAME.

File Location

Spacebot searches for config.toml in:
  1. Path specified by --config flag
  2. Current working directory
  3. ~/.spacebot/config.toml
  4. $SPACEBOT_DIR/config.toml (if SPACEBOT_DIR is set)
You can also set the SPACEBOT_CONFIG_PATH environment variable to specify the config path explicitly.
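The lookup order above can be sketched in Python. Note that where SPACEBOT_CONFIG_PATH slots into the precedence is an assumption (the docs list it separately from the numbered order), and the paths are illustrative:

```python
import os

def resolve_config_path(flag_path=None):
    """Return the first existing config.toml from the documented search order (sketch)."""
    candidates = []
    if flag_path:
        candidates.append(flag_path)                                  # 1. --config flag
    env_path = os.environ.get("SPACEBOT_CONFIG_PATH")
    if env_path:
        candidates.append(env_path)                                   # explicit env override (assumed priority)
    candidates.append(os.path.join(os.getcwd(), "config.toml"))       # 2. current working directory
    candidates.append(os.path.expanduser("~/.spacebot/config.toml"))  # 3. home directory
    spacebot_dir = os.environ.get("SPACEBOT_DIR")
    if spacebot_dir:
        candidates.append(os.path.join(spacebot_dir, "config.toml"))  # 4. $SPACEBOT_DIR
    return next((p for p in candidates if os.path.isfile(p)), None)
```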

LLM Providers

Configure API keys for LLM providers. All keys are optional — configure only the providers you plan to use.
[llm]
anthropic_key = "env:ANTHROPIC_API_KEY"
openai_key = "env:OPENAI_API_KEY"
openrouter_key = "env:OPENROUTER_API_KEY"
kilo_key = "env:KILO_API_KEY"
zhipu_key = "env:ZHIPU_API_KEY"  # Z.ai (GLM)
groq_key = "env:GROQ_API_KEY"
together_key = "env:TOGETHER_API_KEY"
fireworks_key = "env:FIREWORKS_API_KEY"
deepseek_key = "env:DEEPSEEK_API_KEY"
xai_key = "env:XAI_API_KEY"
mistral_key = "env:MISTRAL_API_KEY"
gemini_key = "env:GEMINI_API_KEY"
ollama_base_url = "http://localhost:11434"
opencode_zen_key = "env:OPENCODE_ZEN_KEY"
opencode_go_key = "env:OPENCODE_GO_KEY"
nvidia_key = "env:NVIDIA_API_KEY"
minimax_key = "env:MINIMAX_API_KEY"
minimax_cn_key = "env:MINIMAX_CN_API_KEY"
moonshot_key = "env:MOONSHOT_API_KEY"  # Kimi
zai_coding_plan_key = "env:ZAI_CODING_PLAN_KEY"
anthropic_key
string
Anthropic API key for Claude models. Supports env:VAR_NAME references.
openai_key
string
OpenAI API key for GPT models. Supports env:VAR_NAME references.
openrouter_key
string
OpenRouter API key for multi-provider access. Supports env:VAR_NAME references.
kilo_key
string
Kilo Gateway API key. Supports env:VAR_NAME references.
zhipu_key
string
Z.ai (GLM) API key for GLM models. Supports env:VAR_NAME references.
groq_key
string
Groq API key for fast inference. Supports env:VAR_NAME references.
together_key
string
Together AI API key. Supports env:VAR_NAME references.
fireworks_key
string
Fireworks AI API key. Supports env:VAR_NAME references.
deepseek_key
string
DeepSeek API key. Supports env:VAR_NAME references.
xai_key
string
xAI (Grok) API key. Supports env:VAR_NAME references.
mistral_key
string
Mistral AI API key. Supports env:VAR_NAME references.
gemini_key
string
Google Gemini API key. Supports env:VAR_NAME references.
ollama_key
string
Optional API key for Ollama instances that require authentication. Supports env:VAR_NAME references.
ollama_base_url
string
Base URL for local Ollama instance. Defaults to http://localhost:11434.
opencode_zen_key
string
OpenCode Zen API key. Supports env:VAR_NAME references.
opencode_go_key
string
OpenCode Go API key. Supports env:VAR_NAME references.
nvidia_key
string
NVIDIA API key for their model catalog. Supports env:VAR_NAME references.
minimax_key
string
MiniMax API key (international endpoint). Supports env:VAR_NAME references.
minimax_cn_key
string
MiniMax API key (China endpoint). Supports env:VAR_NAME references.
moonshot_key
string
Moonshot AI (Kimi) API key. Supports env:VAR_NAME references.
zai_coding_plan_key
string
Z.ai Coding Plan API key for specialized coding models. Supports env:VAR_NAME references.

Custom Providers

Add any OpenAI-, Anthropic-, or Gemini-compatible endpoint:
[llm.provider.my-provider]
api_type = "openai_completions"  # or "anthropic", "gemini", "kilo_gateway"
base_url = "https://my-llm-host.example.com"
api_key = "env:MY_PROVIDER_KEY"
name = "My Custom Provider"  # optional display name
llm.provider.<name>.api_type
string
required
API compatibility mode. Options:
  • openai_completions — OpenAI /v1/chat/completions API
  • openai_chat_completions — OpenAI-compatible /chat/completions (no /v1/ prefix)
  • kilo_gateway — Kilo Gateway API with required headers
  • openai_responses — OpenAI /v1/responses API
  • anthropic — Anthropic Messages API
  • gemini — Google Gemini API
llm.provider.<name>.base_url
string
required
Base URL for the API endpoint (without trailing /v1/chat/completions).
llm.provider.<name>.api_key
string
required
API key for authentication. Supports env:VAR_NAME references.
llm.provider.<name>.name
string
Optional display name for the provider.

Defaults

Defaults are inherited by all agents unless overridden in the agent-specific configuration.

Routing

Model routing determines which LLM model handles each process type and task type.
[defaults.routing]
channel = "anthropic/claude-sonnet-4"
worker = "anthropic/claude-haiku-4.5"
branch = "anthropic/claude-sonnet-4"
compactor = "anthropic/claude-haiku-4.5"
cortex = "anthropic/claude-sonnet-4"

[defaults.routing.task_overrides]
coding = "anthropic/claude-sonnet-4"
summarization = "anthropic/claude-haiku-4.5"
memory_recall = "anthropic/claude-haiku-4.5"

[defaults.routing.prompt_routing]
enabled = true
process_types = ["channel", "branch"]

[defaults.routing.fallbacks]
"anthropic/claude-sonnet-4" = ["anthropic/claude-haiku-4.5"]
defaults.routing.channel
string
Default model for channel processes (user-facing conversations). Example: anthropic/claude-sonnet-4
defaults.routing.worker
string
Default model for worker processes (task execution). Example: anthropic/claude-haiku-4.5
defaults.routing.branch
string
Default model for branch processes (thinking/memory recall). Example: anthropic/claude-sonnet-4
defaults.routing.compactor
string
Default model for compaction workers (context summarization). Example: anthropic/claude-haiku-4.5
defaults.routing.cortex
string
Default model for cortex processes (memory bulletin, system observation). Example: anthropic/claude-sonnet-4
defaults.routing.task_overrides
object
Map task types to specific models. Supported task types:
  • coding — code writing and refactoring
  • summarization — context compaction
  • memory_recall — memory search and curation
  • memory_save — memory extraction and storage
  • browser — web browsing tasks
defaults.routing.prompt_routing.enabled
boolean
default:"false"
Enable prompt complexity scoring to downgrade simple requests to cheaper models.
defaults.routing.prompt_routing.process_types
array
Process types that use prompt complexity scoring. Example: ["channel", "branch"]
defaults.routing.fallbacks
object
Fallback chains for retryable model failures (HTTP 429, 502). Maps a primary model to an ordered array of fallback models.
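A hedged sketch of how a fallback chain might be consumed at runtime. The retry behavior here is inferred from the field description, not taken from the implementation:

```python
class RetryableError(Exception):
    """Stands in for an HTTP 429 or 502 from the provider."""

def call_with_fallbacks(call, primary, fallbacks):
    # Try the primary model, then each configured fallback in chain order.
    chain = [primary, *fallbacks.get(primary, [])]
    last_error = None
    for model in chain:
        try:
            return call(model)
        except RetryableError as err:
            last_error = err
    raise RuntimeError(f"all models in the fallback chain failed: {chain}") from last_error
```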

Concurrency

[defaults]
max_concurrent_branches = 5
max_concurrent_workers = 5
max_turns = 5
branch_max_turns = 50
context_window = 128000
defaults.max_concurrent_branches
integer
default:"5"
Maximum number of branches that can run simultaneously per channel.
defaults.max_concurrent_workers
integer
default:"5"
Maximum number of workers that can run simultaneously per agent.
defaults.max_turns
integer
default:"5"
Maximum LLM turns for channel processes before requiring user input.
defaults.branch_max_turns
integer
default:"50"
Maximum LLM turns for branch and worker processes.
defaults.context_window
integer
default:"128000"
Context window size in tokens. Used for compaction threshold calculations.

Compaction

[defaults.compaction]
background_threshold = 0.80
aggressive_threshold = 0.85
emergency_threshold = 0.95
defaults.compaction.background_threshold
float
default:"0.80"
Context utilization percentage that triggers background compaction (summarize oldest 30%).
defaults.compaction.aggressive_threshold
float
default:"0.85"
Context utilization percentage that triggers aggressive compaction (summarize oldest 50%).
defaults.compaction.emergency_threshold
float
default:"0.95"
Context utilization percentage that triggers emergency truncation (hard drop, no LLM).
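With the default context_window of 128,000 tokens, the three thresholds work out as follows (round() is used so float representation does not truncate the result):

```python
context_window = 128_000  # defaults.context_window

thresholds = {"background": 0.80, "aggressive": 0.85, "emergency": 0.95}
# Token counts at which each compaction tier kicks in.
trigger_tokens = {name: round(context_window * pct) for name, pct in thresholds.items()}
# background: 102_400, aggressive: 108_800, emergency: 121_600
```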

Memory Persistence

[defaults.memory_persistence]
enabled = true
message_interval = 50
defaults.memory_persistence.enabled
boolean
default:"true"
Enable automatic memory persistence branches, spawned every message_interval user messages.
defaults.memory_persistence.message_interval
integer
default:"50"
Number of user messages between automatic memory persistence branches.

Message Coalescing

[defaults.coalesce]
enabled = true
debounce_ms = 1500
max_wait_ms = 5000
min_messages = 2
multi_user_only = true
defaults.coalesce.enabled
boolean
default:"true"
Enable message coalescing for rapid-fire messages.
defaults.coalesce.debounce_ms
integer
default:"1500"
Initial debounce window after first message (milliseconds).
defaults.coalesce.max_wait_ms
integer
default:"5000"
Maximum time to wait before flushing regardless (milliseconds).
defaults.coalesce.min_messages
integer
default:"2"
Minimum messages to trigger coalesce mode (1 = always debounce, 2 = only when burst detected).
defaults.coalesce.multi_user_only
boolean
default:"true"
Apply only to multi-user conversations (skip for DMs).
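The debounce semantics can be illustrated with a small sketch. The flush rule here (each new message extends the window, capped at max_wait from the first message) is an assumption based on the field descriptions:

```python
def flush_time(arrivals, debounce=1.5, max_wait=5.0):
    """Return when a burst of message arrival times (seconds) would be flushed (sketch)."""
    deadline = arrivals[0] + max_wait       # never wait longer than max_wait_ms overall
    flush = arrivals[0] + debounce
    for t in arrivals[1:]:
        flush = min(t + debounce, deadline)  # each new message extends the window
    return flush
```

For example, messages at t=0, 0.5, and 1.2 seconds flush together at t=2.7; a burst that keeps arriving is force-flushed at t=5.0.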

Memory Ingestion

[defaults.ingestion]
enabled = true
poll_interval_secs = 30
chunk_size = 4000
defaults.ingestion.enabled
boolean
default:"true"
Enable file-based memory ingestion from the agent workspace's ingest/ directory.
defaults.ingestion.poll_interval_secs
integer
default:"30"
How often to scan the ingest directory for new files (seconds).
defaults.ingestion.chunk_size
integer
default:"4000"
Target chunk size in characters when splitting ingested files.
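A naive illustration of the chunk_size target. The real splitter's boundary handling is not documented, so this character-based version is only a sketch:

```python
def chunk_text(text, chunk_size=4000):
    # Split ingested text into chunks of at most chunk_size characters.
    # The actual ingester may respect word or paragraph boundaries (assumption).
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```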

Cortex

[defaults.cortex]
tick_interval_secs = 30
worker_timeout_secs = 300
branch_timeout_secs = 60
circuit_breaker_threshold = 3
bulletin_interval_secs = 3600
bulletin_max_words = 1500
bulletin_max_turns = 15
association_interval_secs = 300
association_similarity_threshold = 0.85
association_updates_threshold = 0.95
association_max_per_pass = 100
defaults.cortex.tick_interval_secs
integer
default:"30"
Interval between cortex observation ticks (seconds).
defaults.cortex.worker_timeout_secs
integer
default:"300"
Timeout for worker processes before cortex kills them (seconds).
defaults.cortex.branch_timeout_secs
integer
default:"60"
Timeout for branch processes before cortex kills them (seconds).
defaults.cortex.circuit_breaker_threshold
integer
default:"3"
Number of consecutive failures before auto-disabling recurring tasks.
defaults.cortex.bulletin_interval_secs
integer
default:"3600"
Interval between memory bulletin refreshes (seconds).
defaults.cortex.bulletin_max_words
integer
default:"1500"
Target word count for the memory bulletin.
defaults.cortex.bulletin_max_turns
integer
default:"15"
Maximum LLM turns for bulletin generation.
defaults.cortex.association_interval_secs
integer
default:"300"
Interval between memory association passes (seconds).
defaults.cortex.association_similarity_threshold
float
default:"0.85"
Minimum cosine similarity to create a RelatedTo edge between memories.
defaults.cortex.association_updates_threshold
float
default:"0.95"
Minimum cosine similarity to create an Updates edge (near-duplicate).
defaults.cortex.association_max_per_pass
integer
default:"100"
Maximum associations to create per pass (rate limit).
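The two similarity thresholds combine into a simple decision, sketched here with the edge names taken from the descriptions above:

```python
def association_edge(similarity, related_to=0.85, updates=0.95):
    # Sketch of the two-threshold decision: the stricter threshold wins.
    if similarity >= updates:
        return "Updates"     # near-duplicate memory
    if similarity >= related_to:
        return "RelatedTo"
    return None              # below both thresholds: no edge
```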

Warmup

[defaults.warmup]
enabled = true
eager_embedding_load = true
refresh_secs = 900
startup_delay_secs = 5
defaults.warmup.enabled
boolean
default:"true"
Enable background warmup passes.
defaults.warmup.eager_embedding_load
boolean
default:"true"
Force-load the embedding model before first recall/write workloads.
defaults.warmup.refresh_secs
integer
default:"900"
Interval between warmup refresh passes (seconds).
defaults.warmup.startup_delay_secs
integer
default:"5"
Startup delay before the first warmup pass (seconds).

Browser

[defaults.browser]
enabled = true
headless = true
evaluate_enabled = false
executable_path = "/path/to/chrome"  # optional
screenshot_dir = "/path/to/screenshots"  # optional
defaults.browser.enabled
boolean
default:"true"
Enable browser tools for workers.
defaults.browser.headless
boolean
default:"true"
Run Chrome in headless mode.
defaults.browser.evaluate_enabled
boolean
default:"false"
Allow JavaScript evaluation via the browser tool (security risk).
defaults.browser.executable_path
string
Custom Chrome/Chromium executable path. If not set, uses system Chrome.
defaults.browser.screenshot_dir
string
Directory for storing screenshots. Defaults to {data_dir}/screenshots.

OpenCode

[defaults.opencode]
enabled = false
path = "opencode"  # or "env:OPENCODE_PATH"
max_servers = 5
server_startup_timeout_secs = 30
max_restart_retries = 5

[defaults.opencode.permissions]
edit = "allow"  # or "reject", "ask"
bash = "allow"
webfetch = "allow"
defaults.opencode.enabled
boolean
default:"false"
Enable OpenCode workers for coding tasks.
defaults.opencode.path
string
default:"opencode"
Path to the OpenCode binary. Supports env:VAR_NAME references. Defaults to opencode on PATH.
defaults.opencode.max_servers
integer
default:"5"
Maximum concurrent OpenCode server processes.
defaults.opencode.server_startup_timeout_secs
integer
default:"30"
Timeout in seconds waiting for a server to become healthy.
defaults.opencode.max_restart_retries
integer
default:"5"
Maximum restart attempts before giving up on a server.
defaults.opencode.permissions.edit
string
default:"allow"
Permission mode for OpenCode file edits: allow, reject, or ask.
defaults.opencode.permissions.bash
string
default:"allow"
Permission mode for OpenCode shell commands: allow, reject, or ask.
defaults.opencode.permissions.webfetch
string
default:"allow"
Permission mode for OpenCode web requests: allow, reject, or ask.

MCP Servers

[[defaults.mcp]]
name = "filesystem"
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
enabled = true

[[defaults.mcp]]
name = "sentry"
transport = "http"
url = "https://mcp.sentry.io"
headers = { Authorization = "Bearer ${SENTRY_TOKEN}" }
enabled = true
defaults.mcp[].name
string
required
Unique name for this MCP server.
defaults.mcp[].transport
string
required
Transport type: stdio (subprocess) or http (remote server).
defaults.mcp[].command
string
Command to run for stdio transport. Example: npx
defaults.mcp[].args
array
Command arguments for stdio transport. Example: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
defaults.mcp[].env
object
Environment variables for stdio transport. Example: { API_KEY = "env:MY_KEY" }
defaults.mcp[].url
string
URL for http transport. Example: https://mcp.sentry.io
defaults.mcp[].headers
object
HTTP headers for http transport. Example: { Authorization = "Bearer ${TOKEN}" }
defaults.mcp[].enabled
boolean
default:"true"
Enable or disable this MCP server.

Other Defaults

[defaults]
brave_search_key = "env:BRAVE_SEARCH_API_KEY"
cron_timezone = "America/Los_Angeles"
user_timezone = "America/Los_Angeles"
history_backfill_count = 50
worker_log_mode = "errors_only"  # or "all_separate", "all_combined"
defaults.brave_search_key
string
Brave Search API key for web search tool. Supports env:VAR_NAME references.
defaults.cron_timezone
string
Default timezone for cron active hours evaluation. Example: America/Los_Angeles
defaults.user_timezone
string
Default timezone for channel/worker temporal context. Example: America/Los_Angeles
defaults.history_backfill_count
integer
default:"50"
Number of messages to fetch from the platform when a new channel is created.
defaults.worker_log_mode
string
default:"errors_only"
Worker log mode:
  • errors_only — only write logs on failure
  • all_separate — write separate log file for each worker
  • all_combined — write all workers to a single log file

Agents

Define one or more agents. Each agent has its own workspace, databases, identity files, and messaging bindings.
[[agents]]
id = "my-agent"
default = false
display_name = "My Agent"
role = "handles tier 1 support"
workspace = "/path/to/workspace"  # optional

# Agent-specific overrides (all optional)
max_concurrent_branches = 10
max_concurrent_workers = 10
brave_search_key = "env:AGENT_BRAVE_KEY"

[agents.routing]
channel = "anthropic/claude-opus-4"

[agents.sandbox]
mode = "enabled"  # or "disabled"
writable_paths = ["/home/user/projects/myapp"]
agents[].id
string
required
Unique agent identifier. Used in bindings and API requests. Must be lowercase alphanumeric with hyphens.
agents[].default
boolean
default:"false"
Mark this agent as the default for unbound conversations.
agents[].display_name
string
Human-readable agent name shown in UI.
agents[].role
string
Agent role description (e.g., “handles tier 1 support”).
agents[].workspace
string
Custom workspace path. Defaults to {instance_dir}/agents/{id}/workspace.
Any of the defaults above can be overridden per agent: routing, max_concurrent_branches, max_concurrent_workers, max_turns, branch_max_turns, context_window, compaction, memory_persistence, coalesce, ingestion, cortex, warmup, browser, mcp, brave_search_key, cron_timezone, user_timezone.
agents[].sandbox
object
Sandbox configuration for process containment. See Permissions for details.

Cron Jobs

Define scheduled tasks per agent:
[[agents.cron]]
id = "daily-summary"
prompt = "Summarize today's activity"
cron_expr = "0 17 * * *"  # 5 PM daily
delivery_target = "discord:123456789"
active_hours = [9, 17]  # 9 AM to 5 PM
enabled = true
timeout_secs = 120
agents[].cron[].id
string
required
Unique cron job identifier within this agent.
agents[].cron[].prompt
string
required
Prompt sent to the agent when the job fires.
agents[].cron[].cron_expr
string
Cron expression (5-field format). Example: 0 9 * * * for 9 AM daily. Takes precedence over interval_secs.
agents[].cron[].interval_secs
integer
Legacy interval in seconds. Used if cron_expr is not set.
agents[].cron[].delivery_target
string
required
Delivery target in adapter:target format. Example: discord:123456789
agents[].cron[].active_hours
array
Optional active hours window [start_hour, end_hour] in 24h format. Example: [9, 17] for 9 AM to 5 PM.
agents[].cron[].enabled
boolean
default:"true"
Enable or disable this cron job.
agents[].cron[].run_once
boolean
default:"false"
Run once and then disable.
agents[].cron[].timeout_secs
integer
default:"120"
Maximum wall-clock seconds to wait for the job to complete.
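A sketch of the active_hours check. Whether the end hour is inclusive is not documented, so this version treats the window as [start, end); the wrap-past-midnight handling is likewise an assumption:

```python
def within_active_hours(hour, window):
    # Treat the window as [start, end) in the cron timezone (assumption).
    start, end = window
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end  # window wrapping past midnight
```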

Links and Groups

Define the visual topology for the agent graph UI:
[[links]]
from = "agent-a"
to = "agent-b"
direction = "bidirectional"  # or "unidirectional"
kind = "collaboration"

[[groups]]
name = "Support Team"
agent_ids = ["agent-a", "agent-b"]
color = "#FF6B6B"
links[].from
string
required
Source node (agent ID or human ID).
links[].to
string
required
Target node (agent ID or human ID).
links[].direction
string
Link direction: bidirectional or unidirectional.
links[].kind
string
Link type (e.g., collaboration, delegation).
groups[].name
string
required
Group name shown in topology UI.
groups[].agent_ids
array
required
Array of agent IDs in this group.
groups[].color
string
Optional color for the group in hex format. Example: #FF6B6B

Humans

Define org-level humans for the topology graph:
[[humans]]
id = "alice"
display_name = "Alice Johnson"
role = "Engineering Lead"
bio = "Leads the platform team"
humans[].id
string
required
Unique human identifier.
humans[].display_name
string
Human-readable name.
humans[].role
string
Role or title.
humans[].bio
string
Short biography or description.

Messaging

Configure messaging platform credentials and adapters.

Discord

[messaging.discord]
enabled = true
token = "env:DISCORD_BOT_TOKEN"
dm_allowed_users = ["123456789"]
allow_bot_messages = false

# Named instances for multiple bots
[[messaging.discord.instances]]
name = "ops"
enabled = true
token = "env:DISCORD_OPS_BOT_TOKEN"
dm_allowed_users = ["987654321"]
allow_bot_messages = false
messaging.discord.enabled
boolean
default:"true"
Enable the Discord adapter.
messaging.discord.token
string
required
Discord bot token. Supports env:VAR_NAME references.
messaging.discord.dm_allowed_users
array
User IDs allowed to DM the bot. If empty, DMs are ignored entirely.
messaging.discord.allow_bot_messages
boolean
default:"false"
Whether to process messages from other bots (self-messages are always ignored).
messaging.discord.instances
array
Additional named Discord bot instances. Each has name, enabled, token, dm_allowed_users, and allow_bot_messages.

Slack

[messaging.slack]
enabled = true
bot_token = "env:SLACK_BOT_TOKEN"
app_token = "env:SLACK_APP_TOKEN"
dm_allowed_users = ["U12345678"]

[[messaging.slack.commands]]
command = "/ask"
agent_id = "my-agent"
description = "Ask the agent a question"

# Named instances for multiple workspaces
[[messaging.slack.instances]]
name = "customer-workspace"
enabled = true
bot_token = "env:SLACK_CUSTOMER_BOT_TOKEN"
app_token = "env:SLACK_CUSTOMER_APP_TOKEN"
dm_allowed_users = ["U87654321"]
messaging.slack.enabled
boolean
default:"true"
Enable the Slack adapter.
messaging.slack.bot_token
string
required
Slack bot token (starts with xoxb-). Supports env:VAR_NAME references.
messaging.slack.app_token
string
required
Slack app token (starts with xapp-). Supports env:VAR_NAME references.
messaging.slack.dm_allowed_users
array
User IDs allowed to DM the bot. If empty, DMs are ignored entirely.
messaging.slack.commands
array
Slash command definitions. Each has command (e.g., /ask), agent_id, and optional description.
messaging.slack.instances
array
Additional named Slack app instances. Each has name, enabled, bot_token, app_token, dm_allowed_users, and commands.

Telegram

[messaging.telegram]
enabled = true
token = "env:TELEGRAM_BOT_TOKEN"

# Named instances
[[messaging.telegram.instances]]
name = "support"
enabled = true
token = "env:TELEGRAM_SUPPORT_BOT_TOKEN"
messaging.telegram.enabled
boolean
default:"true"
Enable the Telegram adapter.
messaging.telegram.token
string
required
Telegram bot token from BotFather. Supports env:VAR_NAME references.
messaging.telegram.instances
array
Additional named Telegram bot instances. Each has name, enabled, and token.

Twitch

[messaging.twitch]
enabled = true
username = "env:TWITCH_USERNAME"
oauth_token = "env:TWITCH_OAUTH_TOKEN"
trigger_prefix = "!"

# Named instances
[[messaging.twitch.instances]]
name = "gaming"
enabled = true
username = "env:TWITCH_GAMING_USERNAME"
oauth_token = "env:TWITCH_GAMING_OAUTH_TOKEN"
trigger_prefix = "@"
messaging.twitch.enabled
boolean
default:"true"
Enable the Twitch adapter.
messaging.twitch.username
string
required
Twitch username for the bot. Supports env:VAR_NAME references.
messaging.twitch.oauth_token
string
required
Twitch OAuth token (starts with oauth:). Supports env:VAR_NAME references.
messaging.twitch.trigger_prefix
string
default:"!"
Prefix that triggers bot responses in chat.
messaging.twitch.instances
array
Additional named Twitch bot instances. Each has name, enabled, username, oauth_token, and trigger_prefix.

Email

[messaging.email]
enabled = true
imap_server = "imap.gmail.com"
imap_port = 993
smtp_server = "smtp.gmail.com"
smtp_port = 587
username = "env:EMAIL_USERNAME"
password = "env:EMAIL_PASSWORD"
messaging.email.enabled
boolean
default:"true"
Enable the email adapter.
messaging.email.imap_server
string
required
IMAP server hostname. Example: imap.gmail.com
messaging.email.imap_port
integer
default:"993"
IMAP server port.
messaging.email.smtp_server
string
required
SMTP server hostname. Example: smtp.gmail.com
messaging.email.smtp_port
integer
default:"587"
SMTP server port.
messaging.email.username
string
required
Email account username. Supports env:VAR_NAME references.
messaging.email.password
string
required
Email account password or app-specific password. Supports env:VAR_NAME references.

Webhook

[messaging.webhook]
enabled = true
port = 19899
bind = "127.0.0.1"
messaging.webhook.enabled
boolean
default:"true"
Enable the webhook receiver.
messaging.webhook.port
integer
default:"19899"
Port to bind the webhook HTTP server on.
messaging.webhook.bind
string
default:"127.0.0.1"
Address to bind the webhook HTTP server on.

Bindings

Bindings route messaging platform conversations to specific agents.
[[bindings]]
agent_id = "my-agent"
channel = "discord"
adapter = "ops"  # optional, targets named instance
guild_id = "123456789"
channel_ids = ["987654321", "111222333"]  # optional
require_mention = true
dm_allowed_users = ["444555666"]  # optional

[[bindings]]
agent_id = "support-agent"
channel = "slack"
workspace_id = "T12345678"
channel_ids = ["C87654321"]

[[bindings]]
agent_id = "telegram-agent"
channel = "telegram"
chat_id = "-1001234567890"
bindings[].agent_id
string
required
Agent ID that handles messages matching this binding.
bindings[].channel
string
required
Messaging platform: discord, slack, telegram, twitch, email, or webchat.
bindings[].adapter
string
Optional named adapter instance. If not set, uses the default adapter for this platform.
bindings[].guild_id
string
Discord guild (server) ID. Required for Discord guild bindings.
bindings[].workspace_id
string
Slack workspace (team) ID. Required for Slack bindings.
bindings[].chat_id
string
Telegram chat ID. Required for Telegram bindings.
bindings[].channel_ids
array
Channel IDs this binding applies to. If empty, all channels in the guild/workspace are allowed.
bindings[].require_mention
boolean
default:"false"
Require explicit @mention (or reply-to-bot) for inbound messages. Discord only.
bindings[].dm_allowed_users
array
User IDs allowed to DM the bot through this binding.
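A hedged sketch of how a binding might be matched against an inbound message. The resolution logic here is inferred from the field docs above, not from the implementation:

```python
def binding_matches(binding, message):
    # Platform must match, any configured scope ID must match,
    # and channel_ids (when non-empty) restricts the channels.
    if binding["channel"] != message["platform"]:
        return False
    for scope in ("guild_id", "workspace_id", "chat_id"):
        if binding.get(scope) and binding[scope] != message.get(scope):
            return False
    allowed = binding.get("channel_ids") or []
    return not allowed or message.get("channel_id") in allowed
```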

API

[api]
enabled = true
port = 19898
bind = "127.0.0.1"
auth_token = "env:SPACEBOT_API_TOKEN"
api.enabled
boolean
default:"true"
Enable the HTTP API server.
api.port
integer
default:"19898"
Port to bind the HTTP API server on.
api.bind
string
default:"127.0.0.1"
Address to bind the HTTP API server on.
api.auth_token
string
Optional bearer token for API authentication. Supports env:VAR_NAME references.

Metrics

[metrics]
enabled = false
port = 9090
bind = "0.0.0.0"
metrics.enabled
boolean
default:"false"
Enable the Prometheus metrics endpoint.
metrics.port
integer
default:"9090"
Port to bind the metrics HTTP server on.
metrics.bind
string
default:"0.0.0.0"
Address to bind the metrics HTTP server on.

Telemetry

[telemetry]
otlp_endpoint = "http://localhost:4318"
service_name = "spacebot"
sample_rate = 1.0
telemetry.otlp_endpoint
string
OTLP HTTP endpoint for OpenTelemetry traces. Falls back to OTEL_EXPORTER_OTLP_ENDPOINT env var.
telemetry.service_name
string
default:"spacebot"
Service name resource attribute sent with every span.
telemetry.sample_rate
float
default:"1.0"
Trace sample rate (0.0–1.0). Defaults to 1.0 (sample all).

Environment Variables

All string values in config.toml support env:VAR_NAME references:
[llm]
anthropic_key = "env:ANTHROPIC_API_KEY"

[messaging.discord]
token = "env:DISCORD_BOT_TOKEN"
You can also set environment variables directly:
  • SPACEBOT_DIR — instance directory (defaults to ~/.spacebot)
  • SPACEBOT_CONFIG_PATH — path to config.toml
  • SPACEBOT_CRON_TIMEZONE — default cron timezone
  • SPACEBOT_USER_TIMEZONE — default user timezone
  • OTEL_EXPORTER_OTLP_ENDPOINT — OpenTelemetry endpoint
  • OTEL_EXPORTER_OTLP_HEADERS — OpenTelemetry headers

Example Configuration

[llm]
anthropic_key = "env:ANTHROPIC_API_KEY"
openai_key = "env:OPENAI_API_KEY"

[defaults.routing]
channel = "anthropic/claude-sonnet-4"
worker = "anthropic/claude-haiku-4.5"

[defaults.routing.task_overrides]
coding = "anthropic/claude-sonnet-4"

[[agents]]
id = "my-agent"

[messaging.discord]
token = "env:DISCORD_BOT_TOKEN"

[[bindings]]
agent_id = "my-agent"
channel = "discord"
guild_id = "123456789"
