OpenFang is configured via a single TOML file located at `~/.openfang/config.toml`. This file controls all aspects of the system, including models, channels, security, memory, and networking.
## Quick Start
Create your configuration file:

```shell
cp ~/.openfang/config.toml.example ~/.openfang/config.toml
```
Or initialize with the setup wizard:
## Configuration File Location
- **Default path:** `~/.openfang/config.toml`
- **Custom path:** set via the `OPENFANG_CONFIG` environment variable
- **Example file:** included in the installation at `~/.openfang/config.toml.example`
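If you keep the file somewhere else, point `OPENFANG_CONFIG` at it before starting the daemon; a minimal sketch (the path here is illustrative):

```shell
# Use a config file outside the default location (path is illustrative):
export OPENFANG_CONFIG="/etc/openfang/config.toml"
```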
## Core Configuration Sections
- **Models**: configure LLM providers, model routing, and fallback chains
- **Providers**: set up API keys and endpoints for 27 LLM providers
- **Channels**: connect to 40+ messaging platforms and configure behavior
- **Security**: enable authentication, rate limiting, and security features
## Minimal Configuration
The absolute minimum configuration requires only a default model:
```toml
[default_model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key_env = "ANTHROPIC_API_KEY"
```
Set the API key as an environment variable:
```shell
export ANTHROPIC_API_KEY="sk-ant-..."
openfang start
```
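Because OpenFang reads the key from the environment rather than from the file, a missing export is a common failure mode. A plain-shell pre-flight check (not an OpenFang command) that aborts with a clear message if the variable named by `api_key_env` is unset:

```shell
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder value for illustration

# ${VAR:?message} makes the shell exit with the message if VAR is unset or empty:
: "${ANTHROPIC_API_KEY:?export ANTHROPIC_API_KEY before running 'openfang start'}"
```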
## Full Configuration Example
```toml
# API server settings
api_key = ""                      # Set to enable Bearer auth (recommended)
api_listen = "127.0.0.1:50051"    # HTTP API bind address (use 0.0.0.0 for public)

[default_model]
provider = "anthropic"                    # "anthropic", "gemini", "openai", "groq", "ollama", etc.
model = "claude-sonnet-4-20250514"        # Model identifier
api_key_env = "ANTHROPIC_API_KEY"         # Environment variable holding the API key
# base_url = "https://api.anthropic.com"  # Optional: override API endpoint

[memory]
decay_rate = 0.05    # Memory confidence decay rate
# sqlite_path = "~/.openfang/data/openfang.db"  # Optional: custom DB path

[network]
listen_addr = "127.0.0.1:4200"    # OFP listen address
# shared_secret = ""              # Required for P2P authentication

# Session compaction (LLM-based context management)
[compaction]
threshold = 80              # Compact when messages exceed this count
keep_recent = 20            # Keep this many recent messages after compaction
max_summary_tokens = 1024   # Max tokens for the LLM summary

# Usage tracking display
usage_footer = "Full"       # "Off", "Tokens", "Cost", or "Full"

# Channel adapters (configure tokens via environment variables)
[telegram]
bot_token_env = "TELEGRAM_BOT_TOKEN"
allowed_users = []    # Empty = allow all

[discord]
bot_token_env = "DISCORD_BOT_TOKEN"
guild_ids = []        # Empty = all guilds

[slack]
bot_token_env = "SLACK_BOT_TOKEN"
app_token_env = "SLACK_APP_TOKEN"

# MCP server connections
[[mcp_servers]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```
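`[[mcp_servers]]` is a TOML array of tables, so additional servers are registered by appending further `[[mcp_servers]]` entries. A sketch of a second entry (the server name and package here are hypothetical, not part of the example above):

```toml
# A hypothetical second MCP server, appended as another array-of-tables entry:
[[mcp_servers]]
name = "search"
command = "npx"
args = ["-y", "some-mcp-search-server"]
```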
## Environment Variables
OpenFang uses environment variables for sensitive credentials:
### API Keys
```shell
# LLM providers
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GEMINI_API_KEY="..."
export GROQ_API_KEY="gsk_..."

# Channel adapters
export TELEGRAM_BOT_TOKEN="..."
export DISCORD_BOT_TOKEN="..."
export SLACK_BOT_TOKEN="xoxb-..."
export SLACK_APP_TOKEN="xapp-..."

# Web services
export BRAVE_API_KEY="..."
export TAVILY_API_KEY="..."
```
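If you rely on several of these variables, it can help to verify them up front. A small bash helper sketch (`missing_keys` is illustrative, not an OpenFang command) that prints the names of any credential variables that are not yet set:

```shell
# List the credential variables (given as arguments) that are unset or empty.
missing_keys() {
  local var
  for var in "$@"; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var
    [ -z "${!var:-}" ] && echo "$var"
  done
  return 0
}
```

For example, `missing_keys ANTHROPIC_API_KEY TELEGRAM_BOT_TOKEN` prints each missing name on its own line.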
Never hardcode API keys in `config.toml`. Always use environment variables referenced via `api_key_env` fields.
## Hot Reload
OpenFang can automatically reload configuration changes without restarting:
```toml
[reload]
mode = "hybrid"      # "off", "restart", "hot", or "hybrid"
debounce_ms = 500    # Wait time before applying changes
```
### Reload Modes
- `off`: no automatic reloading (default for production)
- `restart`: full daemon restart on config change
- `hot`: hot-reload safe sections only (channels, skills, heartbeat)
- `hybrid`: hot-reload where possible; flag restart-required otherwise
Some settings, such as `api_listen` and the security settings, require a full restart and cannot be hot-reloaded.
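For production deployments that prefer predictability over convenience, automatic reloading can simply be disabled; a minimal fragment:

```toml
[reload]
mode = "off"    # apply config changes only on a manual restart
```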
## Validation
Validate your configuration before starting:
View merged configuration (includes defaults):
## Configuration Hierarchy
Configuration is loaded with the following precedence (highest to lowest):

1. Environment variables (for API keys)
2. Command-line flags (e.g., `--api-listen`)
3. `config.toml` (user configuration)
4. Built-in defaults (in code)
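The precedence rules can be illustrated with a small shell sketch (the `resolve_api_key` helper is hypothetical, not part of OpenFang): an exported environment variable wins over whatever `config.toml` supplies.

```shell
# Return the effective API key: the environment variable wins; otherwise
# fall back to the value read from config.toml (passed as $1).
resolve_api_key() {
  local config_value="$1"
  echo "${ANTHROPIC_API_KEY:-$config_value}"
}
```

With `ANTHROPIC_API_KEY` exported, `resolve_api_key "sk-from-file"` echoes the exported value; with it unset, the file's value is used.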
## Next Steps
- **Configure Models**: set up LLM providers and model routing
- **Setup Providers**: configure all 27 supported LLM providers
- **Enable Channels**: connect messaging platforms
- **Secure Your Instance**: enable authentication and security features