## Quick Diagnostics
Run the built-in diagnostic tool first. It checks that:

- Configuration file exists and is valid TOML
- API keys are set in environment
- Database is accessible
- Daemon status (running or not)
- Port availability
- Tool dependencies (Python, signal-cli, etc.)
### Check Daemon Status
### Check Health via API
### View Logs
OpenFang uses `tracing` for structured logging. Set the log level via an environment variable.
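For example, to raise verbosity before starting the daemon (`RUST_LOG` is the standard variable read by tracing's env-filter):

```shell
# set the tracing log level; takes effect the next time the daemon starts
export RUST_LOG=debug
```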
## Installation Issues
### cargo install fails with compilation errors

Make sure your Rust toolchain meets the minimum version (1.75+, per the system requirements below); run `rustup update` and retry.
### openfang command not found after install
Ensure `~/.cargo/bin` is in your `PATH`.
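If it is not, add cargo's bin directory (append the line to your shell profile to make it permanent):

```shell
# make binaries installed by `cargo install` resolvable
export PATH="$HOME/.cargo/bin:$PATH"
```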
### Docker container won't start

Common causes:
- No API key provided: pass one, e.g. `docker run -e GROQ_API_KEY=... ghcr.io/RightNow-AI/openfang`
- Port already in use: change the port mapping, e.g. `-p 3001:4200`
- Permission denied on volume mount: check directory permissions
## Configuration Issues
### Config file not found
Run `openfang init` to create the default config. This writes `~/.openfang/config.toml` with sensible defaults.
### Missing API key warnings on start

Set the provider's API key in your environment (e.g., `GROQ_API_KEY`).
### Config validation errors

Common causes:
- Malformed TOML syntax (use a TOML validator)
- Invalid port numbers (must be 1-65535)
- Missing required fields in channel configs
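A fragment that satisfies these constraints might look like the sketch below. The section and key names here are illustrative assumptions; only the port range and the `[channels]` section are stated in these docs.

```toml
# illustrative config fragment -- section/key names are assumptions
[server]
port = 4200              # must be in the range 1-65535

[channels.telegram]      # each channel entry needs its required fields
enabled = true
```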
### Port already in use

Another process is bound to the configured port; stop it or change the port in `~/.openfang/config.toml`.
## LLM Provider Issues
### Authentication failed / 401 errors

Common causes:
- API key not set or incorrect
- API key expired or revoked
- Wrong env var name
### Rate limited / 429 errors
- The driver automatically retries with exponential backoff
- Reduce `max_llm_tokens_per_hour` in agent capabilities
- Switch to a provider with higher rate limits
- Use multiple providers with model routing
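Lowering the hourly token budget is done in the agent's capabilities; a sketch (the surrounding section name is assumed, the key is the one named above):

```toml
[capabilities]
max_llm_tokens_per_hour = 50000   # illustrative value; lower it to stay under provider limits
```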
### Slow responses

Common causes:
- Provider API latency (try Groq for fast inference)
- Large context window (use `/compact` to shrink the session)
- Complex tool chains (check the iteration count in the response)
### Model not found

Verify that the model name exists in the catalog for your configured provider.
### Ollama / local models not connecting

Confirm `ollama serve` is running and the model has been pulled (`ollama pull llama3.2`).
## Channel Issues
### Telegram bot not responding

Check that:
- Bot token is correct: `echo $TELEGRAM_BOT_TOKEN`
- Bot has been started (send `/start` in Telegram)
- If `allowed_users` is set, your Telegram user ID is in the list
- Check logs for "Telegram adapter" messages
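A sketch of the relevant channel entry (key names other than `allowed_users` are assumptions; the bot token itself comes from the environment):

```toml
[channels.telegram]
enabled = true
allowed_users = [123456789]   # numeric Telegram user IDs permitted to talk to the bot
# the token is read from the TELEGRAM_BOT_TOKEN environment variable
```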
### Discord bot offline

Check that:
- Bot token is correct
- Message Content Intent is enabled in Discord Developer Portal
- Bot has been invited to the server with correct permissions
- Check Gateway connection in logs
### Slack bot not receiving messages

Check that:
- Both `SLACK_BOT_TOKEN` (xoxb-) and `SLACK_APP_TOKEN` (xapp-) are set
- Socket Mode is enabled in the Slack app settings
- Bot has been added to the channels it should monitor
- Required scopes: `chat:write`, `app_mentions:read`, `im:history`, `im:read`, `im:write`
### Webhook-based channels (WhatsApp, LINE, Viber, etc.)

Check that:
- Your server is publicly accessible (or use a tunnel like ngrok)
- Webhook URL is correctly configured in the platform dashboard
- Webhook port is open and not blocked by firewall
- The verify token matches between the config and the platform dashboard
### Channel adapter failed to start

Common causes:
- Missing or invalid token
- Port already in use (for webhook-based channels)
- Network connectivity issues
## Agent Issues
### Agent stuck in a loop

Loop detection is built in:
- Warn at 3 identical tool calls
- Block at 5 identical tool calls
- Circuit breaker at 30 total blocked calls (stops the agent)
If it keeps looping, stop the agent manually with `/stop`.
### Agent running out of context
Use `/compact` to shrink the session. Auto-compaction is enabled by default when the session reaches the threshold (configurable in `[compaction]`).
### Agent not using tools

Check that the required tools are granted in the agent's capabilities.
### Permission denied errors in agent responses

The agent's capabilities must grant:
- `tools = [...]` for tool access
- `network = ["*"]` for network access
- `memory_write = ["self.*"]` for memory writes
- `shell = ["*"]` for shell commands (use with caution)
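In manifest form, the grants above might look like this sketch. The `[capabilities]` section name and the example tool name are assumptions; the keys are the ones listed above.

```toml
[capabilities]
tools = ["web_search"]       # hypothetical tool name; list the tools the agent may call
network = ["*"]              # allow all outbound network access
memory_write = ["self.*"]    # allow writes to the agent's own memory namespace
shell = ["*"]                # allow all shell commands -- use with caution
```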
### Agent spawning fails

Check that:
- The TOML manifest is valid: `openfang agent spawn --dry-run manifest.toml`
- The LLM provider is configured and has a valid key
- The model specified in the manifest exists in the catalog
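A minimal manifest to sanity-check with `--dry-run` might look like the sketch below. The field and model names are illustrative assumptions; only the `[model]` section and capability keys appear elsewhere in these docs.

```toml
# validate without spawning:
#   openfang agent spawn --dry-run manifest.toml
name = "coder"                    # illustrative

[model]
provider = "groq"                 # must have a valid API key configured
model = "llama-3.1-8b-instant"    # illustrative; must exist in the model catalog

[capabilities]
tools = []
```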
## API Issues
### 401 Unauthorized
### 429 Too Many Requests
Wait out the `Retry-After` period, or increase the rate limits in config.
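A hedged sketch of raising the limit; both the section and key names here are assumptions, so check the configuration reference for the real ones:

```toml
[api]
rate_limit_per_minute = 120   # hypothetical key: raise the per-client request budget
```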
### CORS errors from browser
### WebSocket disconnects

Common causes:
- Idle timeout (send periodic pings)
- Network interruption (reconnect automatically)
- Agent crashed (check logs)
### OpenAI-compatible API not working with my tool

Check that:
- Use `POST /v1/chat/completions` (not `/api/agents/{id}/message`)
- Set the model to `openfang:agent-name` (e.g., `openfang:coder`)
- Streaming: set `"stream": true` for SSE responses
- Images: use `image_url` with the `data:image/png;base64,...` format
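Putting those together, a request against a local daemon might look like this sketch. The host and internal port 4200 are assumptions carried over from the Docker example above; the `curl` line is commented out because it needs a running daemon.

```shell
# build a chat-completions payload targeting an agent named "coder"
cat > /tmp/openfang_req.json <<'EOF'
{
  "model": "openfang:coder",
  "stream": false,
  "messages": [
    {"role": "user", "content": "hello"}
  ]
}
EOF

# send it to the OpenAI-compatible endpoint (uncomment with the daemon running):
# curl -sS -X POST http://localhost:4200/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d @/tmp/openfang_req.json
```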
## Desktop App Issues
### App won't start
- Only one instance can run at a time (single-instance enforcement)
- Check if the daemon is already running on the same ports
- Try deleting `~/.openfang/daemon.json` and restarting
### White/blank screen in app
### System tray icon missing
- Linux: requires a system tray (e.g., `libappindicator` on GNOME)
- macOS: should work out of the box
- Windows: check notification area settings; you may need to show hidden icons
## Performance
### High memory usage

Mitigations:
- Reduce the number of concurrent agents
- Use session compaction for long-running agents
- Use smaller models (Llama 8B instead of 70B for simple tasks)
- Clear old sessions: `DELETE /api/sessions/{id}`
### Slow startup
- Check the database size (`~/.openfang/data/openfang.db`)
- Reduce the number of enabled channels
- Check network connectivity (MCP server connections happen at boot)
### High CPU usage

Possible sources:
- WASM sandbox execution (fuel-limited, should self-terminate)
- Multiple agents running simultaneously
- Channel adapters reconnecting (exponential backoff)
## FAQ
### How do I switch the default LLM provider?
Edit the provider settings in `~/.openfang/config.toml`.
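For example (the section layout, provider, and model values here are illustrative assumptions; check the configuration reference for the exact keys):

```toml
[model]
provider = "groq"
model = "llama-3.1-8b-instant"   # illustrative model name
```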
### Can I use multiple providers at the same time?
Yes. Each agent selects its own provider and model in its `[model]` section; the kernel creates a dedicated driver per unique provider configuration.
### How do I add a new channel?
- Add the channel config to `~/.openfang/config.toml` under `[channels]`
- Set the required environment variables (tokens, secrets)
- Restart the daemon
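For example, enabling Discord might look like this sketch (the key names and the environment variable name are assumptions, by analogy with the Telegram and Slack variables above):

```toml
[channels.discord]
enabled = true
# then: export DISCORD_BOT_TOKEN=...  and restart the daemon
```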
### How do I update OpenFang?

Reinstall using the same method you originally installed with (e.g., `cargo install` or pulling the latest Docker image).
### Can agents talk to each other?
Yes. Agents use the `agent_send`, `agent_spawn`, `agent_find`, and `agent_list` tools to communicate. The orchestrator template is specifically designed for multi-agent delegation.
### Is my data sent to the cloud?
Only requests to your configured LLM provider leave the machine; all agent data, memory, and sessions stay in a local database (`~/.openfang/data/openfang.db`). The OFP wire protocol uses HMAC-SHA256 mutual authentication for P2P communication.
### How do I back up my data?

Copy these paths:
- `~/.openfang/config.toml` (configuration)
- `~/.openfang/data/openfang.db` (all agent data, memory, sessions)
- `~/.openfang/skills/` (installed skills)
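A quick way to snapshot all three (a sketch; adjust the destination to taste, and note that paths which do not exist yet are skipped silently):

```shell
# copy OpenFang state into a dated backup directory
BACKUP_DIR="openfang-backup-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
cp -r ~/.openfang/config.toml ~/.openfang/data ~/.openfang/skills "$BACKUP_DIR" 2>/dev/null || true
```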
### How do I reset everything?

Stop the daemon, delete the `~/.openfang/` directory, and run `openfang init` to start fresh.
### Can I run OpenFang without an internet connection?

Yes, with a local model server:
- Ollama: `ollama serve` + `ollama pull llama3.2`
- vLLM: self-hosted model server
- LM Studio: GUI-based local model runner
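Pointing OpenFang at Ollama might look like this sketch (the section and key names are assumptions; 11434 is Ollama's standard default port):

```toml
[model]
provider = "ollama"
model = "llama3.2"                    # pulled via `ollama pull llama3.2`
base_url = "http://localhost:11434"   # hypothetical key for the local endpoint
```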
### What's the difference between OpenFang and OpenClaw?
| Aspect | OpenFang | OpenClaw |
|---|---|---|
| Language | Rust | Python |
| Channels | 40 | 38 |
| Skills | 60 | 57 |
| Providers | 20 | 3 |
| Security | 16 systems | Config-based |
| Binary size | ~30 MB | ~200 MB |
| Startup | <200 ms | ~3 s |
To migrate from OpenClaw, run `openfang migrate --from openclaw`.
### How do I report a bug or request a feature?
- Bugs: Open an issue on GitHub
- Security: See SECURITY.md for responsible disclosure
- Features: Open a GitHub discussion or PR
### What are the system requirements?
| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 128 MB | 512 MB |
| Disk | 50 MB (binary) | 500 MB (with data) |
| CPU | Any x86_64/ARM64 | 2+ cores |
| OS | Linux, macOS, Windows | Any |
| Rust | 1.75+ (build only) | Latest stable |
### How do I enable debug logging for a specific crate?

Use `RUST_LOG` with tracing's env-filter syntax.
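For example (the crate name below is an example, matching the `openfang-kernel` crate mentioned in the next answer; env-filter uses underscores in crate names):

```shell
# debug logs for the kernel crate only, info everywhere else
export RUST_LOG="openfang_kernel=debug,info"
```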
### Can I use OpenFang as a library?
Yes. The `openfang-kernel` crate assembles everything, but you can use individual crates for custom integrations.