Common issues, diagnostics, and answers to frequently asked questions about OpenFang.

Quick Diagnostics

Run the built-in diagnostic tool first:
openfang doctor
This checks:
  • Configuration file exists and is valid TOML
  • API keys are set in environment
  • Database is accessible
  • Daemon status (running or not)
  • Port availability
  • Tool dependencies (Python, signal-cli, etc.)

Check Daemon Status

openfang status

Check Health via API

curl http://127.0.0.1:4200/api/health
curl http://127.0.0.1:4200/api/health/detail  # Requires auth

View Logs

OpenFang uses tracing for structured logging. Set the log level via environment:
RUST_LOG=info openfang start          # Default
RUST_LOG=debug openfang start         # Verbose
RUST_LOG=openfang=debug openfang start  # Only OpenFang debug, deps at info

Installation Issues

Cause: Rust toolchain too old or missing system dependencies.
Fix:
rustup update stable
rustup default stable
rustc --version  # Need 1.75+
On Linux, you may also need:
# Debian/Ubuntu
sudo apt install pkg-config libssl-dev libsqlite3-dev

# Fedora
sudo dnf install openssl-devel sqlite-devel
Fix: Ensure ~/.cargo/bin is in your PATH:
export PATH="$HOME/.cargo/bin:$PATH"
# Add to ~/.bashrc or ~/.zshrc to persist
Common causes:
  • No API key provided: docker run -e GROQ_API_KEY=... ghcr.io/RightNow-AI/openfang
  • Port already in use: change the port mapping -p 3001:4200
  • Permission denied on volume mount: check directory permissions

Configuration Issues

Fix: Run openfang init to create the default config:
openfang init
This creates ~/.openfang/config.toml with sensible defaults.
Cause: No LLM provider API key found in environment.
Fix: Set at least one provider key:
export GROQ_API_KEY="gsk_..."     # Groq (free tier available)
# OR
export ANTHROPIC_API_KEY="sk-ant-..."
# OR
export OPENAI_API_KEY="sk-..."
Add to your shell profile to persist across sessions.
Run validation manually:
openfang config show
Common issues:
  • Malformed TOML syntax (use a TOML validator)
  • Invalid port numbers (must be 1-65535)
  • Missing required fields in channel configs
Fix: Change the port in config or kill the existing process:
# Change API port in config.toml:
# [api]
# listen_addr = "127.0.0.1:3001"

# Or find and kill the process using the port
# Linux/macOS:
lsof -i :4200
# Windows:
netstat -aon | findstr :4200

LLM Provider Issues

Causes:
  • API key not set or incorrect
  • API key expired or revoked
  • Wrong env var name
Fix: Verify your key:
# Check if the env var is set
echo $GROQ_API_KEY

# Test the provider
curl http://127.0.0.1:4200/api/providers/groq/test -X POST
Cause: Too many requests to the LLM provider.
Fix:
  • The driver automatically retries with exponential backoff
  • Reduce max_llm_tokens_per_hour in agent capabilities
  • Switch to a provider with higher rate limits
  • Use multiple providers with model routing
Possible causes:
  • Provider API latency (try Groq for fast inference)
  • Large context window (use /compact to shrink session)
  • Complex tool chains (check iteration count in response)
Fix: Use per-agent model overrides to run simple agents on faster models:
[model]
provider = "groq"
model = "llama-3.1-8b-instant"  # Fast, small model
Fix: Check available models:
curl http://127.0.0.1:4200/api/models
Or use an alias:
[model]
model = "llama"  # Alias for llama-3.3-70b-versatile
See the full alias list:
curl http://127.0.0.1:4200/api/models/aliases
Fix: Ensure the local server is running:
# Ollama
ollama serve  # Default: http://localhost:11434

# vLLM
python -m vllm.entrypoints.openai.api_server --model ...

# LM Studio
# Start from the LM Studio UI, enable API server

Channel Issues

Checklist:
  1. Bot token is correct: echo $TELEGRAM_BOT_TOKEN
  2. Bot has been started (send /start in Telegram)
  3. If allowed_users is set, your Telegram user ID is in the list
  4. Check logs for “Telegram adapter” messages
Checklist:
  1. Bot token is correct
  2. Message Content Intent is enabled in Discord Developer Portal
  3. Bot has been invited to the server with correct permissions
  4. Check Gateway connection in logs
Checklist:
  1. Both SLACK_BOT_TOKEN (xoxb-) and SLACK_APP_TOKEN (xapp-) are set
  2. Socket Mode is enabled in the Slack app settings
  3. Bot has been added to the channels it should monitor
  4. Required scopes: chat:write, app_mentions:read, im:history, im:read, im:write
Checklist:
  1. Your server is publicly accessible (or use a tunnel like ngrok)
  2. Webhook URL is correctly configured in the platform dashboard
  3. Webhook port is open and not blocked by firewall
  4. Verify token matches between config and platform dashboard
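For item 4, tokens should be compared in constant time; a generic sketch (not OpenFang's actual verification code) using Python's `hmac.compare_digest`:

```python
import hmac

def tokens_match(configured: str, received: str) -> bool:
    """Constant-time comparison avoids leaking prefix matches via timing."""
    return hmac.compare_digest(configured.encode(), received.encode())

assert tokens_match("s3cret", "s3cret")
# A trailing space or newline from copy-paste is a classic silent mismatch.
assert not tokens_match("s3cret", "s3cret ")
```

When webhook verification fails, check for exactly this kind of invisible whitespace difference between the config value and the platform dashboard.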
Common causes:
  • Missing or invalid token
  • Port already in use (for webhook-based channels)
  • Network connectivity issues
Check logs for the specific error:
RUST_LOG=openfang_channels=debug openfang start

Agent Issues

Cause: The agent is repeatedly calling the same tool with the same parameters.
Automatic protection: OpenFang has a built-in loop guard:
  • Warn at 3 identical tool calls
  • Block at 5 identical tool calls
  • Circuit breaker at 30 total blocked calls (stops the agent)
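The thresholds above can be modeled as a small counter keyed by (tool, arguments); this is an illustrative sketch of the behavior, not OpenFang's actual implementation:

```python
from collections import Counter

WARN_AT, BLOCK_AT, BREAKER_AT = 3, 5, 30  # thresholds from the docs

class LoopGuard:
    """Illustrative loop guard: warn on, then block, repeated identical calls."""
    def __init__(self):
        self.calls = Counter()   # count per identical (tool, args) pair
        self.total_blocked = 0   # feeds the circuit breaker

    def check(self, tool: str, args: str) -> str:
        self.calls[(tool, args)] += 1
        n = self.calls[(tool, args)]
        if n >= BLOCK_AT:
            self.total_blocked += 1
            if self.total_blocked >= BREAKER_AT:
                return "circuit_break"  # stop the agent entirely
            return "block"
        if n >= WARN_AT:
            return "warn"
        return "allow"

guard = LoopGuard()
results = [guard.check("file_read", '{"path": "/tmp/x"}') for _ in range(5)]
assert results == ["allow", "allow", "warn", "warn", "block"]
```

Changing any argument resets the pattern, which is why agents that genuinely need repeated calls (e.g. polling) should vary their inputs.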
Manual fix: Cancel the agent’s current run:
curl -X POST http://127.0.0.1:4200/api/agents/{id}/stop
Or via chat command: /stop
Cause: Conversation history is too long for the model’s context window.
Fix: Compact the session:
curl -X POST http://127.0.0.1:4200/api/agents/{id}/session/compact
Or via chat command: /compact
Auto-compaction is enabled by default when the session reaches the threshold (configurable in [compaction]).
Cause: Tools not granted in the agent’s capabilities.
Fix: Check the agent’s manifest:
[capabilities]
tools = ["file_read", "web_fetch", "shell_exec"]  # Must list each tool
# OR
# tools = ["*"]  # Grant all tools (use with caution)
Cause: The agent is trying to use a tool or access a resource not in its capabilities.
Fix: Add the required capability to the agent manifest. Common ones:
  • tools = [...] for tool access
  • network = ["*"] for network access
  • memory_write = ["self.*"] for memory writes
  • shell = ["*"] for shell commands (use with caution)
Check:
  1. TOML manifest is valid: openfang agent spawn --dry-run manifest.toml
  2. LLM provider is configured and has a valid key
  3. Model specified in manifest exists in the catalog

API Issues

Cause: API key required but not provided.
Fix: Include the Bearer token:
curl -H "Authorization: Bearer your-api-key" http://127.0.0.1:4200/api/agents
Cause: GCRA rate limiter triggered.
Fix: Wait for the Retry-After period, or increase rate limits in config:
[api]
rate_limit_per_second = 20  # Increase if needed
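GCRA (the "leaky bucket as meter" algorithm) tracks a theoretical arrival time per client rather than a token count. A minimal sketch with an injected clock, to show why short bursts pass while sustained over-rate traffic gets 429s (illustrative, not the server's code):

```python
class Gcra:
    """Minimal GCRA limiter: `rate` requests/second with `burst` headroom."""
    def __init__(self, rate: float, burst: int):
        self.interval = 1.0 / rate               # emission interval T
        self.tau = (burst - 1) * self.interval   # burst tolerance
        self.tat = 0.0                           # theoretical arrival time

    def allow(self, now: float) -> bool:
        tat = max(self.tat, now)
        if tat - now > self.tau:
            return False  # over budget: the server would reply 429 + Retry-After
        self.tat = tat + self.interval
        return True

limiter = Gcra(rate=2.0, burst=2)   # 2 req/s, burst of 2
assert limiter.allow(0.0)           # first request fits in the burst
assert limiter.allow(0.0)           # second request drains the burst
assert not limiter.allow(0.0)       # third immediate request is limited
assert limiter.allow(1.0)           # capacity recovers as time passes
```

Raising `rate_limit_per_second` widens the emission interval budget; clients should still honor Retry-After rather than hammering the endpoint.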
Cause: Trying to access the API from a different origin.
Fix: Add your origin to the CORS config:
[api]
cors_origins = ["http://localhost:5173", "https://your-app.com"]
Possible causes:
  • Idle timeout (send periodic pings)
  • Network interruption (reconnect automatically)
  • Agent crashed (check logs)
Client-side fix: Implement reconnection logic with exponential backoff.
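A common shape for that reconnection logic is exponentially growing delays with jitter and a cap; a framework-agnostic sketch (the constants are illustrative, not OpenFang defaults):

```python
import random

def backoff_delays(base: float = 0.5, cap: float = 30.0, factor: float = 2.0):
    """Yield reconnect delays: base, base*2, base*4, ... capped at `cap`,
    with up to 20% random jitter to avoid synchronized reconnect storms."""
    delay = base
    while True:
        yield delay * (1 + random.uniform(0, 0.2))
        delay = min(delay * factor, cap)

delays = backoff_delays()
first = [next(delays) for _ in range(8)]
assert first[0] < first[-1] <= 30.0 * 1.2  # grows, then stays capped
```

In a client, sleep `next(delays)` after each failed connect and create a fresh generator after a successful connection so the delay resets to `base`.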
Checklist:
  1. Use POST /v1/chat/completions (not /api/agents/{id}/message)
  2. Set the model to openfang:agent-name (e.g., openfang:coder)
  3. Streaming: set "stream": true for SSE responses
  4. Images: use image_url with data:image/png;base64,... format
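Putting the checklist together, a request body for the OpenAI-compatible endpoint might look like this (the agent name `coder` and the image bytes are placeholders, not real values):

```python
import base64
import json

# Placeholder bytes stand in for a real PNG image.
fake_png = base64.b64encode(b"\x89PNG...").decode()

payload = {
    "model": "openfang:coder",   # openfang:<agent-name>
    "stream": True,              # SSE streaming response
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{fake_png}"}},
        ],
    }],
}
body = json.dumps(payload)  # POST this to /v1/chat/completions
assert '"openfang:coder"' in body
```

Send it with any OpenAI-compatible client by pointing the base URL at the OpenFang server and setting the model string as above.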

Desktop App Issues

Checklist:
  1. Only one instance can run at a time (single-instance enforcement)
  2. Check if the daemon is already running on the same ports
  3. Try deleting ~/.openfang/daemon.json and restarting
Cause: The embedded API server hasn’t started yet.
Fix: Wait a few seconds. If the problem persists, check logs for server startup errors.
Platform-specific:
  • Linux: Requires a system tray (e.g., libappindicator on GNOME)
  • macOS: Should work out of the box
  • Windows: Check notification area settings, may need to show hidden icons

Performance

Tips:
  • Reduce the number of concurrent agents
  • Use session compaction for long-running agents
  • Use smaller models (Llama 8B instead of 70B for simple tasks)
  • Clear old sessions: DELETE /api/sessions/{id}
Normal startup: <200 ms for the kernel, ~1-2 s with channel adapters.
If slower:
  • Check database size (~/.openfang/data/openfang.db)
  • Reduce the number of enabled channels
  • Check network connectivity (MCP server connections happen at boot)
Possible causes:
  • WASM sandbox execution (fuel-limited, should self-terminate)
  • Multiple agents running simultaneously
  • Channel adapters reconnecting (exponential backoff)

FAQ

Edit ~/.openfang/config.toml:
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"
api_key_env = "GROQ_API_KEY"
Yes. Each agent can use a different provider via its manifest [model] section. The kernel creates a dedicated driver per unique provider configuration.
  1. Add the channel config to ~/.openfang/config.toml under [channels]
  2. Set the required environment variables (tokens, secrets)
  3. Restart the daemon
# From source
cd openfang && git pull && cargo install --path crates/openfang-cli

# Docker
docker pull ghcr.io/RightNow-AI/openfang:latest
Yes. Agents can use the agent_send, agent_spawn, agent_find, and agent_list tools to communicate. The orchestrator template is specifically designed for multi-agent delegation.
Only LLM API calls go to the provider’s servers. All agent data, memory, sessions, and configuration are stored locally in SQLite (~/.openfang/data/openfang.db). The OFP wire protocol uses HMAC-SHA256 mutual authentication for P2P communication.
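As a rough illustration of HMAC-SHA256 message authentication (the actual OFP handshake and framing are not shown here), both peers hold a shared secret and verify tags in constant time:

```python
import hashlib
import hmac
import os

shared_key = os.urandom(32)  # both peers hold this secret

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag for a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"agent_send: hello"
tag = sign(shared_key, msg)
# Receiver recomputes the tag and compares in constant time.
assert hmac.compare_digest(tag, sign(shared_key, msg))
# A tampered message fails verification.
assert not hmac.compare_digest(tag, sign(shared_key, msg + b"!"))
```

The point of the mutual scheme is that each side proves knowledge of the key without ever transmitting it.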
Back up these files:
  • ~/.openfang/config.toml (configuration)
  • ~/.openfang/data/openfang.db (all agent data, memory, sessions)
  • ~/.openfang/skills/ (installed skills)
rm -rf ~/.openfang
openfang init  # Start fresh
Yes, if you use a local LLM provider:
  • Ollama: ollama serve + ollama pull llama3.2
  • vLLM: Self-hosted model server
  • LM Studio: GUI-based local model runner
Set the provider in config:
[default_model]
provider = "ollama"
model = "llama3.2"
| Aspect | OpenFang | OpenClaw |
| --- | --- | --- |
| Language | Rust | Python |
| Channels | 40 | 38 |
| Skills | 60 | 57 |
| Providers | 20 | 3 |
| Security | 16 systems | Config-based |
| Binary size | ~30 MB | ~200 MB |
| Startup | <200 ms | ~3 s |
OpenFang can import OpenClaw configs: openfang migrate --from openclaw
  • Bugs: Open an issue on GitHub
  • Security: See SECURITY.md for responsible disclosure
  • Features: Open a GitHub discussion or PR
| Resource | Minimum | Recommended |
| --- | --- | --- |
| RAM | 128 MB | 512 MB |
| Disk | 50 MB (binary) | 500 MB (with data) |
| CPU | Any x86_64/ARM64 | 2+ cores |
| OS | Linux, macOS, Windows | Any |
| Rust | 1.75+ (build only) | Latest stable |
Enable per-crate debug logging via RUST_LOG:
RUST_LOG=openfang_runtime=debug,openfang_channels=info openfang start
Yes. Each crate is independently usable:
[dependencies]
openfang-runtime = { path = "crates/openfang-runtime" }
openfang-memory = { path = "crates/openfang-memory" }
The openfang-kernel crate assembles everything, but you can use individual crates for custom integrations.
Still having issues? Join the Discord community or open a GitHub issue.