Thank you for your interest in contributing to OpenFang. This guide covers everything you need to get started, from setting up your development environment to submitting pull requests.

Development Environment

Prerequisites

  • Rust 1.75+: install via rustup
  • Git: version control for source code
  • Python 3.8+: optional, for the Python runtime and skills
  • LLM API key: for end-to-end testing (Groq, OpenAI, Anthropic)

Clone and Build

git clone https://github.com/RightNow-AI/openfang.git
cd openfang
cargo build
The first build takes a few minutes because it compiles SQLite (bundled) and Wasmtime. Subsequent builds are incremental.

Environment Variables

For running integration tests that hit a real LLM, set at least one provider key:
export GROQ_API_KEY=gsk_...          # Recommended for fast, free-tier testing
export ANTHROPIC_API_KEY=sk-ant-...  # For Anthropic-specific tests
Tests that require a real LLM key will skip gracefully if the env var is absent.
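The graceful-skip behavior can be sketched as a small helper; the function name and the exact key list checked here are illustrative, not taken from the codebase:

```rust
use std::env;

/// Returns the first configured provider key, if any. An integration test can
/// call this and return early (skipping gracefully) when no key is present.
/// Helper name and key list are illustrative, not from the codebase.
pub fn provider_key() -> Option<(&'static str, String)> {
    ["GROQ_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"]
        .iter()
        .find_map(|name| env::var(name).ok().map(|v| (*name, v)))
}
```

A test would then start with something like `let Some((_, key)) = provider_key() else { return; };` so CI without keys stays green.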

Building and Testing

Build the Entire Workspace

cargo build --workspace

Run All Tests

cargo test --workspace
The test suite currently comprises 1,744+ tests. All must pass before merging.

Run Tests for a Single Crate

cargo test -p openfang-kernel
cargo test -p openfang-runtime
cargo test -p openfang-memory

Check for Clippy Warnings

cargo clippy --workspace --all-targets -- -D warnings
The CI pipeline enforces zero clippy warnings.

Format Code

cargo fmt --all
Always run cargo fmt before committing. CI will reject unformatted code.

Run the Doctor Check

After building, verify your local setup:
cargo run -- doctor

Code Style

Formatting

  • Use rustfmt with default settings
  • Run cargo fmt --all before every commit

Linting

  • cargo clippy --workspace -- -D warnings must pass with zero warnings

Documentation

  • All public types and functions must have doc comments (///)

Error Handling

  • Use thiserror for error types
  • Avoid unwrap() in library code; prefer ? propagation
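In practice you would write `#[derive(Debug, thiserror::Error)]` with `#[error(...)]` and `#[from]` attributes; the std-only sketch below writes out the equivalent impls by hand (so it stays dependency-free) and shows the `?` propagation the guideline asks for. The `AgentError` name and variants are illustrative:

```rust
use std::{fmt, io};

// In real code: #[derive(Debug, thiserror::Error)]; thiserror generates
// the Display, Error, and From impls written out manually below.
#[derive(Debug)]
pub enum AgentError {
    NotFound(String),
    Io(io::Error),
}

impl fmt::Display for AgentError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AgentError::NotFound(name) => write!(f, "agent not found: {name}"),
            AgentError::Io(e) => write!(f, "io error: {e}"),
        }
    }
}

impl std::error::Error for AgentError {}

impl From<io::Error> for AgentError {
    fn from(e: io::Error) -> Self {
        AgentError::Io(e)
    }
}

/// Library code propagates errors with `?` instead of calling unwrap().
pub fn read_manifest(path: &str) -> Result<String, AgentError> {
    Ok(std::fs::read_to_string(path)?)
}
```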

Naming Conventions

| Type              | Convention           | Example                         |
|-------------------|----------------------|---------------------------------|
| Types             | PascalCase           | OpenFangKernel, AgentManifest   |
| Functions/methods | snake_case           | spawn_agent, get_status         |
| Constants         | SCREAMING_SNAKE_CASE | MAX_RETRIES, DEFAULT_PORT       |
| Crate names       | kebab-case           | openfang-kernel, openfang-api   |

Dependencies

Workspace dependencies are declared in the root Cargo.toml. Prefer reusing workspace deps over adding new ones. If you need a new dependency, justify it in the PR.
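The pattern looks like this (an illustrative sketch; the `serde` entry is hypothetical, not a claim about the actual root Cargo.toml):

```toml
# Root Cargo.toml: declare the version once for the whole workspace.
[workspace.dependencies]
serde = { version = "1", features = ["derive"] }

# A member crate's Cargo.toml then inherits it:
[dependencies]
serde = { workspace = true }
```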

Testing

  • Every new feature must include tests
  • Use tempfile::TempDir for filesystem isolation
  • Use random port binding for network tests
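The two isolation techniques above can be sketched with std only; the codebase uses tempfile::TempDir (which deletes the directory on drop), so the scratch-dir helper here is a dependency-free stand-in with a hypothetical name:

```rust
use std::net::TcpListener;
use std::path::PathBuf;

/// Random port binding: port 0 asks the OS for a free port, so parallel
/// test runs never collide on a fixed port number.
pub fn bind_random_port() -> (TcpListener, u16) {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind");
    let port = listener.local_addr().expect("local_addr").port();
    (listener, port)
}

/// Filesystem isolation: std-only stand-in for tempfile::TempDir. Unlike
/// TempDir, the caller must remove the directory when done.
pub fn scratch_dir(tag: &str) -> PathBuf {
    let dir = std::env::temp_dir().join(format!("openfang-test-{tag}"));
    std::fs::create_dir_all(&dir).expect("create scratch dir");
    dir
}
```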

Serde

  • All config structs use #[serde(default)] for forward compatibility with partial TOML

Architecture Overview

OpenFang is organized as a Cargo workspace with 14 crates:
  • Shared type definitions, taint tracking, manifest signing (Ed25519), model catalog, MCP/A2A config types
  • SQLite-backed memory substrate with vector embeddings, usage tracking, canonical sessions, JSONL mirroring
  • Agent loop, 3 LLM drivers (Anthropic/Gemini/OpenAI-compat), 38 built-in tools, WASM sandbox, MCP client/server, A2A protocol
  • Hands system (curated autonomous capability packages), 7 bundled hands
  • Integration registry (25 bundled MCP templates), AES-256-GCM credential vault, OAuth2 PKCE
  • Assembles all subsystems: workflow engine, RBAC auth, heartbeat monitor, cron scheduler, config hot-reload
  • REST/WS/SSE API (Axum 0.8), 76 endpoints, 14-page SPA dashboard, OpenAI-compatible /v1/chat/completions
  • 40 channel adapters (Telegram, Discord, Slack, WhatsApp, and 36 more), formatter, rate limiter
  • OFP (OpenFang Protocol): TCP P2P networking with HMAC-SHA256 mutual authentication
  • Clap CLI with daemon auto-detect (HTTP mode vs. in-process fallback), MCP server
  • Migration engine for importing from OpenClaw (and future frameworks)
  • Skill system: 60 bundled skills, FangHub marketplace, OpenClaw compatibility, prompt injection scanning
  • Tauri 2.0 native desktop app (WebView + system tray + single-instance + notifications)
  • Build automation tasks

Key Architectural Patterns

KernelHandle trait: Defined in openfang-runtime, implemented on OpenFangKernel in openfang-kernel. This avoids circular crate dependencies while enabling inter-agent tools.
Shared memory: A fixed UUID (AgentId(Uuid::from_bytes([0, 0, …, 0x01])), i.e. all-zero bytes with a trailing 0x01) provides a cross-agent KV namespace.
Daemon detection: The CLI checks ~/.openfang/daemon.json and pings the health endpoint. If a daemon is running, commands use HTTP; otherwise, they boot an in-process kernel.
Capability-based security: Every agent operation is checked against the agent’s granted capabilities before execution.
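The capability check can be sketched minimally; the `Capabilities` type, its field, and the method name here are hypothetical, shown only to illustrate the gate-before-execute pattern:

```rust
use std::collections::HashSet;

/// Minimal sketch of capability-gated tool execution (names hypothetical):
/// the kernel consults the agent's granted set before running any operation.
pub struct Capabilities {
    tools: HashSet<String>,
}

impl Capabilities {
    pub fn new<I: IntoIterator<Item = &'static str>>(tools: I) -> Self {
        Self { tools: tools.into_iter().map(String::from).collect() }
    }

    /// Deny by default: only tools listed in the manifest may run.
    pub fn check_tool(&self, tool: &str) -> Result<(), String> {
        if self.tools.contains(tool) {
            Ok(())
        } else {
            Err(format!("capability denied: tool '{tool}' not granted"))
        }
    }
}
```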

How to Add a New Agent Template

Agent templates live in the agents/ directory. Each template is a folder containing an agent.toml manifest.

Steps

  1. Create a new directory under agents/:
agents/my-agent/agent.toml
  2. Write the manifest:
name = "my-agent"
version = "0.1.0"
description = "A brief description of what this agent does."
author = "openfang"
module = "builtin:chat"
tags = ["category"]

[model]
provider = "groq"
model = "llama-3.3-70b-versatile"

[resources]
max_llm_tokens_per_hour = 100000

[capabilities]
tools = ["file_read", "file_list", "web_fetch"]
memory_read = ["*"]
memory_write = ["self.*"]
agent_spawn = false
  3. Include a system prompt if needed by adding it to the [model] section:
[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
system_prompt = """
You are a specialized agent that...
"""
  4. Test by spawning:
openfang agent spawn agents/my-agent/agent.toml
  5. Submit a PR with the new template.

How to Add a New Channel Adapter

Channel adapters live in crates/openfang-channels/src/. Each adapter implements the ChannelAdapter trait.

Steps

  1. Create a new file: crates/openfang-channels/src/myplatform.rs
  2. Implement the ChannelAdapter trait:
use crate::types::{ChannelAdapter, ChannelMessage, ChannelType};
use async_trait::async_trait;

pub struct MyPlatformAdapter {
    // token, client, config fields
}

#[async_trait]
impl ChannelAdapter for MyPlatformAdapter {
    fn channel_type(&self) -> ChannelType {
        ChannelType::Custom("myplatform".to_string())
    }

    async fn start(&mut self) -> Result<(), Box<dyn std::error::Error>> {
        // Start polling/listening for messages
        Ok(())
    }

    async fn send(&self, channel_id: &str, content: &str) -> Result<(), Box<dyn std::error::Error>> {
        // Send a message back to the platform
        Ok(())
    }

    async fn stop(&mut self) {
        // Clean shutdown
    }
}
  3. Register the module in crates/openfang-channels/src/lib.rs:
pub mod myplatform;
  4. Wire it up in the channel bridge so the daemon starts it alongside other adapters.
  5. Add configuration support in the openfang-types config structs.
  6. Add CLI setup wizard instructions.
  7. Write tests and submit a PR.

How to Add a New Tool

Built-in tools are defined in crates/openfang-runtime/src/tool_runner.rs.

Steps

  1. Add the tool implementation function:
async fn tool_my_tool(input: &serde_json::Value) -> Result<String, String> {
    let param = input["param"]
        .as_str()
        .ok_or("Missing 'param' field")?;

    // Tool logic here
    Ok(format!("Result: {param}"))
}
  2. Register it in the execute_tool match block:
"my_tool" => tool_my_tool(input).await,
  3. Add the tool definition to builtin_tool_definitions():
ToolDefinition {
    name: "my_tool".to_string(),
    description: "Description shown to the LLM.".to_string(),
    input_schema: serde_json::json!({
        "type": "object",
        "properties": {
            "param": {
                "type": "string",
                "description": "The parameter description"
            }
        },
        "required": ["param"]
    }),
},
  4. Agents that need the tool must list it in their manifest:
[capabilities]
tools = ["my_tool"]
  5. Write tests for the tool function.
  6. If the tool requires kernel access (e.g., inter-agent communication), accept Option<&Arc<dyn KernelHandle>> and handle the None case gracefully.
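The None-handling pattern can be sketched as below. The trait here is a stand-in for the real KernelHandle in openfang-runtime, and its `agent_count` method is hypothetical; the real tool functions are async, which is omitted to keep the sketch dependency-free:

```rust
use std::sync::Arc;

/// Stand-in for the real KernelHandle trait defined in openfang-runtime.
/// The method shown is hypothetical, used only to illustrate the pattern.
pub trait KernelHandle: Send + Sync {
    fn agent_count(&self) -> usize;
}

/// A kernel-dependent tool degrades gracefully when no kernel is attached,
/// e.g. when the tool runner is exercised in isolation by tests.
/// (Real tool functions are async; elided here for brevity.)
pub fn tool_list_agents(kernel: Option<&Arc<dyn KernelHandle>>) -> Result<String, String> {
    match kernel {
        Some(k) => Ok(format!("{} agents running", k.agent_count())),
        None => Err("kernel access unavailable in this context".to_string()),
    }
}
```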

Pull Request Process

1. Fork and Branch

Create a feature branch from main. Use descriptive names:
  • feat/add-matrix-adapter
  • fix/session-restore-crash
  • docs/improve-installation-guide

2. Make Your Changes

Follow the code style guidelines above.

3. Test Thoroughly

All three checks must pass:
cargo test --workspace                                      # All 1,744+ tests
cargo clippy --workspace --all-targets -- -D warnings      # Zero warnings
cargo fmt --all --check                                     # No diff

4. Write a Clear PR Description

Explain what changed and why. Include before/after examples if applicable.

5. One Concern per PR

Keep PRs focused. A single PR should address one feature, one bug fix, or one refactor, not all three.

6. Review Process

At least one maintainer must approve before merge. Address review feedback promptly.

7. CI Must Pass

All automated checks must be green before merge.

Commit Messages

Use clear, imperative-mood messages:
Add Matrix channel adapter with E2EE support
Fix session restore crash on kernel reboot
Refactor capability manager to use DashMap

Code of Conduct

This project follows the Contributor Covenant Code of Conduct. By participating, you agree to uphold a welcoming, inclusive, and harassment-free environment for everyone. Please report unacceptable behavior to the maintainers.

Questions?

  • GitHub Discussions: ask questions and share ideas
  • GitHub Issues: report bugs or request features
  • Discord: chat with the community
  • Documentation: read detailed guides

Thank you for contributing to OpenFang. Every contribution, no matter how small, helps make the project better for everyone.