OpenFang provides official Docker images for containerized deployments. Run the entire Agent OS in an isolated environment with persistent data volumes.

Quick Start

1. Clone the repository

```shell
git clone https://github.com/RightNow-AI/openfang.git
cd openfang
```

2. Set up environment variables

Create a `.env` file with your API keys:

```shell
# LLM Provider Keys (at least one required)
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...

# Optional: Channel Adapters
TELEGRAM_BOT_TOKEN=
DISCORD_BOT_TOKEN=
SLACK_BOT_TOKEN=
SLACK_APP_TOKEN=
```

3. Build and run with Docker Compose

```shell
docker compose up --build
```

The API server will be available at http://localhost:4200.

Dockerfile Architecture

The official Dockerfile uses a multi-stage build for minimal image size:

```dockerfile
# Build stage
FROM rust:1-slim-bookworm AS builder
WORKDIR /build
RUN apt-get update && apt-get install -y pkg-config libssl-dev && rm -rf /var/lib/apt/lists/*
COPY Cargo.toml Cargo.lock ./
COPY crates ./crates
COPY xtask ./xtask
COPY agents ./agents
COPY packages ./packages
RUN cargo build --release --bin openfang

# Runtime stage
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /build/target/release/openfang /usr/local/bin/
COPY --from=builder /build/agents /opt/openfang/agents
EXPOSE 4200
VOLUME /data
ENV OPENFANG_HOME=/data
ENTRYPOINT ["openfang"]
CMD ["start"]
```
Key details:
  • Base image: debian:bookworm-slim (~74 MB)
  • Build dependencies: pkg-config, libssl-dev
  • Runtime dependencies: ca-certificates (for HTTPS API calls)
  • Binary location: /usr/local/bin/openfang
  • Agent templates: /opt/openfang/agents
  • Default data volume: /data

docker-compose.yml

The provided docker-compose.yml configures a production-ready deployment:

```yaml
version: "3.8"
services:
  openfang:
    build: .
    # image: ghcr.io/rightnow-ai/openfang:latest  # Use when GHCR is public
    ports:
      - "4200:4200"
    volumes:
      - openfang-data:/data
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - GROQ_API_KEY=${GROQ_API_KEY:-}
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN:-}
      - DISCORD_BOT_TOKEN=${DISCORD_BOT_TOKEN:-}
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN:-}
      - SLACK_APP_TOKEN=${SLACK_APP_TOKEN:-}
    restart: unless-stopped

volumes:
  openfang-data:
```
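Docker Compose automatically loads the `.env` file from the project directory when substituting the `${VAR:-}` defaults above. To check which values were actually picked up, you can render the fully resolved file (this assumes the Docker Compose v2 CLI):

```shell
# Print the effective compose configuration with .env substitution applied
docker compose config
```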

Configuration Options

| Field | Description | Default |
| --- | --- | --- |
| `ports` | Expose API server | `4200:4200` |
| `volumes` | Persistent data (SQLite, config, logs) | `openfang-data:/data` |
| `environment` | API keys and secrets | Read from `.env` file |
| `restart` | Restart policy | `unless-stopped` |

Port Configuration

Port 4200 is the default HTTP/WebSocket API server port.
  • API endpoint: http://localhost:4200/api/*
  • Web dashboard: http://localhost:4200/
  • WebSocket: ws://localhost:4200/ws
To use a different port:

```yaml
ports:
  - "8080:4200"  # Expose container port 4200 on host port 8080
```

Or override via config:

```yaml
environment:
  - OPENFANG_API_LISTEN=0.0.0.0:8080
```
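Note that the `OPENFANG_API_LISTEN` override changes the port the server binds *inside* the container, so the `ports` mapping must target that new container port. A sketch combining both settings:

```yaml
services:
  openfang:
    ports:
      - "8080:8080"   # host port : container port (now 8080)
    environment:
      - OPENFANG_API_LISTEN=0.0.0.0:8080
```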

Volume Mounting

The /data volume persists:
  • SQLite databases: Agent memory, conversation history
  • Config file: config.toml
  • Logs: Structured tracing output
  • Skill cache: Downloaded MCP servers and WASM modules

Bind Mount Alternative

For direct filesystem access:

```yaml
volumes:
  - ./openfang-data:/data
```

This creates a ./openfang-data/ directory in your current working directory.

Inspect Volume Data

```shell
# List files in the volume
docker compose exec openfang ls -la /data

# Copy config out
docker compose cp openfang:/data/config.toml ./config.toml

# Copy config in
docker compose cp ./config.toml openfang:/data/config.toml
```
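Because everything stateful lives under /data, the whole deployment can also be archived by mounting the volume read-only into a throwaway container. A sketch (the helper image and archive name here are arbitrary choices, not part of OpenFang):

```shell
# Snapshot the openfang-data volume into a tarball in the current directory
docker run --rm \
  -v openfang-data:/data:ro \
  -v "$(pwd)":/backup \
  debian:bookworm-slim \
  tar czf /backup/openfang-data.tar.gz -C /data .
```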

Environment Variables

All OpenFang configuration can be overridden via environment variables:

LLM Provider Keys

| Variable | Provider |
| --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic (Claude) |
| `OPENAI_API_KEY` | OpenAI (GPT-4) |
| `GROQ_API_KEY` | Groq (Llama) |
| `GEMINI_API_KEY` | Google Gemini |
| `DEEPSEEK_API_KEY` | DeepSeek |
| `OPENROUTER_API_KEY` | OpenRouter |

Channel Adapters

| Variable | Channel |
| --- | --- |
| `TELEGRAM_BOT_TOKEN` | Telegram |
| `DISCORD_BOT_TOKEN` | Discord |
| `SLACK_BOT_TOKEN` | Slack |
| `SLACK_APP_TOKEN` | Slack (Socket Mode) |

System Configuration

| Variable | Description | Default |
| --- | --- | --- |
| `OPENFANG_HOME` | Home directory | `/data` |
| `OPENFANG_API_LISTEN` | API bind address | `127.0.0.1:4200` |
| `RUST_LOG` | Log level | `openfang=info` |
See Configuration Reference for the complete list.

Health Checks

Add a health check to ensure the API server is responsive:
```yaml
services:
  openfang:
    # ... existing config ...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4200/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```

The /api/health endpoint returns:

```json
{
  "status": "healthy",
  "uptime_secs": 3600,
  "agents": 5
}
```
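One caveat: the runtime stage of the Dockerfile shown earlier installs only ca-certificates, so curl may not exist inside the container, and a CMD-style health check that invokes it would then always fail. A sketch of one fix, adding curl to the runtime stage's install line:

```dockerfile
# Runtime stage: include curl so the compose healthcheck can run in-container
RUN apt-get update && apt-get install -y ca-certificates curl && rm -rf /var/lib/apt/lists/*
```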

Running from Pre-Built Image

The GHCR image is not yet public. Track issue #12 for updates.
Once published, you can skip the build step:
```yaml
services:
  openfang:
    image: ghcr.io/rightnow-ai/openfang:latest
    # Remove 'build: .' line
```

Or run directly:

```shell
docker run -d \
  -p 4200:4200 \
  -v openfang-data:/data \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  --name openfang \
  ghcr.io/rightnow-ai/openfang:latest
```

Image Tags

| Tag | Description |
| --- | --- |
| `latest` | Latest stable release |
| `0.1.0` | Specific version |
| `main` | Latest commit on main branch (unstable) |
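For production deployments it is usually safer to pin an exact version rather than track latest, so upgrades only happen when you change the tag deliberately. A sketch:

```yaml
services:
  openfang:
    image: ghcr.io/rightnow-ai/openfang:0.1.0  # pin an exact release
```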

Verifying the Build

1. Build the image

```shell
docker build -t openfang:local .
```

2. Check binary version

```shell
docker run --rm openfang:local --version
```

Output:

```
openfang 0.1.0
```

3. Start the server

```shell
docker run --rm -p 4200:4200 \
  -v openfang-data:/data \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  openfang:local start
```

4. Test the API

```shell
curl http://localhost:4200/api/health
```

Multi-Architecture Support

The official images support:
  • linux/amd64 (x86_64)
  • linux/arm64 (Apple Silicon, ARM servers)
Docker automatically selects the correct architecture.
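If you ever need a specific architecture (for example, testing the amd64 image on Apple Silicon under emulation), the automatic selection can be overridden with Docker's standard --platform flag:

```shell
# Force the amd64 variant regardless of the host architecture
docker pull --platform linux/amd64 ghcr.io/rightnow-ai/openfang:latest
docker run --platform linux/amd64 -p 4200:4200 ghcr.io/rightnow-ai/openfang:latest
```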

Logs and Debugging

View live logs

```shell
docker compose logs -f openfang
```

Increase log verbosity

```yaml
environment:
  - RUST_LOG=openfang=debug,openfang_kernel=trace
```

Access logs from volume

```shell
docker compose exec openfang cat /data/logs/openfang.log
```

Resource Limits

Set memory and CPU limits for production:

```yaml
services:
  openfang:
    # ... existing config ...
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M
```
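When running without Compose, roughly equivalent limits can be set directly with docker run flags (a sketch; the values mirror the Compose example above):

```shell
docker run -d \
  --name openfang \
  --cpus 2 \
  --memory 2g \
  --memory-reservation 512m \
  -p 4200:4200 \
  -v openfang-data:/data \
  ghcr.io/rightnow-ai/openfang:latest
```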

Next Steps

  • Production Checklist: Prepare for production deployment
  • Configuration: Complete configuration reference