OpenFang ships with 30 pre-built agent templates across 4 performance tiers. This guide covers spawning agents from templates, creating custom agent manifests, and understanding the agent architecture.

Quick Start

Spawn any template from the CLI:
openfang spawn orchestrator
openfang spawn coder
openfang spawn --template agents/writer/agent.toml
Spawn via the REST API:
# Spawn from a built-in template name
curl -X POST http://localhost:4200/api/agents \
  -H "Content-Type: application/json" \
  -d '{"template": "coder"}'

# Spawn with overrides
curl -X POST http://localhost:4200/api/agents \
  -H "Content-Type: application/json" \
  -d '{"template": "writer", "model": "gemini-2.5-flash"}'
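The same calls work from any HTTP client. A minimal Python sketch using only the endpoint and JSON shape shown above (stdlib only; the network call is left commented out because it needs a running kernel):

```python
import json
import urllib.request

BASE_URL = "http://localhost:4200"  # default API address from the examples above

def spawn_request(template: str, **overrides) -> dict:
    """Build the JSON body for POST /api/agents, e.g. a model override."""
    return {"template": template, **overrides}

def spawn_agent(body: dict) -> dict:
    """POST the body to the kernel and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/agents",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = spawn_request("writer", model="gemini-2.5-flash")
# spawn_agent(body)  # requires a running OpenFang kernel
```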

Agent Template Tiers

Templates are organized into 4 tiers based on task complexity and model capabilities:

Tier 1: Frontier

DeepSeek for deep reasoning: orchestration, architecture, security
Models: deepseek-chat
Agents: orchestrator, architect, security-auditor

Tier 2: Smart

Gemini 2.5 Flash for coding, research, analysis, testing
Models: gemini-2.5-flash
Agents: coder, researcher, data-scientist, test-engineer

Tier 3: Balanced

Groq + Gemini fallback for business and productivity
Models: llama-3.3-70b-versatile, gemini-2.0-flash
Agents: planner, writer, assistant, customer-support

Tier 4: Fast

Groq for lightweight, high-speed tasks
Models: llama-3.3-70b-versatile, llama-3.1-8b-instant
Agents: ops, translator, tutor, health-tracker

Orchestrator

Tier 1 | deepseek/deepseek-chat | Fallback: groq/llama-3.3-70b-versatile
Meta-agent that decomposes complex tasks, delegates to specialist agents, and synthesizes results.
openfang spawn orchestrator
# "Plan and execute a full security audit of the codebase"
Capabilities:
  • Analyzes requests and breaks them into subtasks
  • Discovers specialists using agent_list
  • Delegates via agent_send and spawns agents as needed
  • Synthesizes all responses into coherent answers
  • Explains delegation strategy before executing
Tools: agent_send, agent_spawn, agent_list, agent_kill, memory_store, memory_recall, file_read, file_write

Coder

Tier 2 | gemini/gemini-2.5-flash | Fallback: groq/llama-3.3-70b-versatile
Expert software engineer that reads, writes, and analyzes code.
openfang spawn coder
# "Implement a rate limiter using the token bucket algorithm in Rust"
Approach:
  • Reads files first to understand context
  • Makes precise, minimal changes
  • Always writes tests for produced code
  • Supports Rust, Python, JavaScript, and more
Tools: file_read, file_write, file_list, shell_exec
Shell access: cargo *, rustc *, git *, npm *, python *

Security Auditor

Tier 1 | deepseek/deepseek-chat | Fallback: groq/llama-3.3-70b-versatile
Security specialist that reviews code for vulnerabilities and performs threat modeling.
openfang spawn security-auditor
# "Audit the authentication module for vulnerabilities"
Focus areas:
  • OWASP Top 10
  • Input validation and auth flaws
  • Cryptographic misuse
  • Injection attacks (SQL, XSS, command)
  • Secrets management
  • Race conditions and privilege escalation
Report format: Finding → Impact → Evidence → Remediation

Assistant

Tier 3 | groq/llama-3.3-70b-versatile | Fallback: gemini/gemini-2.0-flash
The versatile default agent for everyday tasks, questions, and conversations.
openfang spawn assistant
# "Help me plan my week and draft replies to these three emails"
Capabilities:
  • Conversational intelligence and task execution
  • Research and synthesis
  • Writing and communication
  • Problem solving
  • Agent delegation (routes to specialists)
  • Knowledge management

Creating Custom Agents

Agent Manifest Format

Create a custom agent by writing an agent.toml manifest:
agent.toml
# Required fields
name = "my-agent"
version = "0.1.0"
description = "What this agent does in one sentence."
author = "your-name"
module = "builtin:chat"

# Optional metadata
tags = ["tag1", "tag2"]

# Model configuration (required)
[model]
provider = "gemini"                  # Provider: gemini, deepseek, groq, openai, anthropic, etc.
model = "gemini-2.5-flash"           # Model identifier
api_key_env = "GEMINI_API_KEY"       # Env var holding the API key
max_tokens = 4096                    # Max output tokens per response
temperature = 0.3                    # Creativity (0.0 = deterministic, 1.0 = creative)
system_prompt = """Your agent's personality, capabilities, and instructions.
Be specific about what the agent should and should not do."""

# Optional fallback model
[[fallback_models]]
provider = "groq"
model = "llama-3.3-70b-versatile"
api_key_env = "GROQ_API_KEY"

# Optional schedule (for autonomous agents)
[schedule]
periodic = { cron = "every 5m" }                                     # Periodic execution
# continuous = { check_interval_secs = 120 }                         # Continuous loop
# proactive = { conditions = ["event:agent_spawned"] }               # Event-triggered

# Resource limits
[resources]
max_llm_tokens_per_hour = 150000    # Token budget per hour
max_concurrent_tools = 5            # Max parallel tool executions

# Capability grants (principle of least privilege)
[capabilities]
tools = ["file_read", "file_write", "file_list", "shell_exec",
         "memory_store", "memory_recall", "web_fetch",
         "agent_send", "agent_list", "agent_spawn", "agent_kill"]
network = ["*"]                     # Network access patterns
memory_read = ["*"]                 # Memory namespaces agent can read
memory_write = ["self.*"]           # Memory namespaces agent can write
agent_spawn = true                  # Can this agent spawn other agents?
agent_message = ["*"]               # Which agents can it message?
shell = ["python *", "cargo *"]     # Allowed shell command patterns (whitelist)

Available Tools

File Operations

  • file_read - Read file contents
  • file_write - Write/create files
  • file_list - List directory contents

System Access

  • shell_exec - Execute shell commands (restricted by whitelist)

Memory & Knowledge

  • memory_store - Persist key-value data
  • memory_recall - Retrieve data from memory

Network

  • web_fetch - Fetch content from URLs (SSRF-protected)

Multi-Agent

  • agent_send - Send a message to another agent
  • agent_list - List all running agents
  • agent_spawn - Spawn a new agent
  • agent_kill - Terminate a running agent

Best Practices

Start Minimal: Grant only the tools and capabilities the agent actually needs. You can always add more later.

System Prompt

The system prompt is the most important part of the template. Be specific about:
  • The agent’s role and methodology
  • Output format expectations
  • What the agent should and should not do
  • Limitations and disclaimers

Temperature Settings

  • 0.2 - Precise/analytical tasks (security audits, debugging)
  • 0.5 - Balanced tasks (general assistant, planning)
  • 0.7+ - Creative tasks (writing, brainstorming)

Shell Security

Never grant shell = ["*"]. Always whitelist specific command patterns.
Good examples:
shell = ["python *", "cargo test *", "git status", "git log *"]

Token Budgets

Use max_llm_tokens_per_hour to prevent runaway costs:
  • Start with 100,000 for most agents
  • Use 200,000+ for intensive tasks (research, code generation)
  • Use 50,000 for lightweight monitoring agents
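In the manifest format above, the budget is a single line in [resources] (the numbers below are the starting points suggested here, not kernel defaults):

```toml
[resources]
max_llm_tokens_per_hour = 100000   # raise to 200000+ for research/codegen agents
```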

Fallback Models

Add fallback models to handle rate limits and availability issues:
# Primary model
[model]
provider = "gemini"
model = "gemini-2.5-flash"

# Fallback if primary fails
[[fallback_models]]
provider = "groq"
model = "llama-3.3-70b-versatile"

Memory for Continuity

Grant memory_store and memory_recall so agents can persist context across sessions:
[capabilities]
tools = ["memory_store", "memory_recall"]
memory_read = ["*"]
memory_write = ["self.*", "shared.*"]

Managing Agents

Spawning Agents

# Spawn by template name
openfang spawn coder

# Spawn with a custom name
openfang spawn coder --name "backend-coder"

# Spawn from a TOML file path
openfang spawn --template agents/custom/my-agent.toml

# List running agents
openfang agents

# Kill an agent
openfang kill <agent-id>

Sending Messages

# CLI
openfang message <agent-id> "Write a function to parse TOML files"

# REST API
curl -X POST http://localhost:4200/api/agents/{id}/message \
  -H "Content-Type: application/json" \
  -d '{"content": "Implement the auth module"}'

# WebSocket (streaming)
ws://localhost:4200/api/agents/{id}/ws
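The REST message endpoint above can be wrapped in a few lines of Python (stdlib only; the response shape is not specified here, so the parsed JSON is returned as-is):

```python
import json
import urllib.request

BASE_URL = "http://localhost:4200"

def message_url(agent_id: str) -> str:
    """URL for POST /api/agents/{id}/message."""
    return f"{BASE_URL}/api/agents/{agent_id}/message"

def send_message(agent_id: str, content: str) -> dict:
    """Send a message to a running agent and return the parsed JSON reply."""
    req = urllib.request.Request(
        message_url(agent_id),
        data=json.dumps({"content": content}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# send_message("a1b2c3", "Implement the auth module")  # needs a running agent
```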

OpenAI-Compatible API

Use any agent through the OpenAI-compatible endpoint:
curl -X POST http://localhost:4200/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openfang:coder",
    "messages": [{"role": "user", "content": "Write a Rust HTTP server"}],
    "stream": true
  }'
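Because the endpoint follows the OpenAI chat-completions shape, a request body needs nothing beyond the fields shown above; the openfang:&lt;template&gt; model prefix comes from the example. A stdlib sketch:

```python
import json

def chat_payload(template: str, prompt: str, stream: bool = True) -> dict:
    """Build an OpenAI-compatible request body for an OpenFang agent."""
    return {
        "model": f"openfang:{template}",
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

body = json.dumps(chat_payload("coder", "Write a Rust HTTP server"))
```

Existing OpenAI client libraries should also work by pointing their base URL at http://localhost:4200/v1 and passing the openfang:&lt;template&gt; name as the model.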

Environment Variables

Set API keys to enable model providers:
Variable           | Provider       | Used By
DEEPSEEK_API_KEY   | DeepSeek       | Tier 1 (orchestrator, architect, security-auditor)
GEMINI_API_KEY     | Google Gemini  | Tier 2 primary, Tier 3 fallback
GROQ_API_KEY       | Groq           | Tier 3 primary, Tier 1/2 fallback, Tier 4
At minimum, set GROQ_API_KEY to enable all Tier 3 and Tier 4 agents. Add GEMINI_API_KEY for Tier 2. Add DEEPSEEK_API_KEY for Tier 1 frontier agents.

Next Steps

Workflows

Chain agents together in multi-step pipelines

Skill Development

Extend agent capabilities with custom tools