The Agents API provides full lifecycle management for OpenFang agents: spawning, messaging, session management, configuration updates, and termination.

List Agents

curl http://127.0.0.1:4200/api/agents
Response:
  agents (array): List of running agents

Get Agent Details

curl http://127.0.0.1:4200/api/agents/{id}
Path Parameters:
  id (string, required): Agent UUID

Response:
  agent (object):
    id (string): Agent UUID
    name (string): Agent name
    state (string): Agent state
    session_id (string): Current session UUID
    model (object):
      provider (string): LLM provider
      model (string): Model ID
    capabilities (object):
      tools (array): Allowed tool names
      network (array): Network access rules
    description (string): Agent description
    tags (array): Agent tags
    identity (object):
      emoji (string): Agent emoji
      avatar_url (string): Avatar URL
      color (string): Hex color code

Spawn Agent

Create a new agent from a TOML manifest.
curl -X POST http://127.0.0.1:4200/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "manifest_toml": "name = \"my-agent\"\nversion = \"0.1.0\"\ndescription = \"Test agent\"\nauthor = \"me\"\nmodule = \"builtin:chat\"\n\n[model]\nprovider = \"groq\"\nmodel = \"llama-3.3-70b-versatile\"\n\n[capabilities]\ntools = [\"file_read\", \"web_fetch\"]\nmemory_read = [\"*\"]\nmemory_write = [\"self.*\"]\n"
  }'
Body Parameters:
  manifest_toml (string, required): Agent manifest in TOML format (max 1 MB)
  signed_manifest (string): Optional Ed25519-signed manifest JSON for verification

Response:
  agent_id (string): UUID of the newly spawned agent
  name (string): Agent name
{
  "agent_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "name": "my-agent"
}
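For programmatic use, the spawn call can be wrapped in a small helper. A minimal sketch in Node-style JavaScript; `buildSpawnBody` and the client-side size check are illustrative helpers, only the endpoint, field names, and the 1 MB limit come from this page:

```javascript
const MAX_MANIFEST_BYTES = 1024 * 1024; // server rejects manifests over 1 MB

// Build the JSON body for POST /api/agents, enforcing the size limit client-side.
function buildSpawnBody(manifestToml) {
  if (Buffer.byteLength(manifestToml, 'utf8') > MAX_MANIFEST_BYTES) {
    throw new Error('manifest_toml exceeds 1 MB limit');
  }
  return JSON.stringify({ manifest_toml: manifestToml });
}

// Usage sketch (requires a running OpenFang server):
// const res = await fetch('http://127.0.0.1:4200/api/agents', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildSpawnBody('name = "my-agent"\nversion = "0.1.0"\n...'),
// });
// const { agent_id, name } = await res.json();
```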

Send Message

Send a message to an agent and receive the complete response.
curl -X POST http://127.0.0.1:4200/api/agents/{id}/message \
  -H "Content-Type: application/json" \
  -d '{"message": "What files are in the current directory?"}'
Path Parameters:
  id (string, required): Agent UUID

Body Parameters:
  message (string, required): Message text (max 64 KB)
  attachments (array): Optional file attachments

Response:
  response (string): Agent’s complete text response
  input_tokens (integer): Total input tokens consumed
  output_tokens (integer): Total output tokens generated
  iterations (integer): Number of LLM iterations (agentic loops)
  cost_usd (number): Estimated cost in USD
{
  "response": "Here are the files in the current directory:\n- Cargo.toml\n- README.md\n- src/\n",
  "input_tokens": 142,
  "output_tokens": 87,
  "iterations": 1,
  "cost_usd": 0.0012
}
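Because the non-streaming endpoint returns usage accounting alongside the text, a thin wrapper can surface cost per call. A hedged sketch; `sendMessage` and `withinMessageLimit` are hypothetical helpers, while the endpoint path, response fields, and the 64 KB limit are from this page:

```javascript
const MAX_MESSAGE_BYTES = 64 * 1024;

// True if the text fits within the documented 64 KB message limit.
function withinMessageLimit(text) {
  return Buffer.byteLength(text, 'utf8') <= MAX_MESSAGE_BYTES;
}

// POST a message and resolve with { response, input_tokens, output_tokens, iterations, cost_usd }.
async function sendMessage(baseUrl, agentId, text) {
  if (!withinMessageLimit(text)) {
    throw new Error('message exceeds 64 KB limit');
  }
  const res = await fetch(`${baseUrl}/api/agents/${agentId}/message`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: text }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```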

Stream Message (SSE)

Send a message and receive a token-by-token streaming response.
curl -X POST http://127.0.0.1:4200/api/agents/{id}/message/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing"}'
Path Parameters:
  id (string, required): Agent UUID

Body Parameters:
  message (string, required): Message text (max 64 KB)

SSE Event Types:
  chunk: Text delta from the LLM
    {"content": "Quantum", "done": false}
  tool_use: Agent is invoking a tool
    {"tool": "web_search"}
  tool_result: Tool invocation completed
    {"tool": "web_search", "input": {"query": "quantum computing basics"}}
  done: Final event with token usage
    {"done": true, "usage": {"input_tokens": 150, "output_tokens": 340}}

Get Session History

Retrieve an agent’s conversation history.
curl http://127.0.0.1:4200/api/agents/{id}/session
Path Parameters:
  id (string, required): Agent UUID

Response:
  session_id (string): Session UUID
  agent_id (string): Agent UUID
  message_count (integer): Total messages in session
  context_window_tokens (integer): Current context window size
  label (string): Optional session label
  messages (array): Conversation messages

Update Agent Configuration

Update an agent’s description, system prompt, or tags at runtime.
curl -X PUT http://127.0.0.1:4200/api/agents/{id}/update \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Updated description",
    "system_prompt": "You are a specialized assistant.",
    "tags": ["updated", "v2"]
  }'
Path Parameters:
  id (string, required): Agent UUID

Body Parameters:
  description (string): New description
  system_prompt (string): New system prompt
  tags (array): New tag list
{
  "status": "updated",
  "agent_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
}

Set Agent Mode

Switch an agent between Normal and Stable modes. Stable mode:
  • Pins the current model (prevents auto-upgrades)
  • Freezes the skill registry (no new skills)
  • Useful for production deployments
curl -X PUT http://127.0.0.1:4200/api/agents/{id}/mode \
  -H "Content-Type: application/json" \
  -d '{"mode": "Stable"}'
Path Parameters:
  id (string, required): Agent UUID

Body Parameters:
  mode (string, required): Normal or Stable

Switch Model

Change an agent’s LLM model at runtime.
curl -X PUT http://127.0.0.1:4200/api/agents/{id}/model \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-sonnet-4-20250514"}'
Path Parameters:
  id (string, required): Agent UUID

Body Parameters:
  model (string, required): Model ID or alias (e.g., sonnet, gpt4, llama)
{
  "status": "updated",
  "model": "claude-sonnet-4-20250514"
}

Reset Session

Clear an agent’s conversation history.
curl -X POST http://127.0.0.1:4200/api/agents/{id}/session/reset
Path Parameters:
  id (string, required): Agent UUID
{
  "status": "reset",
  "agent_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "new_session_id": "s5e6f7g8-h9i0-1234-jklm-nopqrstuv567"
}

Compact Session

Trigger LLM-based session compaction (summarizes old messages to reduce context window usage).
curl -X POST http://127.0.0.1:4200/api/agents/{id}/session/compact
Path Parameters:
  id (string, required): Agent UUID
{
  "status": "compacted",
  "message": "Session compacted: 80 messages summarized, 20 kept"
}

Stop Agent Run

Cancel the agent’s current LLM generation.
curl -X POST http://127.0.0.1:4200/api/agents/{id}/stop
Path Parameters:
  id (string, required): Agent UUID
{
  "status": "stopped",
  "message": "Agent run cancelled"
}

Delete Agent

Terminate an agent and remove it from the registry.
curl -X DELETE http://127.0.0.1:4200/api/agents/{id}
Path Parameters:
  id (string, required): Agent UUID
{
  "status": "killed",
  "agent_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
}

WebSocket Connection

Connect to an agent via WebSocket for real-time bidirectional chat.
const ws = new WebSocket('ws://127.0.0.1:4200/api/agents/{id}/ws')

ws.onopen = () => {
  console.log('Connected to agent')
  ws.send(JSON.stringify({ type: 'message', content: 'Hello!' }))
}

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data)
  if (msg.type === 'text_delta') {
    console.log(msg.content) // Stream tokens as they arrive
  } else if (msg.type === 'response') {
    console.log('Complete:', msg.content)
  }
}
Message types from server:
  • connected — Connection established
  • thinking — Agent started processing
  • text_delta — Streaming token
  • tool_start — Tool invocation started
  • response — Complete response with usage stats
  • error — Error occurred
  • agents_updated — Agent list update (sent every 5s)
Message types from client:
  • {"type": "message", "content": "..."} — Send message
  • {"type": "ping"} — Keepalive ping
  • Plain text (non-JSON) — Treated as message
Chat commands (send as messages with / prefix):
  • /new — Start new session
  • /compact — Compact session
  • /model <name> — Switch model
  • /stop — Cancel current run
  • /usage — Show token usage
  • /think — Toggle extended thinking
  • /models — List models
  • /providers — List providers
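Since chat commands are plain messages beginning with `/`, a client can route all user input through one frame builder. A sketch built on the client message types listed above; `toClientFrame` is a hypothetical helper, and mapping empty input to a keepalive ping is a design choice of this sketch, not server behavior:

```javascript
// Build the JSON frame the server expects for a user input string.
// Commands (/new, /model sonnet, ...) travel as ordinary message frames.
function toClientFrame(input) {
  if (input === null || input === undefined || input === '') {
    return JSON.stringify({ type: 'ping' }); // empty input -> keepalive
  }
  return JSON.stringify({ type: 'message', content: input });
}

// Usage with the WebSocket from the example above:
// ws.send(toClientFrame('/model sonnet'));                 // switch model
// setInterval(() => ws.send(toClientFrame('')), 30_000);   // keepalive
```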

Next Steps

Workflows

Orchestrate multi-agent workflows

Memory

Store and retrieve agent memory

Skills

Extend agent capabilities

Channels

Connect to messaging platforms
