Overview

The AgentOrchestrator class manages multiple AI agents running concurrently. It handles agent spawning, event multiplexing, and relay request management for permission-based tool execution.

Key Features

  • Spawn multiple agents that run concurrently
  • Multiplex events from all agents into a single stream
  • Manage relay requests for permission-based tool execution
  • Support for agent hierarchies with subagent spawning
  • Automatic pause/resume when awaiting relay responses

Constructor

```typescript
const orchestrator = new AgentOrchestrator(harness?);
```

Parameters:
  • harness (GeneratorHarnessModule, optional): Harness module to use. Defaults to the agent harness wrapping OpenRouter.

Methods

spawn()

Spawn a new agent and begin event production.
```typescript
const agentId = orchestrator.spawn({
  model: "openai/gpt-4",
  messages: [{ role: "user", content: "Hello" }],
  tools: [myTool],
  permissions: { allowlist: [] },
});
```
Parameters:
  • params (GeneratorInvokeParams, required): Parameters for agent invocation
  • params.model (string): The LLM model to use (e.g., "openai/gpt-4", "anthropic/claude-3-5-sonnet")
  • params.messages (Message[], required): Array of conversation messages
  • params.tools (ToolDefinition[]): Available tools for the agent to use
  • params.permissions (Permissions): Tool execution permissions with allowlist/allowOnce/deny rules
  • params.context (string): Additional context to inject into the conversation
  • params.env (object): Environment context including parentId, spawn function, and fileTime

Returns:
  • agentId (string): Unique identifier for the spawned agent (UUID v7)

events()

Stream all events from all registered agents.
```typescript
for await (const { agentId, event } of orchestrator.events()) {
  if (event.type === "relay") {
    // Handle permission request
    orchestrator.resolveRelay(event.id, { approved: true });
  } else if (event.type === "text") {
    console.log(`${agentId}: ${event.content}`);
  }
}
```
Yields:
  • agentId (string): The agent that produced this event
  • event (ConsumerHarnessEvent): The event data. Relay events have the respond callback stripped; use resolveRelay() instead.
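Under the hood, events() has to interleave several per-agent async streams into one. The package's internals are not shown on this page, but the core multiplexing idea can be sketched with plain async iterators (the merge helper and its signature are illustrative, not library exports):

```typescript
// Sketch: multiplex several async iterables into one stream, tagging
// each item with the id of the source that produced it. `merge` is
// illustrative only -- not the library's actual implementation.
async function* merge<T>(
  sources: Record<string, AsyncIterable<T>>,
): AsyncGenerator<{ agentId: string; event: T }> {
  type Tagged = { agentId: string; result: IteratorResult<T> };
  const iters = new Map(
    Object.entries(sources).map(
      ([id, src]) => [id, src[Symbol.asyncIterator]()] as const,
    ),
  );
  // Keep one in-flight next() per source; race them, then re-arm the winner.
  const pending = new Map<string, Promise<Tagged>>();
  const arm = (id: string) =>
    pending.set(
      id,
      iters.get(id)!.next().then((result) => ({ agentId: id, result })),
    );
  for (const id of iters.keys()) arm(id);

  while (pending.size > 0) {
    const { agentId, result } = await Promise.race(pending.values());
    if (result.done) {
      pending.delete(agentId); // source exhausted, stop polling it
    } else {
      yield { agentId, event: result.value };
      arm(agentId); // request the next event from that source
    }
  }
}
```

Racing one pending next() per source is what lets a slow agent's stream sit idle without blocking events from the others.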

resolveRelay()

Resolve a pending relay request and resume the agent.
```typescript
// Approve once
orchestrator.resolveRelay(relayId, { approved: true });

// Approve and always allow this tool
orchestrator.resolveRelay(relayId, { approved: true, always: true });

// Deny with reason
orchestrator.resolveRelay(relayId, { approved: false, reason: "Not allowed" });
```
Parameters:
  • relayId (string, required): The ID of the relay event awaiting resolution
  • response (ResolveResponse, required): The resolution response
  • response.approved (boolean, required): Whether to approve or deny the tool execution
  • response.always (boolean): If true, derive a permission and add it to the allowlist for future auto-approval
  • response.reason (string): Reason for denial (only used when approved is false)

Returns:
  • success (boolean): true if the relay was found and resolved, false otherwise

kill()

Terminate an agent and clean up its resources.
```typescript
orchestrator.kill(agentId);
```

Parameters:
  • agentId (string, required): The ID of the agent to terminate

Types

PendingRelay

Internal state for a relay awaiting resolution.
```typescript
interface PendingRelay {
  agentId: string;
  tool: string;
  params: Record<string, unknown>;
  permissions?: Permissions;
  tools?: ToolDefinition[];
  respond: (response: any) => void;
}
```

ResolveResponse

Response type for relay resolution.
```typescript
type ResolveResponse =
  | { approved: true; always?: boolean }
  | { approved: false; reason?: string };
```

ConsumerHarnessEvent

HarnessEvent with relay respond callbacks stripped.
```typescript
type ConsumerHarnessEvent =
  | Exclude<HarnessEvent, { type: "relay" }>
  | {
      type: "relay";
      kind: "permission";
      runId: string;
      id: string;
      parentId?: string;
      toolCallId: string;
      tool: string;
      params: Record<string, unknown>;
    };
```
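The stripping itself amounts to dropping the respond property before a relay event reaches the consumer. A minimal sketch, assuming the relay shape above (toConsumerEvent is a hypothetical name, not a library export):

```typescript
// Sketch: strip the internal respond callback before handing a relay
// event to consumers. Field names follow ConsumerHarnessEvent above;
// `toConsumerEvent` itself is illustrative, not part of the package.
type RelayEvent = {
  type: "relay";
  kind: "permission";
  runId: string;
  id: string;
  parentId?: string;
  toolCallId: string;
  tool: string;
  params: Record<string, unknown>;
  respond: (response: unknown) => void; // internal-only
};

function toConsumerEvent(event: RelayEvent): Omit<RelayEvent, "respond"> {
  const { respond, ...consumerEvent } = event; // drop the callback
  return consumerEvent;
}
```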

Complete Example

```typescript
import { AgentOrchestrator } from "@llm-gateway/ai";
import { myTool } from "./tools";

const orchestrator = new AgentOrchestrator();

// Spawn multiple agents
const agent1 = orchestrator.spawn({
  model: "openai/gpt-4",
  messages: [{ role: "user", content: "Analyze this data" }],
  tools: [myTool],
  permissions: { allowlist: [{ tool: "read", params: { path: "data/*" } }] },
});

const agent2 = orchestrator.spawn({
  model: "anthropic/claude-3-5-sonnet",
  messages: [{ role: "user", content: "Write a summary" }],
});

// Process events from all agents
for await (const { agentId, event } of orchestrator.events()) {
  switch (event.type) {
    case "relay": {
      // Permission requested: approve or deny
      const shouldApprove = await checkPermission(event.tool, event.params);
      orchestrator.resolveRelay(
        event.id,
        shouldApprove
          ? { approved: true, always: true } // auto-approve future matching calls
          : { approved: false, reason: "Denied by policy" },
      );
      break;
    }

    case "text":
      console.log(`[${agentId}] ${event.content}`);
      break;

    case "tool_call":
      console.log(`[${agentId}] Calling ${event.name}`);
      break;

    case "error":
      console.error(`[${agentId}] Error:`, event.error);
      orchestrator.kill(agentId);
      break;
  }
}
```

Relay Flow

When an agent needs permission to execute a tool:
  1. Agent is automatically paused
  2. Relay event is yielded with tool details
  3. The respond callback is stashed internally
  4. Consumer calls resolveRelay() to approve/deny
  5. Agent is resumed and continues execution
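The steps above can be sketched with a map of stashed resolvers: the agent awaits a Promise, and settling that Promise resumes it. Everything here except the resolveRelay name and return value is illustrative, not the library's source:

```typescript
// Sketch: stash one resolver per relay id; resolving it resumes the
// awaiting agent. Mirrors the five-step flow above.
type Resolution = { approved: boolean; always?: boolean; reason?: string };

const pendingRelays = new Map<string, (response: Resolution) => void>();

// Agent side: yields a relay event, then pauses here until resolved.
function awaitRelay(relayId: string): Promise<Resolution> {
  return new Promise((resolve) => pendingRelays.set(relayId, resolve));
}

// Consumer side: returns false for unknown or already-resolved ids.
function resolveRelay(relayId: string, response: Resolution): boolean {
  const respond = pendingRelays.get(relayId);
  if (!respond) return false;
  pendingRelays.delete(relayId); // each relay resolves at most once
  respond(response); // settles the agent's await, resuming execution
  return true;
}
```

Deleting the entry before invoking the resolver is what makes a second resolveRelay call for the same id return false instead of double-resuming the agent.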

Always-Allow Pattern

When resolving with always: true, the orchestrator:
  1. Calls the tool’s derivePermission() function if available
  2. Adds the derived permission to the agent’s allowlist
  3. Future matching tool calls are auto-approved without relay events
```typescript
const readTool: ToolDefinition = {
  name: "read",
  derivePermission: (params) => ({
    tool: "read",
    params: { path: params.path }, // Specific file path
  }),
  // ...
};
```
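How a derived permission is compared against later calls is not specified on this page. One plausible matcher, entirely illustrative (including the `*` glob handling suggested by the `path: "data/*"` allowlist entry in the Complete Example), checks the tool name plus every param the entry constrains:

```typescript
// Sketch of allowlist matching: a call is auto-approved when some entry
// names the same tool and every param the entry specifies matches the
// call's params. The real matching semantics may differ.
type PermissionEntry = { tool: string; params?: Record<string, unknown> };

const escapeRe = (s: string) => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&");

function matchesValue(pattern: unknown, value: unknown): boolean {
  // Treat "*" in string patterns as a glob wildcard; otherwise strict equality.
  if (typeof pattern === "string" && typeof value === "string" && pattern.includes("*")) {
    const re = new RegExp("^" + pattern.split("*").map(escapeRe).join(".*") + "$");
    return re.test(value);
  }
  return pattern === value;
}

function isAllowed(
  allowlist: PermissionEntry[],
  tool: string,
  params: Record<string, unknown>,
): boolean {
  return allowlist.some(
    (entry) =>
      entry.tool === tool &&
      Object.entries(entry.params ?? {}).every(([key, pattern]) =>
        matchesValue(pattern, params[key]),
      ),
  );
}
```

Under this scheme, a derivePermission that returns a specific path (as readTool does above) auto-approves only repeat reads of that exact file, while a glob entry like data/* approves a whole directory.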
