Overview

The OpenRouter harness provides access to multiple LLM providers through a single API. It routes requests to different providers (OpenAI, Anthropic, Google, etc.) based on the provider prefix of the model name (for example, anthropic/claude-sonnet-4).

Import

import { createGeneratorHarness, openRouterHarness } from "@llm-gateway/ai/harness/providers/openrouter";

Function Signature

function createGeneratorHarness(
  apiKeyOrOptions?: string | OpenRouterHarnessOptions
): GeneratorHarnessModule

Parameters

apiKeyOrOptions (string | OpenRouterHarnessOptions, optional)
API key string or configuration object. As an object, it accepts:
  • apiKey (string): OpenRouter API key. Falls back to the OPENROUTER_API_KEY environment variable.
  • model (string): Default model to use when none is specified at invoke time.
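Because the first parameter accepts either a bare API key string or an options object, the two call forms must be normalized to one shape. A minimal sketch of how that normalization could work (the resolveOptions helper is illustrative, not the library's actual internals):

```typescript
interface OpenRouterHarnessOptions {
  apiKey?: string;
  model?: string;
}

// Illustrative sketch of normalizing a string-or-options parameter.
// The real harness may differ in details.
function resolveOptions(
  apiKeyOrOptions?: string | OpenRouterHarnessOptions
): OpenRouterHarnessOptions {
  if (typeof apiKeyOrOptions === "string") {
    // A bare string is treated as the API key.
    return { apiKey: apiKeyOrOptions };
  }
  // Fall back to the OPENROUTER_API_KEY environment variable
  // when no explicit key is provided.
  return {
    apiKey: process.env.OPENROUTER_API_KEY,
    ...apiKeyOrOptions,
  };
}
```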

Returns

GeneratorHarnessModule (object)
A harness module with invoke() and supportedModels() methods.

What It Does

The OpenRouter harness makes single LLM API calls and yields events for:
  • text: Streamed output text content
  • reasoning: Streamed reasoning content (for supported models)
  • tool_call: Function calls from the model
  • error: Any errors that occur
It does NOT:
  • Execute tools (that’s the agent wrapper’s job)
  • Handle permissions (that’s the agent wrapper’s job)
  • Loop after tool calls (that’s the agent wrapper’s job)
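The four event types above can be consumed with a single discriminated-union switch. A sketch, with event shapes inferred from the examples on this page (content for text and reasoning, name/input for tool_call, error for error); the real HarnessEvent union may carry additional fields:

```typescript
// Event shapes inferred from this page's examples.
type HarnessEvent =
  | { type: "text"; content: string }
  | { type: "reasoning"; content: string }
  | { type: "tool_call"; name: string; input: unknown }
  | { type: "error"; error: Error };

// Renders any harness event as a single log line.
function describeEvent(event: HarnessEvent): string {
  switch (event.type) {
    case "text":
      return `[text] ${event.content}`;
    case "reasoning":
      return `[reasoning] ${event.content}`;
    case "tool_call":
      return `[tool_call] ${event.name}(${JSON.stringify(event.input)})`;
    case "error":
      return `[error] ${event.error.message}`;
  }
}
```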

Basic Example

import { createGeneratorHarness } from "@llm-gateway/ai/harness/providers/openrouter";

// Create with an explicit API key and default model
const harness = createGeneratorHarness({
  apiKey: process.env.OPENROUTER_API_KEY,
  model: "anthropic/claude-sonnet-4",
});

// Make a single LLM call
for await (const event of harness.invoke({
  model: "anthropic/claude-sonnet-4",
  messages: [{ role: "user", content: "Hello!" }],
})) {
  if (event.type === "text") {
    console.log(event.content);
  }
}

Using the Singleton

import { openRouterHarness } from "@llm-gateway/ai/harness/providers/openrouter";

// Use pre-configured singleton (reads OPENROUTER_API_KEY from env)
for await (const event of openRouterHarness.invoke({
  model: "openai/gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
})) {
  if (event.type === "text") {
    console.log(event.content);
  }
}

Multi-Provider Access

Access different providers through model names:
// Use Claude via Anthropic
for await (const event of harness.invoke({
  model: "anthropic/claude-sonnet-4",
  messages: [{ role: "user", content: "Task 1" }],
})) {
  console.log(event);
}

// Use GPT-4o via OpenAI
for await (const event of harness.invoke({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Task 2" }],
})) {
  console.log(event);
}

// Use Gemini via Google
for await (const event of harness.invoke({
  model: "google/gemini-2.0-flash-exp",
  messages: [{ role: "user", content: "Task 3" }],
})) {
  console.log(event);
}

With Tools

import { z } from "zod";

const tools = [
  {
    name: "get_weather",
    description: "Get weather for a location",
    schema: z.object({
      location: z.string(),
      units: z.enum(["celsius", "fahrenheit"]).optional(),
    }),
  },
];

for await (const event of harness.invoke({
  model: "anthropic/claude-sonnet-4",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools,
})) {
  if (event.type === "tool_call") {
    console.log(`Tool: ${event.name}`);
    console.log(`Input:`, event.input);
  }
}

Reasoning Models

Reasoning is supported for compatible models:
for await (const event of harness.invoke({
  model: "openai/o1",
  messages: [{ role: "user", content: "Solve this complex problem..." }],
})) {
  if (event.type === "reasoning") {
    console.log("[Reasoning]", event.content);
  }
  if (event.type === "text") {
    console.log("[Answer]", event.content);
  }
}

List Available Models

Get all models across providers:
const models = await harness.supportedModels();
console.log("Available models:", models);
// Returns: ["openai/gpt-4o", "anthropic/claude-sonnet-4", "google/gemini-2.0-flash-exp", ...]

Multimodal Support

for await (const event of harness.invoke({
  model: "anthropic/claude-sonnet-4",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image",
          mediaType: "image/png",
          data: base64ImageData,
        },
      ],
    },
  ],
})) {
  if (event.type === "text") {
    console.log(event.content);
  }
}

Wrapping with Agent Harness

import { createAgentHarness } from "@llm-gateway/ai/harness/agent";
import { createGeneratorHarness } from "@llm-gateway/ai/harness/providers/openrouter";

// Wrap OpenRouter provider with agent capabilities
const agent = createAgentHarness({
  harness: createGeneratorHarness(),
  maxIterations: 10,
});

// Now supports tool execution and looping
for await (const event of agent.invoke({
  model: "anthropic/claude-sonnet-4",
  messages: [{ role: "user", content: "Research and summarize" }],
  tools: [searchTool, readTool],
})) {
  console.log(event);
}

Tool Result Format

OpenRouter uses camelCase for callId in function call outputs:
// Tool results are converted to:
{
  type: "function_call_output",
  callId: "call_123",  // Note: camelCase
  output: "Result..."
}
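A sketch of converting a snake_case tool result into the camelCase shape shown above. The input shape { call_id, output } is an assumption for illustration; only the output shape is documented here:

```typescript
// Hypothetical snake_case tool-result shape, for illustration only.
interface RawToolResult {
  call_id: string;
  output: string;
}

// The shape documented above: note the camelCase callId key.
interface FunctionCallOutput {
  type: "function_call_output";
  callId: string;
  output: string;
}

function toFunctionCallOutput(result: RawToolResult): FunctionCallOutput {
  return {
    type: "function_call_output",
    callId: result.call_id, // snake_case in, camelCase out
    output: result.output,
  };
}
```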

Error Handling

for await (const event of harness.invoke({
  model: "anthropic/claude-sonnet-4",
  messages: [{ role: "user", content: "Hello!" }],
})) {
  if (event.type === "error") {
    console.error("Error:", event.error.message);
  }
}

SDK Integration

Uses OpenRouter’s official SDK:
import { OpenRouter } from "@openrouter/sdk";

// The harness wraps the SDK's streaming API
// and converts events to the standard HarnessEvent format

Provider Routing

OpenRouter automatically routes requests based on model prefix:
  • openai/* → OpenAI
  • anthropic/* → Anthropic
  • google/* → Google
  • meta-llama/* → Meta
  • And many more…
See the OpenRouter models list for the full set.
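The prefix-based routing above can be illustrated with a small parser. The mapping below covers only the prefixes listed here; the full set lives in the OpenRouter model list, and this helper is an illustration, not part of the library:

```typescript
// Maps the provider prefix of a model name (the part before "/")
// to the upstream provider, per the routing table above.
const PROVIDER_BY_PREFIX: Record<string, string> = {
  "openai": "OpenAI",
  "anthropic": "Anthropic",
  "google": "Google",
  "meta-llama": "Meta",
};

function providerFor(model: string): string | undefined {
  const slash = model.indexOf("/");
  if (slash === -1) return undefined; // no prefix: routing is unknown
  return PROVIDER_BY_PREFIX[model.slice(0, slash)];
}
```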

Architecture Notes

  • Uses OpenRouter SDK for streaming
  • Tracks tool calls from function_call_arguments.done events
  • Handles reasoning and text deltas separately
  • Assistant tool_calls are tracked internally by SDK
  • Single iteration only - compose with agent harness for loops
