GET /api/rpc/models.getInvocations

Get Invocations
curl --request GET \
  --url https://api.example.com/api/rpc/models.getInvocations \
  --header 'Content-Type: application/json' \
  --data '
{
  "input": {}
}
'
{
  "conversations": [
    {
      "id": "<string>",
      "modelId": "<string>",
      "modelName": "<string>",
      "modelVariant": "<string>",
      "modelLogo": "<string>",
      "response": {},
      "responsePayload": "<any>",
      "timestamp": "<string>",
      "toolCalls": [
        {
          "id": "<string>",
          "type": "<string>",
          "metadata": {
            "raw": "<any>",
            "decisions": [
              {}
            ],
            "results": [
              {}
            ]
          },
          "timestamp": "<string>"
        }
      ]
    }
  ]
}

Overview

The getInvocations endpoint returns a comprehensive snapshot of all AI model invocations (conversations), including:
  • Model responses and reasoning
  • Trading tool calls (CREATE_POSITION, CLOSE_POSITION, HOLDING)
  • Parsed trading decisions and execution results
  • Timestamps and metadata for each action
This endpoint powers the Model Chat view, providing visibility into AI decision-making and trade execution.

Invocation Tracking

Each time a trading model is invoked by the scheduler, the system:
  1. Creates an Invocation record with the model’s response
  2. Logs ToolCalls for each trading action (open, close, hold positions)
  3. Parses metadata to extract structured decisions and results
  4. Filters auto-triggered actions (stop-loss/take-profit) from the conversation view
Invocations are persisted in the database and linked to their parent model via modelId.
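
The four steps above can be sketched as a pure function. This is an illustrative sketch, not the scheduler's actual implementation: the record shapes mirror the database schema documented below, and `recordInvocation` plus its ID helper are hypothetical names; persistence is elided.

```typescript
type ToolCallType = "CREATE_POSITION" | "CLOSE_POSITION" | "HOLDING";

interface ToolCallRecord {
  id: string;
  invocationId: string; // links back to the parent invocation
  toolCallType: ToolCallType;
  metadata: string; // JSON string of execution metadata
}

interface InvocationRecord {
  id: string;
  modelId: string; // links the invocation to its parent model
  response: string;
  toolCalls: ToolCallRecord[];
}

let nextId = 0;
const newId = (prefix: string) => `${prefix}-${++nextId}`;

// Steps 1-3: create the Invocation record, log a ToolCall per trading
// action, and serialize each action's metadata (step 4, auto-trigger
// filtering, happens later at read time)
function recordInvocation(
  modelId: string,
  response: string,
  actions: { type: ToolCallType; metadata: Record<string, unknown> }[]
): InvocationRecord {
  const id = newId("inv");
  return {
    id,
    modelId,
    response,
    toolCalls: actions.map((action) => ({
      id: newId("tool"),
      invocationId: id,
      toolCallType: action.type,
      metadata: JSON.stringify(action.metadata),
    })),
  };
}
```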

Request

input
object
Empty object; no parameters are required
// oRPC schema
z.object({})

Response

conversations
array
required
Array of conversation snapshots ordered by timestamp (newest first)
// Response schema
z.object({
  conversations: z.array(
    z.object({
      id: z.string(),
      modelId: z.string(),
      modelName: z.string(),
      modelVariant: variantIdSchema.optional(),
      modelLogo: z.string(),
      response: z.string().nullable(),
      responsePayload: z.any().optional(),
      timestamp: z.string(),
      toolCalls: z.array(
        z.object({
          id: z.string(),
          type: z.string(),
          metadata: z.object({
            raw: z.any(),
            decisions: z.array(z.any()),
            results: z.array(z.any()),
          }),
          timestamp: z.string(),
        })
      ),
    })
  ),
})

Database Schema

Invocations Table

{
  id: string;              // Primary key
  modelId: string;         // Foreign key to Models table
  response: string;        // AI model's text response
  responsePayload: jsonb;  // Raw response object
  createdAt: timestamp;    // Invocation timestamp
  updatedAt: timestamp;    // Last update timestamp
}

ToolCalls Table

{
  id: string;                    // Primary key
  invocationId: string;          // Foreign key to Invocations table
  toolCallType: ToolCallType;    // "CREATE_POSITION" | "CLOSE_POSITION" | "HOLDING"
  metadata: text;                // JSON string of execution metadata
  createdAt: timestamp;          // Tool call timestamp
  updatedAt: timestamp;          // Last update timestamp
}
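
In a Drizzle ORM setup, the two tables above could be declared roughly as follows. This is an assumption for illustration: the source references src/db/schema.ts but does not show its contents, so table names, column helpers, and defaults here are guesses, not the project's actual schema.

```typescript
// Hypothetical Drizzle schema mirroring the tables documented above
import { pgTable, pgEnum, text, jsonb, timestamp } from "drizzle-orm/pg-core";

export const toolCallTypeEnum = pgEnum("tool_call_type", [
  "CREATE_POSITION",
  "CLOSE_POSITION",
  "HOLDING",
]);

export const invocations = pgTable("invocations", {
  id: text("id").primaryKey(),
  modelId: text("model_id").notNull(),        // FK to Models table
  response: text("response"),                 // AI model's text response
  responsePayload: jsonb("response_payload"), // raw response object
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});

export const toolCalls = pgTable("tool_calls", {
  id: text("id").primaryKey(),
  invocationId: text("invocation_id").notNull(), // FK to Invocations table
  toolCallType: toolCallTypeEnum("tool_call_type").notNull(),
  metadata: text("metadata"),                    // JSON string of execution metadata
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});
```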

Auto-Triggered Filtering

The endpoint automatically excludes invocations that only contain auto-triggered stop-loss or take-profit closures:
function isAutoTriggeredClose(metadata: Record<string, unknown>): boolean {
  return (
    typeof metadata.autoTrigger === "string" &&
    (metadata.autoTrigger === "STOP" || metadata.autoTrigger === "TARGET")
  );
}

// An invocation counts as auto-triggered when every one of its tool calls
// is an auto-triggered close
function isAutoTriggeredInvocation(
  toolCalls: { metadata: Record<string, unknown> }[]
): boolean {
  return (
    toolCalls.length > 0 &&
    toolCalls.every((call) => isAutoTriggeredClose(call.metadata))
  );
}

// Filter out invocations with only auto-triggered actions
const filtered = invocations.filter(
  (inv) => !isAutoTriggeredInvocation(inv.toolCalls)
);
This ensures the conversation view only shows deliberate AI decisions, not automatic risk management actions.

Tool Call Types

CREATE_POSITION
enum
Model decided to open a new position (LONG or SHORT). Typical decisions:
  • Symbol, side (LONG/SHORT), quantity, leverage
  • Entry price, stop-loss, take-profit targets
  • Confidence score, invalidation conditions
CLOSE_POSITION
enum
Model decided to close an existing position. Typical decisions:
  • Symbol, side, quantity to close
  • Reason for closure (profit target hit, invalidation, risk management)
  • Realized P&L after execution
HOLDING
enum
Model evaluated positions but decided to hold (no action). Typical decisions:
  • Current portfolio state confirmation
  • Reasoning for maintaining existing positions
  • Market analysis and next evaluation time
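
The three tool-call types and their "typical decisions" lend themselves to a discriminated union. The shapes below are illustrative, inferred from the bullet lists above and the example response; the field names are assumptions, not the exact wire format.

```typescript
// Hypothetical decision payloads for each tool-call type
type Side = "LONG" | "SHORT";

interface CreatePositionDecision {
  action: "open";
  symbol: string;
  side: Side;
  quantity: number;
  leverage: number;
  stopLoss?: number;
  takeProfit?: number;
  confidence?: number;
}

interface ClosePositionDecision {
  action: "close";
  symbol: string;
  side: Side;
  quantity: number;
  reason?: string; // e.g. profit target hit, invalidation, risk management
}

interface HoldingDecision {
  action: "hold";
  reasoning?: string;
}

type TradingDecision =
  | CreatePositionDecision
  | ClosePositionDecision
  | HoldingDecision;

// Narrowing on the `action` discriminant gives type-safe access per branch
function describeDecision(d: TradingDecision): string {
  switch (d.action) {
    case "open":
      return `Open ${d.side} ${d.symbol} x${d.quantity} @ ${d.leverage}x`;
    case "close":
      return `Close ${d.side} ${d.symbol} x${d.quantity}`;
    case "hold":
      return "Hold existing positions";
  }
}
```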

TanStack Query Integration

Client Usage

import { useQuery } from "@tanstack/react-query";
import { orpc } from "@/server/orpc/client";

function ModelChatView() {
  const { data, error, isLoading } = useQuery(
    orpc.models.getInvocations.queryOptions({ input: {} })
  );

  if (isLoading) return <div>Loading conversations...</div>;
  if (error) return <div>Error: {error.message}</div>;

  return (
    <div>
      {data?.conversations.map((conversation) => (
        <div key={conversation.id}>
          <div className="model-header">
            <img src={conversation.modelLogo} alt={conversation.modelName} />
            <h3>{conversation.modelName}</h3>
            <span className="variant">{conversation.modelVariant}</span>
            <time>{new Date(conversation.timestamp).toLocaleString()}</time>
          </div>
          
          <p className="response">{conversation.response}</p>
          
          {conversation.toolCalls.map((toolCall) => (
            <div key={toolCall.id} className="tool-call">
              <strong>{toolCall.type}</strong>
              <pre>{JSON.stringify(toolCall.metadata.decisions, null, 2)}</pre>
              <pre>{JSON.stringify(toolCall.metadata.results, null, 2)}</pre>
            </div>
          ))}
        </div>
      ))}
    </div>
  );
}

Query Configuration

export const invocationsQuery = () =>
  queryOptions({
    queryKey: ["invocations"],
    queryFn: refreshConversationEvents,
    staleTime: 20_000,  // 20 seconds
    gcTime: 3 * 60_000, // 3 minutes
  });

Example Response

{
  "conversations": [
    {
      "id": "inv-123e4567-e89b-12d3-a456-426614174000",
      "modelId": "model-abc123",
      "modelName": "GPT-4 Turbo",
      "modelVariant": "Apex",
      "modelLogo": "openai/gpt-4-turbo",
      "response": "Market shows strong VWAP momentum with squeeze breakout. Opening LONG BTC with tight stop-loss.",
      "responsePayload": { "..." },
      "timestamp": "2026-03-07T14:32:15.000Z",
      "toolCalls": [
        {
          "id": "tool-456f7890-f12c-34e5-b678-537725285001",
          "type": "CREATE_POSITION",
          "metadata": {
            "raw": { "..." },
            "decisions": [
              {
                "action": "open",
                "symbol": "BTC",
                "side": "LONG",
                "quantity": 0.5,
                "leverage": 10,
                "stopLoss": 65000,
                "takeProfit": 72000,
                "confidence": 0.85
              }
            ],
            "results": [
              {
                "success": true,
                "orderId": "order-789g0123-g45h-67i8-c901-648836396002",
                "fillPrice": 68450.25,
                "filledQuantity": 0.5
              }
            ]
          },
          "timestamp": "2026-03-07T14:32:16.234Z"
        }
      ]
    }
  ]
}

Variant-Aware Fetching

The endpoint fetches invocations across all variants with fair representation:
const variants = VARIANT_IDS; // ["Apex", "Trendsurfer", "Contrarian", "Sovereign"]
const limitPerVariant = 100;

// Fetch 100 invocations per variant
const variantQueries = variants.map((variant) =>
  db.query.invocations.findMany({
    where: inArray(
      invocations.modelId,
      db.select({ id: models.id })
        .from(models)
        .where(eq(models.variant, variant))
    ),
    limit: limitPerVariant,
    orderBy: desc(invocations.createdAt),
  })
);

const results = (await Promise.all(variantQueries))
  .flat()
  .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
This ensures that no single variant dominates the conversation feed.

Performance Considerations

  • Tool Calls Limit: Each invocation loads up to 50 tool calls (most recent first)
  • Invocation Limit: Returns 100 invocations per variant (400 total max)
  • Auto-Filtering: Removes system-triggered invocations to reduce noise
  • Cache Duration: 20-second stale time with 3-minute garbage collection
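
The per-invocation tool-call cap could be applied as in the sketch below. This is a minimal illustration of the "most recent first, up to 50" rule, assuming ISO-8601 timestamps (which sort correctly as strings); the function and constant names are hypothetical.

```typescript
const TOOL_CALLS_LIMIT = 50;

// Keep only the most recent tool calls, newest first
function capToolCalls<T extends { timestamp: string }>(
  toolCalls: T[],
  limit: number = TOOL_CALLS_LIMIT
): T[] {
  return [...toolCalls]
    .sort((a, b) => b.timestamp.localeCompare(a.timestamp))
    .slice(0, limit);
}
```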

Error Handling

The endpoint throws an error if the database query fails:
try {
  const conversations = await refreshConversationEvents();
  return { conversations };
} catch (error) {
  console.error("Error fetching invocations", error);
  Sentry.captureException(error);
  throw new Error("Failed to fetch invocations");
}
Note: Unlike getModels, this endpoint does not provide fallback data since invocation history cannot be derived from static configuration.

Implementation Details

Source Files:
  • Router: src/server/orpc/router/models.ts:43-59
  • Query Function: src/server/features/trading/conversationsSnapshot.server.ts:61-144
  • Schema: src/db/schema.ts:50-88 (Invocations + ToolCalls tables)
Cache Strategy:
  • Stale Time: 20 seconds (updated by scheduler)
  • GC Time: 3 minutes
  • Refetch: On window focus (default TanStack behavior)
