The slideshowAssistant procedure sends a user message to the Anthropic Claude API and returns a structured patch result that describes the changes the assistant wants to make to the current slideshow or slide. The assistant does not mutate data directly — it returns a patchResult that your application applies to the local state.
This procedure requires a valid Anthropic API key configured on the server via the ANTHROPIC_API_KEY environment variable. If the key is missing, the procedure returns an API_KEY_MISSING error.

Request

POST /rpc/slideshowAssistant

Input

slideshows
Slideshow[]
required
All current slideshows. The assistant uses this to understand the full context of the presentation. See the Slideshow type reference.
currentSlideshowIndex
integer
required
Zero-based index of the active slideshow within the slideshows array. The assistant focuses its changes on this slideshow.
currentSlide
integer
required
Zero-based index of the active slide within the current slideshow’s slides array. Used to target slide-level edits.
userInput
string
required
The natural language request from the user. For example: "Add a KPI block showing revenue of $2.4M" or "Rewrite the explainer to be more concise".
previousMessages
AssistantConversationMessage[]
optional
Conversation history for multi-turn interactions. Each message has a role ("user", "assistant", or "system") and a content string. Pass the messages from previous turns to maintain context across multiple requests. The current userInput is automatically appended — do not include it here.
useFullContext
boolean
required
Controls how much of the slideshow is sent to the AI model:
  • true — sends the full slideshow (all slides and concepts) as context. Use this for structural changes, adding new slides, or requests that reference multiple slides.
  • false — sends only the current slide. Use this for targeted edits to a single slide. Results in lower token usage and faster responses.
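Taken together, the input fields above can be sketched as a TypeScript shape. This is a simplified stand-in: the real Slideshow and AssistantConversationMessage types live in the application's own packages (e.g. @slides/core), so `unknown[]` is used here as a placeholder for Slideshow[].

```typescript
type AssistantConversationMessage = {
  role: "user" | "assistant" | "system";
  content: string;
};

type SlideshowAssistantInput = {
  slideshows: unknown[];         // Slideshow[] in the real API
  currentSlideshowIndex: number; // zero-based index into slideshows
  currentSlide: number;          // zero-based index into the current slideshow's slides
  userInput: string;             // the user's natural language request
  previousMessages?: AssistantConversationMessage[]; // optional multi-turn history
  useFullContext: boolean;       // full slideshow vs. current slide only
};

// Example value targeting the third slide of the first slideshow.
const input: SlideshowAssistantInput = {
  slideshows: [],
  currentSlideshowIndex: 0,
  currentSlide: 2,
  userInput: "Add a KPI showing monthly active users of 84,000",
  useFullContext: false,
};
```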

Response

assistantText
string
required
The raw text response from the AI model, including any reasoning or explanation the assistant provided alongside the patch.
wasTruncated
boolean
required
true if the model’s response was cut off because it hit the max_tokens limit. If truncated, the patchResult may be incomplete or invalid.
digest
object
required
A snapshot of the slideshow state at the time the request was processed. Used to detect drift when applying the returned patch.
patchResult
object
required
The outcome of the assistant’s attempt to generate a patch. This is a discriminated union with three possible statuses.
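The union can be sketched in TypeScript as follows, assuming the three statuses named under Best practices ("ok", "invalid", "noop"). Field names other than `status` and `transaction` are illustrative, not confirmed by the API:

```typescript
type PatchResult =
  | { status: "ok"; transaction: unknown }   // a patch ready to apply locally
  | { status: "invalid"; errors: string[] }  // the model's output was unusable
  | { status: "noop"; reason?: string };     // the assistant made no changes

// Narrowing on `status` gives type-safe access to each variant's fields.
function isApplicable(
  result: PatchResult
): result is Extract<PatchResult, { status: "ok" }> {
  return result.status === "ok";
}
```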

Errors

Code — Description
  • API_KEY_MISSING — No Anthropic API key is configured on the server. Set ANTHROPIC_API_KEY in the server environment.
  • ANTHROPIC_API_ERROR — The Anthropic API returned an error. The error data includes status (HTTP status code) and details.
  • ANTHROPIC_RATE_LIMITED — The Anthropic API rate limit was exceeded. The error data includes retryAfter (seconds until retry is allowed, if provided by Anthropic).
  • INVALID_RESPONSE — The AI service returned a response with no text content. The error data includes reason.
  • PATCH_GENERATION_FAILED — The assistant’s response could not be turned into a valid patch result. The error data includes reason.
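A hedged sketch of mapping these codes to user-facing messages. The exact error object thrown by the oRPC client may differ; this assumes it exposes `code` and an optional `data` field matching the table above:

```typescript
type AssistantError = {
  code: string;
  data?: { status?: number; retryAfter?: number; reason?: string };
};

function errorMessage(err: AssistantError): string {
  switch (err.code) {
    case "API_KEY_MISSING":
      return "Server misconfigured: set ANTHROPIC_API_KEY.";
    case "ANTHROPIC_RATE_LIMITED":
      // retryAfter is only present when Anthropic provides it.
      return err.data?.retryAfter !== undefined
        ? `Rate limited; retry in ${err.data.retryAfter}s.`
        : "Rate limited; retry later.";
    case "ANTHROPIC_API_ERROR":
      return `Upstream API error (HTTP ${err.data?.status ?? "unknown"}).`;
    case "INVALID_RESPONSE":
    case "PATCH_GENERATION_FAILED":
      return `Assistant output unusable: ${err.data?.reason ?? "unknown reason"}.`;
    default:
      return "Unexpected error.";
  }
}
```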

Examples

import { createORPCClient } from "@orpc/client";
import { RPCLink } from "@orpc/client/fetch";
import type { AppRouter } from "@slides/api/routers/index";
import type { AssistantConversationMessage } from "@slides/core";

const client = createORPCClient<AppRouter>(
  new RPCLink({ url: "http://localhost:3000/rpc" })
);

// First turn — single request with full context
const response = await client.slideshowAssistant({
  slideshows: currentSlideshows,
  currentSlideshowIndex: 0,
  currentSlide: 2,
  userInput: "Add a KPI showing monthly active users of 84,000",
  useFullContext: false, // editing only this slide
});

if (response.patchResult.status === "ok") {
  // Apply the patch to local state
  applyPatch(currentSlideshows, response.patchResult.transaction);
}

// Multi-turn — pass conversation history for follow-up requests
const history: AssistantConversationMessage[] = [
  { role: "user", content: "Add a KPI showing monthly active users of 84,000" },
  { role: "assistant", content: response.assistantText },
];

const followUp = await client.slideshowAssistant({
  slideshows: currentSlideshows,
  currentSlideshowIndex: 0,
  currentSlide: 2,
  userInput: "Change the label to \"MAU\" and add a +12% month-over-month change",
  previousMessages: history,
  useFullContext: false,
});

Best practices

Choosing useFullContext

Use useFullContext: false (current slide only) when:
  • Editing content on a single slide (adding blocks, rewording text)
  • Making targeted style or data changes
  • You want faster responses and lower token consumption
Use useFullContext: true (full slideshow) when:
  • Adding a new slide or reordering slides
  • Asking the assistant to maintain consistency across slides (e.g., matching a concept)
  • The request references other slides (e.g., “make this slide consistent with slide 3”)
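The decision above can be approximated in code. This is a hypothetical helper (not part of the API): a rough keyword heuristic for picking useFullContext from the request text, intended only as a starting point.

```typescript
// Hypothetical helper: structural or cross-slide requests need the whole
// slideshow as context; simple single-slide edits do not.
function shouldUseFullContext(userInput: string): boolean {
  return /\b(slide \d+|slides|reorder|new slide|all slides|consistent)\b/i.test(
    userInput
  );
}
```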
Managing conversation history

Pass previousMessages to give the assistant context from prior turns. Build the array by alternating user and assistant messages, where each assistant entry is the assistantText from the prior response. This keeps the conversation coherent for multi-step editing sessions.

Handling patchResult statuses

Always check patchResult.status before attempting to apply changes:
  • "ok" — apply the transaction to your state
  • "invalid" — show the user an error; log errors for debugging
  • "noop" — inform the user the assistant made no changes (and optionally why)
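Putting the checks together, a sketch of handling an assistant response. This assumes the response shape documented above (the patchResult field names beyond `status` and `transaction` are illustrative) and leaves applying the patch to your own application code:

```typescript
type AssistantResponse = {
  wasTruncated: boolean;
  patchResult:
    | { status: "ok"; transaction: unknown }
    | { status: "invalid"; errors: string[] }
    | { status: "noop"; reason?: string };
};

function handleResponse(response: AssistantResponse): string {
  if (response.wasTruncated) {
    // A truncated response may carry an incomplete patch; do not apply it.
    return "truncated";
  }
  switch (response.patchResult.status) {
    case "ok":
      // applyPatch(state, response.patchResult.transaction) in real code.
      return "applied";
    case "invalid":
      // Surface response.patchResult.errors to the user / logs.
      return "invalid";
    case "noop":
      return "noop";
  }
}
```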
