The AI Response feature registers a /ai slash command that generates a GPT-powered reply directly inside a Discord thread. The bot reads the thread’s message history, optionally accepts a custom prompt, then streams a response using the gpt-5.2-codex model. Only members with the Manage Messages permission can invoke the command.
The /ai command only works inside threads (announcement, public, or private). Using it in a regular channel returns an ephemeral error message.
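The thread-only guard can be sketched as a check against the Discord API channel type values (10 = announcement thread, 11 = public thread, 12 = private thread). This is an illustrative helper, not the bot's actual code:

```typescript
// Discord API channel types for threads (per the official API docs):
// 10 = announcement thread, 11 = public thread, 12 = private thread
const THREAD_TYPES: ReadonlySet<number> = new Set([10, 11, 12])

// Hypothetical guard; the real bot's check may be structured differently.
const isThreadChannel = (channelType: number): boolean =>
  THREAD_TYPES.has(channelType)
```

If the check fails, the bot responds with an ephemeral error instead of running the command.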

How it works

1. Run /ai in a thread
   Use the /ai command, optionally passing a prompt string and a reasoning level (low, medium, or high). The default reasoning level is medium.

2. Bot starts thinking
   The bot replies with an ephemeral message — “The clanker is thinking…” — and a Cancel button. The message updates with a tool-call counter as the model works.

3. Review the response
   When the model finishes, the ephemeral message is replaced with the generated text and an Accept button.

4. Post or discard
   Click Accept to post the response as a regular message in the thread, or Cancel at any time to stop generation and dismiss the ephemeral message.
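The accept/cancel lifecycle above can be modeled as a tiny state machine. Every name here is hypothetical; it only mirrors the steps described, not the bot's implementation:

```typescript
// Hypothetical model of the /ai interaction lifecycle described above.
type AiState = "thinking" | "done" | "posted" | "cancelled"
type AiEvent = "finish" | "accept" | "cancel"

const transition = (state: AiState, event: AiEvent): AiState => {
  switch (event) {
    case "cancel":
      // Cancel is available at any point before the response is posted
      return state === "posted" ? state : "cancelled"
    case "finish":
      // Generation completing moves "thinking" to "done" (response shown)
      return state === "thinking" ? "done" : state
    case "accept":
      // Accept only applies once the generated text is displayed
      return state === "done" ? "posted" : state
  }
}
```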

AI tools

The model has access to three tools that let it read and search the effect-smol repository at generation time.

read
Reads a file (or a line range) from the Effect repository.

rg
Runs ripgrep inside the repository to search for patterns.

glob
Finds files matching a glob pattern inside the repository.
AiResponse.ts
const Tools = Toolkit.make(
  Tool.make("read", {
    description: "Read a file from the effect repository",
    parameters: Schema.Struct({
      path: Schema.String.annotate({
        description:
          "The path to the file to read, relative to the root of the repository",
      }),
      startLine: Schema.optionalKey(Schema.Number).annotate({
        description: "The line number to start reading from (inclusive)",
      }),
      endLine: Schema.optionalKey(Schema.Number).annotate({
        description: "The line number to stop reading at (exclusive)",
      }),
    }),
    success: Schema.String,
  }),
  Tool.make("rg", {
    description: "Wrapper around the 'rg' command.",
    parameters: Schema.Struct({
      pattern: Schema.String.annotate({
        description: "The pattern to pass to rg.",
      }),
      glob: Schema.optionalKey(Schema.String).annotate({
        description: "The --glob option to rg",
      }),
      maxLines: Schema.Finite.annotate({
        description:
          "The total maximum number of lines to return across all files",
      }),
    }),
    success: Schema.String,
  }),
  Tool.make("glob", {
    description: "Find files in the effect repository matching a glob pattern",
    parameters: Schema.Struct({
      pattern: Schema.String.annotate({
        description: "The glob pattern to match files against (e.g. '**/*.ts')",
      }),
    }),
    success: Schema.String,
  }),
)
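To illustrate the startLine/endLine semantics in the read tool's schema (1-based, inclusive start, exclusive end), a minimal handler might slice lines like this. The real handler reads from the cloned repository; this in-memory helper is purely hypothetical:

```typescript
// Hypothetical sketch of the "read" tool's line-range semantics:
// startLine is 1-based and inclusive, endLine is exclusive.
const readLines = (
  contents: string,
  startLine?: number,
  endLine?: number
): string => {
  const lines = contents.split("\n")
  const start = (startLine ?? 1) - 1 // convert 1-based to 0-based index
  const end = endLine !== undefined ? endLine - 1 : lines.length
  return lines.slice(start, end).join("\n")
}
```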

Effect repository context

The bot clones effect-smol on startup and pulls the latest changes every 15 minutes so responses always reflect current source code.
EffectRepo.ts
// Pull the repo every 15 minutes to keep it up to date
yield* Effect.gen(function* () {
  const repoPath = yield* repo
  while (true) {
    yield* Effect.sleep("15 minutes")
    yield* git.pull(repoPath)
    yield* RcRef.invalidate(llmsMd)
  }
}).pipe(Effect.retry(Schedule.forever), Effect.forkScoped)
The system prompt also includes the full contents of LLMS.md from the repository root so the model understands the project’s API conventions before diving into individual files.
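A rough sketch of how such a system prompt might be assembled. The actual prompt wording is not shown on this page, so the text and function below are illustrative only:

```typescript
// Hypothetical system-prompt assembly: the page only states that the
// full contents of LLMS.md are embedded; the surrounding wording is invented.
const makeSystemPrompt = (llmsMd: string): string =>
  [
    "You are answering questions about the effect-smol repository.",
    "Use the read, rg, and glob tools to consult the source code.",
    "Project API conventions (LLMS.md):",
    llmsMd,
  ].join("\n\n")
```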

Models

Purpose                          Model
Response generation              gpt-5.2-codex
Title generation (AutoThreads)   gpt-5.2

Configuration

Environment variable   Required   Description
OPENAI_API_KEY         Yes        OpenAI API key used for all AI features.
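A minimal fail-fast check for the required variable might look like the following. The helper is hypothetical; the bot's actual configuration loading is not shown on this page:

```typescript
// Hypothetical sketch: fail fast when OPENAI_API_KEY is missing,
// since every AI feature depends on it.
const requireApiKey = (env: Record<string, string | undefined>): string => {
  const key = env["OPENAI_API_KEY"]
  if (key === undefined || key === "") {
    throw new Error("OPENAI_API_KEY is required for all AI features")
  }
  return key
}
```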

Permissions

The /ai command requires the Manage Messages (ManageMessages) permission. Members without this permission cannot invoke it.
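Checks like this are typically a bitwise test against Discord's permission flags, where Manage Messages is bit 13 (0x2000). A simplified sketch follows; real Discord permission bitfields are 64-bit values serialized as strings, so a plain number is used here only for illustration:

```typescript
// Manage Messages permission bit, per the Discord API permission flags.
const MANAGE_MESSAGES = 1 << 13 // 0x2000

// Hypothetical check against a member's permission bitfield.
// (Real bitfields are 64-bit; a 32-bit number suffices for this sketch.)
const canUseAi = (permissions: number): boolean =>
  (permissions & MANAGE_MESSAGES) !== 0
```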
