
Overview

This example demonstrates how to build a content moderation system with branching logic. Based on AI classification, content follows different paths: safe content gets published, suspicious content goes to review, and dangerous content triggers blocking.

Content Moderation Pipeline

This pipeline classifies content and routes it through different workflows based on risk level.
const moderator = stepkit<{ content: string; userId: string }>()
  .step('classify-content', async ({ content }) => {
    const { text } = await generateText({
      model: openai('gpt-4.1'),
      prompt: `Classify content as safe, suspicious, or dangerous.\n\n${content}`,
    })
    return { riskLevel: text.trim().toLowerCase() as 'safe' | 'suspicious' | 'dangerous' }
  })
  .branchOn(
    'policy-route',
    {
      name: 'safe',
      when: ({ riskLevel }) => riskLevel === 'safe',
      then: (b) =>
        b.step('publish', async () => ({ action: 'published' as const })),
    },
    {
      name: 'suspicious',
      when: ({ riskLevel }) => riskLevel === 'suspicious',
      then: (b) =>
        b
          .step('queue-review', async () => ({ reviewTicketId: await createReviewTicket() }))
          .step('notify-moderators', async ({ reviewTicketId }) => ({
            moderatorNotified: await notifyModerators(reviewTicketId),
          }))
          .step('hold', () => ({ action: 'held-for-review' as const })),
    },
    {
      name: 'dangerous',
      default: (b) =>
        b
          .step('block-user', async ({ userId }) => ({ blocked: await blockUser(userId) }))
          .step('send-user-email', async ({ blocked }) => ({
            userMessaged: blocked ? await sendUserEmail('Your content was blocked') : false,
          }))
          .step('notify-admin', async () => ({ adminNotified: await notifyAdmin() }))
          .step('finalize', () => ({ action: 'blocked' as const })),
    },
  )
  .transform('format', ({ action, reviewTicketId, moderatorNotified, adminNotified }) => ({
    status: action,
    reviewTicketId,
    moderatorNotified,
    adminNotified,
  }))

await moderator.run({ content: 'Check this out!', userId: 'user-123' })

How It Works

1. Classification

The first step uses AI to classify content into risk levels:
  • Safe: Ready to publish
  • Suspicious: Needs human review
  • Dangerous: Requires immediate blocking
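
The `as 'safe' | 'suspicious' | 'dangerous'` cast in `classify-content` trusts the model's raw reply, which may contain extra words or different casing. A minimal defensive parser is sketched below; the `parseRiskLevel` helper is hypothetical, not part of stepkit, and the fallback choice is an assumption:

```typescript
type RiskLevel = 'safe' | 'suspicious' | 'dangerous'

// Normalize a free-form model reply into a known risk level.
// Unrecognized replies fall back to 'suspicious' so nothing is
// silently published; that conservative default is a design choice
// for this sketch, not stepkit behavior.
function parseRiskLevel(text: string): RiskLevel {
  const normalized = text.trim().toLowerCase()
  if (
    normalized === 'safe' ||
    normalized === 'suspicious' ||
    normalized === 'dangerous'
  ) {
    return normalized
  }
  return 'suspicious'
}
```

Using this helper, the classify step would return `{ riskLevel: parseRiskLevel(text) }` instead of casting.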

2. Branching Logic

The branchOn method routes content through different workflows:
1. Safe: content is immediately published without additional checks.
2. Suspicious: creates a review ticket, notifies moderators, and holds the content for manual review.
3. Dangerous (default): blocks the user, sends a notification email, alerts admins, and prevents publication.
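
The when/default selection above can be sketched in plain TypeScript. This is an illustration of the semantics (first matching `when` wins, a branch without `when` is the fallback), not stepkit's internals; the `Branch` shape and `route` function are assumptions:

```typescript
type Ctx = { riskLevel: 'safe' | 'suspicious' | 'dangerous' }

interface Branch<C, R> {
  name: string
  when?: (ctx: C) => boolean // omitted for the default branch
  then: (ctx: C) => R
}

// Run the first branch whose `when` returns true; otherwise run the
// branch with no `when`. Exactly one branch executes per call.
function route<C, R>(ctx: C, branches: Branch<C, R>[]): R {
  for (const b of branches) {
    if (b.when && b.when(ctx)) return b.then(ctx)
  }
  const fallback = branches.find((b) => !b.when)
  if (!fallback) throw new Error('no matching branch and no default')
  return fallback.then(ctx)
}

const branches: Branch<Ctx, string>[] = [
  { name: 'safe', when: (c) => c.riskLevel === 'safe', then: () => 'published' },
  { name: 'suspicious', when: (c) => c.riskLevel === 'suspicious', then: () => 'held-for-review' },
  { name: 'dangerous', then: () => 'blocked' },
]
```

Because branches are checked in order, overlapping conditions resolve to the first match, which is why the default branch carries no `when` at all.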

3. Context Transform

The final transform step cleans the context, removing intermediate fields and keeping only the relevant status information.
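
A transform in this sense is a pure function from the merged context to a trimmed output. A sketch under that assumption (the `MergedCtx` type is illustrative; branch-specific fields are optional because only one branch populates them):

```typescript
type MergedCtx = {
  content: string
  riskLevel: string
  action: string
  reviewTicketId?: string
  moderatorNotified?: boolean
  adminNotified?: boolean
}

// Keep only the fields the caller cares about. Fields from branches
// that did not run are simply undefined in the result.
function format(ctx: MergedCtx) {
  return {
    status: ctx.action,
    reviewTicketId: ctx.reviewTicketId,
    moderatorNotified: ctx.moderatorNotified,
    adminNotified: ctx.adminNotified,
  }
}
```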

Branch Features

  • Each branch has a name for debugging and logging. Branch names appear in step names like policy-route/safe/publish.
  • Use when conditions to control which branch executes. Only one branch runs per execution.
  • The default branch executes when no other conditions match, providing a fallback path.
  • All branches receive the same input context type and must return compatible output types.

Use Cases

  • User-generated content: Automatically moderate posts, comments, and uploads
  • Chat moderation: Filter messages in real-time communication
  • Content approval workflows: Route content through review queues
  • Automated compliance: Flag content that violates policies

Combine branching with human approval flows for cases where suspicious content needs manual review before final action.

AI Workflows

Learn about parallel execution and conditional steps

Data Processing

Explore transform and context management patterns
