Overview
This example demonstrates how to build a content moderation system with branching logic. Based on AI classification, content follows different paths: safe content gets published, suspicious content goes to review, and dangerous content triggers blocking.

Content Moderation Pipeline
This pipeline classifies content and routes it through different workflows based on risk level.

How It Works
1. Classification
The first step uses AI to classify content into risk levels:
- Safe: Ready to publish
- Suspicious: Needs human review
- Dangerous: Requires immediate blocking
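The classification step's API isn't shown here, so as a stand-in the sketch below uses simple keyword rules in place of a real AI call. The names `RiskLevel` and `classifyContent` are hypothetical, chosen only to illustrate the three-level output:

```typescript
// Risk levels produced by the classification step.
type RiskLevel = "safe" | "suspicious" | "dangerous";

// Hypothetical stand-in for the AI classifier: a real pipeline would
// call a model here; keyword rules keep the sketch self-contained.
function classifyContent(text: string): RiskLevel {
  const lower = text.toLowerCase();
  if (/(malware|phishing|threat)/.test(lower)) return "dangerous";
  if (/(spam|scam|suspicious)/.test(lower)) return "suspicious";
  return "safe";
}
```

Whatever produces the label, the important property is that downstream branching only depends on this single `RiskLevel` value.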
2. Branching Logic
The branchOn method routes content through different workflows:
Suspicious Content
Creates a review ticket, notifies moderators, and holds the content for manual review.
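Since the library's branchOn signature isn't shown on this page, the following is a minimal sketch of the same routing idea in plain TypeScript: named branches with when predicates, where the first match runs and a fallback plays the role of the default branch. All type and function names here are illustrative, not the library's API:

```typescript
type RiskLevel = "safe" | "suspicious" | "dangerous";

interface ModerationContext {
  content: string;
  risk: RiskLevel;
  status?: string;
}

// A named branch: a `when` predicate plus the workflow to run.
interface Branch {
  name: string;
  when: (ctx: ModerationContext) => boolean;
  run: (ctx: ModerationContext) => ModerationContext;
}

// branchOn-style router: only the first matching branch executes;
// `fallback` stands in for the default branch.
function routeBranches(
  ctx: ModerationContext,
  branches: Branch[],
  fallback: Branch["run"],
): ModerationContext {
  const match = branches.find((b) => b.when(ctx));
  return match ? match.run(ctx) : fallback(ctx);
}

const result = routeBranches(
  { content: "odd offer", risk: "suspicious" },
  [
    { name: "publish", when: (c) => c.risk === "safe",
      run: (c) => ({ ...c, status: "published" }) },
    { name: "review", when: (c) => c.risk === "suspicious",
      run: (c) => ({ ...c, status: "held-for-review" }) },
  ],
  (c) => ({ ...c, status: "blocked" }), // default: block anything unmatched
);
```

Keeping each branch as a named entry is what makes step names like policy-route/safe/publish possible in logs.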
3. Context Transform
The final transform step cleans the context, removing intermediate fields and keeping only the relevant status information.
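As a sketch of that cleanup, assuming a context shape with some intermediate working fields (the field names below are hypothetical), the transform can simply project out the fields worth keeping:

```typescript
// Raw context after branching: includes intermediate working fields.
interface RawContext {
  content: string;
  status: string;
  classifierScores?: number[]; // intermediate, dropped by the transform
  reviewTicketDraft?: string;  // intermediate, dropped by the transform
}

// Keep only the relevant status information for downstream consumers.
function transformContext(ctx: RawContext): { content: string; status: string } {
  const { content, status } = ctx;
  return { content, status };
}
```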
Branch Features
Named Branches
Each branch has a name for debugging and logging. Branch names appear in step names like policy-route/safe/publish.

Conditional Execution
Use when conditions to control which branch executes. Only one branch runs per execution.

Default Branch
The default branch executes when no other conditions match, providing a fallback path.

Type Safety
All branches receive the same input context type and must return compatible output types.
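That guarantee can be expressed as a single shared function type: every branch takes the same input context and returns the same output shape, so the compiler rejects a branch that returns something incompatible. The names below are illustrative only:

```typescript
interface Ctx { content: string; risk: string }
interface Out { content: string; status: string }

// One function type shared by all branches: same input, same output.
type BranchFn = (ctx: Ctx) => Out;

const publish: BranchFn = (c) => ({ content: c.content, status: "published" });
const block: BranchFn = (c) => ({ content: c.content, status: "blocked" });
// A branch returning, say, { score: number } would fail to typecheck.
```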
Use Cases
- User-generated content: Automatically moderate posts, comments, and uploads
- Chat moderation: Filter messages in real-time communication
- Content approval workflows: Route content through review queues
- Automated compliance: Flag content that violates policies
Related
AI Workflows
Learn about parallel execution and conditional steps
Data Processing
Explore transform and context management patterns