Agent pipeline anatomy
A typical agent pipeline consists of three core components.
Key principles
- Annotated outputs: Use `Annotated[Type, "artifact_name"]` to track all outputs as versioned artifacts
- Structured results: Return dictionaries or Pydantic models, not just strings
- Error handling: Wrap agent calls in try-except blocks with status tracking
- Metadata capture: Log execution details (latency, tokens, costs) for analysis
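The principles above can be sketched in plain Python. The orchestrator's step decorator is omitted so the sketch runs standalone, and `query_agent` and the `agent_response` artifact name are illustrative:

```python
import time
from typing import Annotated

def query_agent(prompt: str) -> Annotated[dict, "agent_response"]:
    """Agent step: returns a structured result, never a bare string."""
    start = time.perf_counter()
    try:
        # Placeholder for the real LLM/agent call.
        answer = f"Echo: {prompt}"
        status = "success"
    except Exception as exc:  # surface errors as data, not crashes
        answer, status = f"Agent failed: {exc}", "error"
    return {
        "answer": answer,
        "status": status,
        "metadata": {  # capture execution details for later analysis
            "latency_s": round(time.perf_counter() - start, 4),
            "prompt_chars": len(prompt),
        },
    }

result = query_agent("What is an agent pipeline?")
print(result["status"])  # "success"
```

Because the step returns a structured dict rather than a raw string, downstream steps and dashboards can filter on `status` and aggregate the metadata.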
Deployment patterns
Local execution
Run pipelines locally for development and testing:
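Since a decorated pipeline remains an ordinary callable, a local run is just a function call. A runnable stand-in (the `agent_pipeline` stub is hypothetical):

```python
# With ZenML-style decorators, a decorated pipeline is still a callable,
# so local execution is a plain function call.

def agent_pipeline(prompt: str) -> dict:
    # A real project would chain decorated step functions here;
    # a stub keeps the sketch runnable.
    return {"answer": f"Echo: {prompt}", "status": "success"}

if __name__ == "__main__":
    run = agent_pipeline("smoke-test prompt")
    print(run["status"])
```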
HTTP deployment
Deploy as a production HTTP service:
Docker configuration
Package agents with their dependencies:
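A minimal sketch of packaging with ZenML's `DockerSettings`, assuming the pipeline needs the agent frameworks named below:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Pin the agent framework dependencies into the pipeline's image.
docker_settings = DockerSettings(
    requirements=["crewai", "langgraph", "langfuse"],
)

@pipeline(settings={"docker": docker_settings})
def agent_pipeline():
    ...
```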
Multi-agent orchestration
Routing pattern
Route queries to specialized agents:
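A minimal routing sketch with stub agents and keyword-based intent detection (all names are illustrative; production routers often use a classifier or an LLM to choose):

```python
def research_agent(query: str) -> dict:
    return {"agent": "research", "answer": f"Research notes on: {query}"}

def code_agent(query: str) -> dict:
    return {"agent": "code", "answer": f"Code review of: {query}"}

AGENTS = {"research": research_agent, "code": code_agent}

def route(query: str) -> dict:
    # Naive intent detection; swap in a classifier step for production.
    keywords = ("bug", "function", "stack trace")
    name = "code" if any(k in query.lower() for k in keywords) else "research"
    return AGENTS[name](query)

print(route("Why does this function raise a KeyError?")["agent"])  # "code"
```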
CrewAI integration
Orchestrate agent crews:
LangGraph workflows
Build stateful agent workflows:
Hybrid architectures
Combine traditional ML with LLM agents:
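One way to combine them, sketched with a stub classifier: route high-confidence cases through the cheap traditional model and escalate only the ambiguous middle band to the LLM agent (thresholds and scoring are illustrative):

```python
def spam_score(text: str) -> float:
    # Stand-in for a trained classifier's probability output.
    return 0.95 if "free money" in text.lower() else 0.2

def llm_agent(text: str) -> dict:
    # Stand-in for an expensive LLM call on ambiguous inputs.
    return {"label": "needs_review", "source": "llm"}

def classify(text: str, low: float = 0.1, high: float = 0.9) -> dict:
    score = spam_score(text)
    if score >= high:
        return {"label": "spam", "source": "classifier"}
    if score <= low:
        return {"label": "ham", "source": "classifier"}
    return llm_agent(text)  # escalate the ambiguous middle band

print(classify("Claim your free money now")["source"])  # "classifier"
```

The design choice is cost control: the classifier handles the bulk of traffic deterministically, and the LLM only sees inputs the classifier cannot settle.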
Error handling
Robust error handling for production:
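A sketch of the wrap-with-status pattern: retries with backoff, then a fallback response instead of an exception (the flaky agent simulates a transient failure; backoff is zeroed so the example runs instantly):

```python
import time

def call_agent_with_retries(call, prompt, retries=2, backoff_s=0.0):
    """Wrap an agent call: retry transient failures, then fall back."""
    for attempt in range(retries + 1):
        try:
            return {"status": "success", "answer": call(prompt), "attempts": attempt + 1}
        except Exception as exc:
            last_error = str(exc)
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    return {  # graceful degradation instead of a crash
        "status": "error",
        "answer": "Sorry, the agent is unavailable right now.",
        "error": last_error,
        "attempts": retries + 1,
    }

flaky_calls = iter([RuntimeError("rate limited"), "recovered answer"])

def flaky_agent(prompt):
    item = next(flaky_calls)
    if isinstance(item, Exception):
        raise item
    return item

print(call_agent_with_retries(flaky_agent, "hi"))  # succeeds on attempt 2
```

Because the wrapper always returns a dict with a `status` field, downstream steps never have to guess whether the agent call survived.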
Observability integration
Track agent performance with Langfuse:
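Langfuse's SDK surface varies by version, so this vendor-neutral sketch shows the trace fields such an integration records per call, collected into a list standing in for the backend:

```python
import time

TRACES = []  # stand-in for an observability backend such as Langfuse

def traced(agent_fn):
    """Record latency and approximate usage for every agent call."""
    def wrapper(prompt: str):
        start = time.perf_counter()
        answer = agent_fn(prompt)
        TRACES.append({
            "name": agent_fn.__name__,
            "input": prompt,
            "output": answer,
            "latency_s": round(time.perf_counter() - start, 4),
            # Real SDKs report provider token counts; approximated here.
            "approx_tokens": (len(prompt) + len(answer)) // 4,
        })
        return answer
    return wrapper

@traced
def echo_agent(prompt: str) -> str:
    return f"Echo: {prompt}"

echo_agent("how tall is the Eiffel Tower?")
print(TRACES[0]["name"])  # "echo_agent"
```

With Langfuse you would emit the same fields through its SDK instead of appending to a list; the decorator shape stays the same.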
Best practices
- Use artifacts: Store all agent outputs as versioned artifacts with `Annotated`
- Capture metadata: Log latency, tokens, costs, and confidence scores
- Handle errors gracefully: Return status fields and fallback responses
- Enable caching carefully: Set `enable_cache=False` for non-deterministic agents
- Structure outputs: Use Pydantic models or dicts, not raw strings
- Deploy with Docker: Package dependencies with `DockerSettings`
- Monitor production: Integrate observability tools like Langfuse
- Test systematically: Build evaluation pipelines to compare architectures
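The last practice, systematic evaluation, can be sketched as a tiny harness that scores candidate agents on a fixed test set (dataset and agents are illustrative; real evaluations typically use labeled data or an LLM judge):

```python
EVAL_SET = [
    {"question": "2 + 2", "expected": "4"},
    {"question": "capital of France", "expected": "Paris"},
]

def agent_a(q: str) -> str:
    return {"2 + 2": "4", "capital of France": "Paris"}.get(q, "unknown")

def agent_b(q: str) -> str:
    return "4"  # a degenerate agent that always answers "4"

def evaluate(agent) -> float:
    # Exact-match scoring over the evaluation set.
    hits = sum(agent(case["question"]) == case["expected"] for case in EVAL_SET)
    return hits / len(EVAL_SET)

scores = {name: evaluate(fn) for name, fn in [("agent_a", agent_a), ("agent_b", agent_b)]}
print(scores)  # {'agent_a': 1.0, 'agent_b': 0.5}
```

Running every candidate architecture through the same harness turns "which agent is better" into a number you can track across versions.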
Next steps
- Agent frameworks: Integration guides for 12+ agent frameworks
- Agent evaluation: Build systematic evaluation pipelines
- Deploying agents: Complete deployment example with web UI
- Agent comparison: Compare multiple agent architectures
