Why Observability?
- Debug Faster: See exactly what your agent did and why
- Optimize Performance: Identify bottlenecks and slow operations
- Monitor Costs: Track token usage and API calls
- Improve Quality: Analyze successful vs failed interactions
- Trace Multi-Agent Workflows: Follow complex workflows across agents
Quick Start
Observability is automatically enabled when you have a Celesto API key.
Automatic Tracing
Agentor automatically captures:
- LLM Calls: Model name, tokens, latency, cost
- Tool Calls: Which tools were called and their results
- Agent Handoffs: Multi-agent communication flows
- Errors: Exception details and stack traces
- Timing: Duration of each operation
- Input/Output: Messages and responses
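The setup snippet appears to be missing from this page. As a minimal sketch, assuming the API key lives in a `CELESTO_API_KEY` environment variable (the variable name is an assumption; check your Celesto settings) and using the `setup_celesto_tracing` helper shown under Best Practices below:

```python
import os

# Assumption: the env var name; check your Celesto settings for the exact one.
api_key = os.environ.get("CELESTO_API_KEY")

if api_key:
    # Helper confirmed under Best Practices below.
    from agentor.tracer import setup_celesto_tracing

    processor = setup_celesto_tracing(
        endpoint="https://api.celesto.ai/traces/ingest",
        token=api_key,
    )
    # From here on, agent runs are traced automatically.
```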
Manual Tracing Setup
For more control, you can enable tracing explicitly with setup_celesto_tracing (shown under Best Practices below).
Custom Tracing
For advanced use cases, configure tracing manually, for example with a custom ingest endpoint or token.
Grouping Traces
Group related operations (conversations, sessions) by passing a shared group_id to get_run_config.
Adding Metadata
Enrich traces with custom metadata.
Viewing Traces
Access your traces in the Celesto dashboard:
- Visit https://celesto.ai/observe
- Log in with your account
- View traces in real-time
Trace Details
Each trace shows:
- Timeline: Visual representation of operations
- Spans: Individual operations (LLM calls, tool calls)
- Tokens: Input/output tokens per call
- Cost: Estimated cost per operation
- Latency: Time spent in each operation
- Errors: Any exceptions or failures
- Metadata: Custom metadata you added
Filtering Traces
Filter by:
- Agent name
- Time range
- Status (success/failure)
- Group ID (session)
- Custom metadata
- Token usage
- Cost
Monitoring Patterns
Track Token Usage
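The code for this pattern did not survive extraction. As an illustrative sketch (the span-dict shape below is an assumption, not Celesto's actual export format), you might aggregate token counts per model like this:

```python
from collections import defaultdict

def summarize_tokens(spans):
    """Sum input/output tokens per model from a list of span dicts."""
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for span in spans:
        model = span.get("model", "unknown")
        totals[model]["input"] += span.get("input_tokens", 0)
        totals[model]["output"] += span.get("output_tokens", 0)
    return dict(totals)

# Illustrative span records; real exports will have more fields.
spans = [
    {"model": "gpt-4o", "input_tokens": 1200, "output_tokens": 300},
    {"model": "gpt-4o", "input_tokens": 800, "output_tokens": 150},
    {"model": "gpt-4o-mini", "input_tokens": 500, "output_tokens": 100},
]
print(summarize_tokens(spans))
```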
Monitor Error Rates
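The error-rate snippet is likewise missing. A minimal sketch over exported trace records (the status field mirrors the success/failure filter mentioned above; the record shape and alert threshold are assumptions):

```python
def error_rate(traces):
    """Fraction of traces with status 'failure'."""
    if not traces:
        return 0.0
    failures = sum(1 for t in traces if t.get("status") == "failure")
    return failures / len(traces)

# Illustrative trace records.
recent = [
    {"status": "success"},
    {"status": "failure"},
    {"status": "success"},
    {"status": "success"},
]

rate = error_rate(recent)
if rate > 0.10:  # example alert threshold, not a Celesto default
    print(f"High error rate: {rate:.0%}")
```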
A/B Testing
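The A/B-testing snippet is also missing. One common approach (a sketch; the experiment/variant key names are assumptions) is to bucket users deterministically and tag each run's metadata with the variant, then filter by that metadata in the dashboard to compare quality, cost, and latency:

```python
import hashlib

def assign_variant(user_id: str, variants=("prompt_v1", "prompt_v2")):
    """Deterministically bucket a user into an experiment variant."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[digest % len(variants)]

# Attach to the run's metadata (see Adding Metadata above), then filter
# traces by these keys to compare the two variants.
metadata = {
    "experiment": "prompt_rewrite_test",
    "variant": assign_variant("user-42"),
}
```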
Multi-Agent Tracing
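For multi-agent workflows, the pattern from Best Practices below (get_run_config with a group_id) extends naturally: generate one session ID and pass the same group_id to every agent so their traces land in a single group. A sketch, with the get_run_config import path guarded because it is an assumption:

```python
import uuid

# One group id for the whole workflow: every agent run that uses it
# is linked into the same trace group in the dashboard.
session_id = f"session-{uuid.uuid4().hex}"

try:
    from agentor import get_run_config  # import path is an assumption

    # Pass the same config to each agent in the workflow.
    research_config = get_run_config(group_id=session_id)
    writer_config = get_run_config(group_id=session_id)
except ImportError:
    research_config = writer_config = None  # agentor not installed
```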
Performance Optimization
Use traces to identify bottlenecks, such as slow operations or LLM calls that hit the max_tokens limit.
Best Practices
Use group IDs so related runs form a trackable session:

```python
# Good - trackable session
config = get_run_config(group_id=session_id)

# Less useful - isolated traces
config = get_run_config()  # No group_id
```

Add rich metadata rather than minimal context:

```python
# Good - rich context
metadata = {
    "user_id": user_id,
    "user_tier": "premium",
    "feature": "research",
    "version": "v2",
}

# Less useful - minimal context
metadata = {"timestamp": time.time()}
```

Flush traces before short-lived scripts exit:

```python
from agentor.tracer import setup_celesto_tracing

processor = setup_celesto_tracing(
    endpoint="https://api.celesto.ai/traces/ingest",
    token=api_key,
)

try:
    # Your agent code
    pass
finally:
    processor.force_flush()  # Ensure traces are sent
    processor.shutdown()
```
Privacy and Security
Sensitive Data
Tracing includes input/output content by default. For sensitive data, consider redacting fields or disabling input/output capture before traces leave your environment.
Data Retention
Traces are stored according to your Celesto plan:
- Free tier: 7 days
- Pro tier: 30 days
- Enterprise: Custom retention
Troubleshooting
Traces Not Appearing
Check:
- API key is set correctly
- Network connectivity to Celesto
- No firewall blocking outbound requests
- Traces are flushed (for scripts)
High Latency
Tracing adds minimal overhead (typically less than 10 ms). If you are seeing higher latency, check your network connectivity to the Celesto endpoint.
Missing Metadata
Ensure you’re using get_run_config so your custom metadata is attached to each run.
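A sketch, assuming get_run_config accepts a metadata argument (the keyword and import path are assumptions; check the function's signature in your version):

```python
metadata = {
    "user_id": "u-123",
    "user_tier": "premium",
    "feature": "research",
}

try:
    from agentor import get_run_config  # import path is an assumption

    # Assumption: metadata is passed as a keyword alongside group_id.
    config = get_run_config(group_id="session-abc", metadata=metadata)
except ImportError:
    config = None  # agentor not installed in this environment
```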
Next Steps
- Deploy agents with Celesto CLI for automatic observability
- Learn about streaming to monitor responses in real-time
- Explore agent communication tracing patterns