Overview
The OrchestratorAgent transforms natural language event descriptions into complete scene specifications ready for simulation. It handles entity rosters, timepoint sequences, relationship graphs, and initial knowledge seeding.
Module: orchestrator.py
Architecture:
Core Components
OrchestratorAgent
Top-level coordinator for scene-to-simulation compilation.

Parameters:
- event_description: Natural language, e.g. "simulate the constitutional convention"
- context: Optional context (temporal_mode, max_entities, max_timepoints, etc.)
- save_to_db: Whether to save entities/timepoints to the database
Returns:
- specification: Complete SceneSpecification
- entities: List of populated Entity objects
- timepoints: List of Timepoint objects
- graph: NetworkX relationship graph
- exposure_events: Initial knowledge exposure events
- temporal_agent: Configured TemporalAgent for the scene
SceneParser
Parses natural language into a structured scene specification. Generation strategies:
- Single-pass: For scenarios with under 40 entities and under 80 timepoints
- Chunked generation: For large scenarios (multi-pass hierarchical generation)
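The split above is a simple threshold check (thresholds from the text; the function name is illustrative, not SceneParser's actual API):

```python
def choose_parse_strategy(num_entities, num_timepoints):
    """Pick a generation strategy using the documented thresholds.

    Illustrative sketch only; the real SceneParser makes this
    choice internally.
    """
    if num_entities < 40 and num_timepoints < 80:
        return "single_pass"
    return "chunked"

print(choose_parse_strategy(12, 30))   # small scenario -> single_pass
print(choose_parse_strategy(60, 120))  # large scenario -> chunked
```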
KnowledgeSeeder
Seeds initial entity knowledge states from the scene specification. Returns ExposureEvent records.
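A minimal sketch of what seeding produces, assuming each exposure event records which entity learned which fact at which timepoint (field names here are illustrative, not the actual ExposureEvent schema):

```python
def seed_knowledge(initial_knowledge, timepoint_id):
    """Turn per-entity starting facts into exposure-event records.

    Sketch only: the real KnowledgeSeeder emits ExposureEvent
    objects; these dicts just mirror the idea.
    """
    events = []
    for entity_id, facts in initial_knowledge.items():
        for fact in facts:
            events.append({
                "entity_id": entity_id,
                "content": fact,
                "timepoint_id": timepoint_id,
                "source": "scene_seed",  # provenance marker (assumed name)
            })
    return events

events = seed_knowledge({"franklin": ["knows the agenda"]}, "t0")
```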
RelationshipExtractor
Builds a social/spatial relationship graph from entity specifications. Graph structure:
- Nodes: entity_ids with metadata (type, role, description)
- Edges: Relationships with types (ally, rival, mentor) and weights
- Co-presence edges: Entities present at same timepoints
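The co-presence logic can be sketched dependency-free with plain dictionaries (entity names and relationship tuples are illustrative; the real extractor builds a NetworkX graph):

```python
from itertools import combinations

def build_edges(relationships, presence):
    """Collect explicit relationship edges plus co-presence edges.

    relationships: iterable of (a, b, rel_type, weight)
    presence: mapping of timepoint_id -> entity_ids present
    Sketch only; the real RelationshipExtractor returns a NetworkX graph.
    """
    edges = {}
    for a, b, rel_type, weight in relationships:
        edges[frozenset((a, b))] = {"type": rel_type, "weight": weight}
    # Co-presence edges never overwrite explicit relationships.
    for entities in presence.values():
        for a, b in combinations(sorted(entities), 2):
            edges.setdefault(frozenset((a, b)),
                             {"type": "co_presence", "weight": 1.0})
    return edges

edges = build_edges(
    [("hamilton", "burr", "rival", 0.9)],
    {"t0": ["hamilton", "burr", "washington"]},
)
```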
ResolutionAssigner
Assigns resolution levels to entities based on roles and centrality. Returns:
- Dictionary mapping entity_id to ResolutionLevel
- Estimated cost (USD) for the simulation
| Role | Centrality | Resolution |
|---|---|---|
| primary | over 0.5 | TRAINED |
| primary | under 0.5 | DIALOG |
| secondary | over 0.3 | DIALOG |
| secondary | under 0.3 | GRAPH |
| background | any | SCENE |
| environment | any | TENSOR_ONLY |
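The table reads as a straightforward lookup; a sketch (levels from the table above; behavior at exactly 0.5/0.3 is an assumption, and the function name is illustrative):

```python
def assign_resolution(role, centrality):
    """Map role and graph centrality to a resolution level per the table.

    Sketch of the documented rules; the real ResolutionAssigner returns
    ResolutionLevel members plus a cost estimate.
    """
    if role == "primary":
        return "TRAINED" if centrality > 0.5 else "DIALOG"
    if role == "secondary":
        return "DIALOG" if centrality > 0.3 else "GRAPH"
    if role == "background":
        return "SCENE"
    return "TENSOR_ONLY"  # environment
```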
Data Schemas
SceneSpecification
EntityRosterItem
TimepointSpec
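The schema bodies are not reproduced here; as orientation, a hedged sketch of plausible shapes inferred from fields mentioned elsewhere on this page (all field names are assumptions, not the actual models):

```python
from dataclasses import dataclass, field

# Hypothetical sketches only -- the real schemas live in the Schemas
# module; fields are inferred from this page, not copied from the code.

@dataclass
class EntityRosterItem:
    entity_id: str
    type: str            # e.g. "person"
    role: str            # primary / secondary / background / environment
    description: str = ""

@dataclass
class TimepointSpec:
    timepoint_id: str
    description: str = ""
    entities_present: list = field(default_factory=list)

@dataclass
class SceneSpecification:
    entities: list = field(default_factory=list)    # EntityRosterItem
    timepoints: list = field(default_factory=list)  # TimepointSpec
```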
Model Selection
Automatic model selection based on scenario size:

Standard scenarios (under 50k estimated tokens):
- Uses Llama 4 Scout (327K context, 42K output limit)
- 2x safety margin for token allocation

Large scenarios (50k estimated tokens or more):
- Uses Llama 405B (100K output limit)
- 1.5x safety margin for token allocation
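The selection rule as a sketch (threshold and margins from the text; the model identifiers are descriptive labels, not exact API strings):

```python
def select_model(estimated_tokens):
    """Pick a model and token-allocation safety margin per the rules above.

    Sketch only; model names here are labels, not the exact identifiers
    the LLM client passes to the API.
    """
    if estimated_tokens < 50_000:
        return ("llama-4-scout", 2.0)   # 327K context, 42K output limit
    return ("llama-405b", 1.5)          # 100K output limit
```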
Error Handling
Timeout errors:
- API request timed out
- Solutions: Retry, reduce scale, use a faster model

Truncation errors:
- Response truncated or malformed
- Solutions: Reduce scale, check logs, avoid MAX mode

Validation errors:
- Schema mismatch or missing fields
- Solutions: Check logs, retry, report if consistent
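The "retry" advice can be sketched as a generic backoff wrapper (illustrative; this is not the orchestrator's built-in error handling):

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Retry a flaky call with exponential backoff.

    Generic sketch of the retry advice above, not the orchestrator's
    actual recovery logic.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)
```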
Configuration Options
Temporal Modes:
- forward: Standard causality (default)
- directorial: Narrative focus with dramatic structure
- branching: Counterfactual what-if scenarios
- cyclical: Time loops and prophecy
- portal: Backward inference from endpoint
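Passing a mode through the context dict might look like this (the make_context helper is illustrative; only the key names come from the OrchestratorAgent signature):

```python
TEMPORAL_MODES = {"forward", "directorial", "branching", "cyclical", "portal"}

def make_context(temporal_mode="forward", **extra):
    """Build a context dict, rejecting unknown temporal modes.

    Illustrative helper; the orchestrator itself may validate differently.
    """
    if temporal_mode not in TEMPORAL_MODES:
        raise ValueError(f"unknown temporal_mode: {temporal_mode}")
    return {"temporal_mode": temporal_mode, **extra}

ctx = make_context("branching", max_entities=20, max_timepoints=40)
```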
Integration with Workflows
Feed to Entity Training:

Best Practices
- Use chunked generation for >40 entities or >80 timepoints
- Provide temporal_mode for mode-specific affordances
- Load predefined profiles for key characters
- Set token budgets for cost control
- Check exposure events for knowledge provenance
- Validate graph structure before simulation
- Monitor cost estimates before large runs
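The "validate graph structure" item can be made concrete with a small sanity check (pure-Python sketch; adapt to the actual NetworkX graph via its .nodes view):

```python
def validate_graph(entity_ids, graph_nodes):
    """Return rostered entity_ids missing from the relationship graph.

    Sketch of a pre-simulation check; an empty result means every
    entity is represented as a node.
    """
    return sorted(set(entity_ids) - set(graph_nodes))

missing = validate_graph(["hamilton", "burr", "jay"], ["hamilton", "burr"])
```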
Related
- LLM Client - LLM integration
- Storage - Database persistence
- Workflows - Entity training workflows
- Schemas - Data models

