The Core Challenge
Generating and synthesizing entity behavior at the appropriate fidelity level, from on-demand creation to detailed dialog with differentiated voices. Key principle: entities range from background elements (TENSOR_ONLY) to fully realized characters (TRAINED) based on narrative importance and query patterns.
M9: On-Demand Entity Generation
Queries may reference entities that don’t exist yet. The system detects this and generates plausible entities on demand.
Detection
Castaway Colony Example
Query: “What did the crew find in cargo bay 3?”
Action: Generate inventory entities on demand:
- Supply crates with plausible damage states
- Utility values
- Contents consistent with research vessel
- Injury states
- Knowledge profiles
- Skill sets consistent with research vessel crew
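The detect-then-generate flow above can be sketched as follows. This is a minimal illustration, not the system's implementation: the registry, the regex trigger, and the `GeneratedEntity` fields are all assumptions.

```python
import re
from dataclasses import dataclass, field

# Hypothetical registry of known entity names; the real system would
# query its entity store instead.
KNOWN_ENTITIES = {"commander_tanaka", "meridian_ship"}

CARGO_PATTERN = re.compile(r"cargo bay (\d+)", re.IGNORECASE)

@dataclass
class GeneratedEntity:
    name: str
    entity_type: str
    properties: dict = field(default_factory=dict)

def detect_missing_entities(query: str) -> list[str]:
    """Return referenced entities that do not exist yet (illustrative match)."""
    missing = []
    for match in CARGO_PATTERN.finditer(query):
        name = f"cargo_bay_{match.group(1)}"
        if name not in KNOWN_ENTITIES:
            missing.append(name)
    return missing

def generate_on_demand(name: str) -> list[GeneratedEntity]:
    """Generate plausible inventory entities consistent with a research vessel."""
    return [
        GeneratedEntity("supply_crate_1", "object",
                        {"damage_state": "dented", "utility": 0.7,
                         "contents": "soil sampling kits"}),
        GeneratedEntity("supply_crate_2", "object",
                        {"damage_state": "intact", "utility": 0.9,
                         "contents": "medical supplies"}),
    ]

missing = detect_missing_entities("What did the crew find in cargo bay 3?")
entities = [e for name in missing for e in generate_on_demand(name)]
```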
M10: Scene-Level Entity Sets
Scenes have their own entity types that influence individual behavior.
Three Scene Entity Types
1. EnvironmentEntity
Physical space properties (schemas.py:200-212).
2. AtmosphereEntity
Aggregated emotional and social atmosphere (schemas.py:215-225).
3. CrowdEntity
Collective behavior and composition (schemas.py:227-240).
Context Shapes Behavior
Individual entity behavior is synthesized in the context of scene-level state: a heated argument affects the atmosphere, which in turn shapes how other entities behave within a specific environment.
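The three scene entity types and their coupling to individual behavior can be sketched as below. Field names and the coupling rule are illustrative; the actual schemas live at the cited schemas.py line ranges.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentEntity:
    scene_id: str
    lighting: str
    noise_level: float   # 0.0 quiet .. 1.0 deafening (assumed scale)

@dataclass
class AtmosphereEntity:
    scene_id: str
    tension: float       # aggregated emotional tension, 0..1 (assumed scale)
    dominant_mood: str

@dataclass
class CrowdEntity:
    scene_id: str
    size: int
    composition: str

def contextual_arousal(base_arousal: float, atmosphere: AtmosphereEntity) -> float:
    """A heated argument raises scene tension, which nudges every
    participant's arousal upward (clamped to [0, 1]; coefficient assumed)."""
    return min(1.0, base_arousal + 0.5 * atmosphere.tension)
```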
M11: Dialog Synthesis
Dialog generation uses per-character turn generation: each character produces dialog turns via independent LLM calls with persona-derived generation parameters, coordinated by a LangGraph steering agent.
Architecture: Three LangGraph Nodes
1. Steering Node
Selects next speaker, evaluates narrative progress, injects mood shifts. Capabilities:
- Speaker selection using back-layer contexts and proception states
- Mood shift injection
- Speaker suppression
- Dialog termination decisions
2. Character Node
Generates ONE dialog turn for the selected speaker. Uses:
- Character-specific PersonaParams (temperature, top_p, max_tokens)
- FourthWallContext (two-layer context separation)
- Entity state (emotional, physical, knowledge)
3. Quality Gate Node
Two-stage evaluation. Stage 1: surface heuristics (cheap pre-filter):
- Banned openers
- Round-robin speaker distribution
- Turn length coefficient of variation
- Consensus ratio
- Length spread
- Narrative advancement
- Conflict specificity
- Voice distinctiveness
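A minimal sketch of the cheap pre-filter stage, covering two of the checks above. The banned-opener list and the coefficient-of-variation cutoff are assumptions, not the gate's real thresholds.

```python
import statistics

# Illustrative values; the real quality gate's lists and thresholds
# are not specified here.
BANNED_OPENERS = ("Well,", "So,", "Look,")

def surface_check(turns: list[dict]) -> bool:
    """Cheap pre-filter over a list of {"speaker": str, "text": str} turns.
    Rejects banned openers and near-uniform turn lengths (a low coefficient
    of variation suggests undifferentiated voices)."""
    if any(t["text"].startswith(BANNED_OPENERS) for t in turns):
        return False
    lengths = [len(t["text"].split()) for t in turns]
    if len(lengths) > 1:
        cv = statistics.stdev(lengths) / statistics.mean(lengths)
        if cv < 0.15:  # suspiciously uniform turn lengths
            return False
    return True
```

Passing this stage only earns a dialog the right to the more expensive Stage 2 checks; it never accepts a dialog on its own.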
Dialog Turn Structure
schemas.py:338-348
Params2Persona: Entity State → LLM Parameters
Each character’s LLM call uses generation parameters derived from their current cognitive state. From synth/params2persona.py:
| Parameter | Source | Effect |
|---|---|---|
| temperature | arousal × energy | High arousal → varied output (~1.1); low energy → constrained (~0.4) |
| top_p | arousal | Agitated characters focus vocabulary (lower top_p) |
| max_tokens | turn position × energy | Later turns + low energy → shorter responses |
| frequency_penalty | behavior_vector[5] | Vocabulary richness index |
| presence_penalty | behavior_vector[6] | Novelty seeking index |
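The table's directional effects can be reproduced with a sketch like the one below. Every coefficient is an assumption chosen to hit the stated extremes (~1.1 at high arousal, ~0.4 at low energy); synth/params2persona.py defines the real curves.

```python
def persona_params(arousal: float, energy: float, turn_index: int,
                   behavior_vector: list[float]) -> dict:
    """Map cognitive state to LLM generation parameters (illustrative)."""
    # High arousal -> varied output (~1.1); low arousal + low energy -> ~0.4
    temperature = 0.4 + 0.7 * (0.6 * arousal + 0.4 * energy)
    # Agitated characters focus vocabulary (lower top_p)
    top_p = 1.0 - 0.3 * arousal
    # Later turns + low energy -> shorter responses
    max_tokens = int(240 * energy * max(0.4, 1.0 - 0.05 * turn_index))
    return {
        "temperature": round(temperature, 2),
        "top_p": round(top_p, 2),
        "max_tokens": max_tokens,
        "frequency_penalty": behavior_vector[5],  # vocabulary richness index
        "presence_penalty": behavior_vector[6],   # novelty seeking index
    }
```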
Fourth Wall Context (Two-Layer Separation)
Each character receives a structured two-layer context. From workflows/dialog_context.py:
Back Layer (shapes voice, NOT expressed in dialog)
- True emotional state
- Withheld knowledge
- Suppressed impulses
- Anxiety level
- ADPRS band/phi
- Steering directives
Front Layer (character’s actual knowledge)
- Knowledge items (filtered by resolution level and causal ancestry)
- Natural-language relationship descriptions
- Presented emotional state
- Physical state
- Scene context
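The two-layer split can be sketched as a pair of dataclasses. Field names are illustrative; workflows/dialog_context.py defines the actual structure.

```python
from dataclasses import dataclass, field

@dataclass
class BackLayer:
    """Shapes voice but is never expressed directly in dialog."""
    true_emotional_state: str
    withheld_knowledge: list[str] = field(default_factory=list)
    suppressed_impulses: list[str] = field(default_factory=list)
    anxiety_level: float = 0.0
    steering_directives: list[str] = field(default_factory=list)

@dataclass
class FrontLayer:
    """The character's actual, expressible knowledge."""
    knowledge_items: list[str] = field(default_factory=list)
    relationships: list[str] = field(default_factory=list)
    presented_emotional_state: str = "neutral"
    scene_context: str = ""

@dataclass
class FourthWallContext:
    back: BackLayer
    front: FrontLayer

    def prompt_sections(self) -> tuple[str, str]:
        """Back layer feeds system-level voice guidance; front layer feeds
        the character's knowledge block (wording is illustrative)."""
        back = f"(voice guidance, never state directly) anxiety={self.back.anxiety_level}"
        front = "; ".join(self.front.knowledge_items)
        return back, front
```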
Knowledge Limits by Resolution
| Resolution | Knowledge Items |
|---|---|
| TENSOR_ONLY | 5 items |
| SCENE | 8 items |
| GRAPH | 12 items |
| DIALOG | 16 items |
| TRAINED | 20 items |
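Applying the caps is a straightforward lookup and truncation; the sketch below assumes items arrive already ordered (e.g. by relevance), which would happen upstream in the real system.

```python
# Knowledge-item caps per resolution level, as in the table above.
KNOWLEDGE_LIMITS = {
    "TENSOR_ONLY": 5,
    "SCENE": 8,
    "GRAPH": 12,
    "DIALOG": 16,
    "TRAINED": 20,
}

def cap_knowledge(items: list[str], resolution: str) -> list[str]:
    """Truncate a character's knowledge items to the resolution's cap."""
    return items[: KNOWLEDGE_LIMITS[resolution]]
```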
PORTAL Knowledge Stripping
In PORTAL mode, characters only know things from timepoints causally upstream of their current position.
Voice Differentiation Pipeline
A 4-stage transformation.
Entity Type Filtering (Animism-Aware)
Non-human entities (animals, buildings, environments, abstracts) are filtered from dialog participation based on the template’s animism_level setting.
Entities that pass the threshold receive per-type speaking modes:
| Entity Type | Speaking Mode |
|---|---|
| Animals | Behavioral narration (third-person actions, no human grammar) |
| Buildings/environments | Environmental narration (sensory descriptions felt by occupants) |
| Abstracts | Collective consciousness (emergent sentiment, atmospheric shift) |
Tensor Synchronization
Before dialog synthesis, a TTMTensor → CognitiveTensor sync ensures trained emotional values are used. After dialog:
- Emotional Impact Analysis: Dialog content analyzed for emotional keywords
- State Persistence: Updated emotional states written to entity_metadata["cognitive_tensor"]
- Backprop to Tensor: Changes synced back to TTMTensor context_vector
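The post-dialog half of this cycle can be sketched as below. The keyword table, the valence field, and the vector index are all assumptions for illustration.

```python
import re

EMOTION_KEYWORDS = {  # illustrative keyword -> valence delta
    "furious": -0.3, "grief": -0.2, "hope": +0.2, "relief": +0.3,
}

def analyze_emotional_impact(dialog_text: str) -> float:
    """Keyword scan over dialog content; returns a net valence delta."""
    words = re.findall(r"[a-z]+", dialog_text.lower())
    return sum(EMOTION_KEYWORDS.get(w, 0.0) for w in words)

def persist_and_backprop(entity_metadata: dict, context_vector: list[float],
                         dialog_text: str, valence_index: int = 0) -> None:
    """Write the updated emotional state to entity_metadata and sync the
    change back into the tensor's context_vector (index is assumed)."""
    delta = analyze_emotional_impact(dialog_text)
    tensor = entity_metadata.setdefault("cognitive_tensor", {"valence": 0.0})
    tensor["valence"] = max(-1.0, min(1.0, tensor["valence"] + delta))
    context_vector[valence_index] = tensor["valence"]
```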
Temporal Freshness (Beat Avoidance)
Dialog synthesis accepts prior_dialog_beats, a rolling list of speaker+content summaries from previous dialogs in the same run.
These beats are listed in the prompt with an explicit “Do NOT repeat” instruction, forcing the LLM to advance the narrative rather than recycle the same beats across every timepoint.
Cross-dialog semantic evaluation reinforces this.
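Rendering the beats into the prompt might look like this sketch (the exact prompt wording is an assumption):

```python
def beats_section(prior_dialog_beats: list[dict]) -> str:
    """Render prior beats as a prompt section with an explicit
    do-not-repeat instruction. Each beat is {"speaker": str, "summary": str}."""
    if not prior_dialog_beats:
        return ""
    lines = [f"- {b['speaker']}: {b['summary']}" for b in prior_dialog_beats]
    return ("Previously covered beats -- do NOT repeat these; advance the "
            "narrative instead:\n" + "\n".join(lines))
```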
Dialog Data Structure
schemas.py:362-369
M13: Multi-Entity Synthesis
Relationships evolve and can be analyzed across entities.
Relationship Metrics
Belief Alignment Tracking
As entities interact across timepoints:
- Shared knowledge grows or diverges
- Trust levels evolve based on consistency
- Power dynamics shift based on decisions and outcomes
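Trust-from-consistency can be illustrated with a simple exponential nudge; the rule and rate below are assumptions, not the system's actual metric.

```python
def update_trust(trust: float, promised: bool, delivered: bool,
                 rate: float = 0.1) -> float:
    """Nudge trust toward 1.0 when an entity's actions match its statements,
    toward 0.0 when they don't (illustrative update rule)."""
    consistent = promised == delivered
    target = 1.0 if consistent else 0.0
    return trust + rate * (target - trust)
```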
M15: Entity Prospection (Extended Inner Life)
Entities model their own futures, and those models influence present behavior.
Prospective State
- Planning
- Anxiety
- Anticipatory behavior
Post-Dialog Proception
After every dialog, trigger_post_dialog_proception() updates each participant’s inner life:
1. Episodic Memory
generate_episodic_memory: LLM generates personality-filtered memories of the conversation. Each entity remembers differently based on what was personally relevant.
2. Rumination
update_rumination_topics: Recurring concerns tracked across dialogs.
- Topics addressed in dialog lose intensity
- Unresolved topics intensify
- New concerns identified from dialog content
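The three rumination rules above can be sketched as one update pass; the decay and growth rates are illustrative assumptions.

```python
def update_rumination(topics: dict[str, float], dialog_topics: set[str],
                      new_concerns: set[str]) -> dict[str, float]:
    """Addressed topics lose intensity, unresolved ones intensify, and new
    concerns enter at a low baseline. Intensities live in [0, 1]."""
    updated = {}
    for topic, intensity in topics.items():
        if topic in dialog_topics:
            intensity *= 0.5                       # addressed: fades
        else:
            intensity = min(1.0, intensity + 0.1)  # unresolved: intensifies
        if intensity > 0.05:                       # drop fully faded topics
            updated[topic] = intensity
    for topic in new_concerns - updated.keys():
        updated[topic] = 0.2                       # new concern baseline
    return updated
```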
3. Withheld Knowledge
What the character knew but chose not to say (from steering agent decisions). Persists across dialogs and feeds back into the Fourth Wall back layer.
4. Suppressed Impulses
What the character wanted to do but held back (from steering agent and social norms). Feeds into future dialog tension.
5. Knowledge Generation
generate_knowledge_from_proception: Expectations and high-intensity rumination topics generate M3 exposure events, making inner life accessible in future dialogs.
M16: Animistic Entity Extension
Objects, institutions, and places can have agency.Animism Levels
Dialog Enforcement
M16 connects to M11 (Dialog Synthesis) through _filter_dialog_participants().
Entity type → minimum animism_level threshold:
| Entity Type | Threshold | Speaking Mode |
|---|---|---|
| human, person, character | 0 | Normal dialog |
| animal, creature | 1 | Behavioral narration (third-person actions) |
| building, object, environment, location, vehicle | 2 | Environmental narration (sensory descriptions) |
| abstract, concept, force | 3 | Collective consciousness (emergent sentiment) |
At animism_level=0 (default), only humans participate in dialog. Higher levels include non-human entities with appropriate speaking modes injected into the prompt.
This prevents non-human entities from speaking as humans while still allowing their presence in the narrative.
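A sketch of what _filter_dialog_participants() is described as doing, built directly from the threshold table; the data shapes are assumptions.

```python
# Entity type -> minimum animism_level threshold (from the table above).
ANIMISM_THRESHOLDS = {
    "human": 0, "person": 0, "character": 0,
    "animal": 1, "creature": 1,
    "building": 2, "object": 2, "environment": 2,
    "location": 2, "vehicle": 2,
    "abstract": 3, "concept": 3, "force": 3,
}

SPEAKING_MODES = {0: "normal", 1: "behavioral_narration",
                  2: "environmental_narration", 3: "collective_consciousness"}

def filter_dialog_participants(entities: list[dict], animism_level: int) -> list[dict]:
    """Keep entities whose type threshold is within the template's
    animism_level, attaching the speaking mode for prompt injection."""
    kept = []
    for e in entities:
        threshold = ANIMISM_THRESHOLDS.get(e["type"], 3)  # unknown types: strictest
        if threshold <= animism_level:
            kept.append({**e, "speaking_mode": SPEAKING_MODES[threshold]})
    return kept
```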
Implementation
With animism_level=2 in the template configuration:
- Commander Tanaka: Normal dialog
- Meridian Ship: Environmental narration (“The ship groans under stress…”)
- Alien Biosphere: Environmental narration (“The flora responds with bioluminescence…”)
Performance Characteristics
Dialog Generation Cost
Per-character turn generation:
- Input: ~1,000 tokens (persona params + fourth wall context + history)
- Output: ~200 tokens per turn
- Models: Llama 70B, Qwen 72B, DeepSeek Chat
- Cost per dialog: ~$0.02 (5-10 turns)
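Back-of-envelope arithmetic behind the ~$0.02 figure, assuming a blended price of roughly $2 per million tokens (the price is an illustrative assumption, not a quote for the listed models):

```python
def dialog_cost(turns: int, input_tokens: int = 1_000, output_tokens: int = 200,
                price_per_token: float = 2.0e-6) -> float:
    """Estimate per-dialog cost: each turn is one LLM call that reads
    ~input_tokens and writes ~output_tokens."""
    return turns * (input_tokens + output_tokens) * price_per_token

cost = dialog_cost(turns=10)  # upper end of the 5-10 turn range
```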
Scene Entity Storage
SQLite with scene_id indexes:
- EnvironmentEntity: ~200 bytes per scene
- AtmosphereEntity: ~150 bytes per scene
- CrowdEntity: ~300 bytes per scene
- Total: ~650 bytes per scene
Next Steps
Infrastructure
M18 model selection and routing
Overview
Back to mechanisms overview

