Overview
Timepoint Pro’s 19 mechanisms group into five conceptual pillars:

- Fidelity Management: allocate detail where queries land. Mechanisms: M1, M2, M5, M6
- Temporal Reasoning: multiple notions of time and causality. Mechanisms: M7, M8, M12, M14, M17
- Knowledge Provenance: track who knows what, from whom, and when. Mechanisms: M3, M4, M19
- Entity Simulation: generate and synthesize entity behavior. Mechanisms: M9, M10, M11, M13, M15, M16
- Infrastructure: model selection and cost optimization. Mechanisms: M18
The insight: These ideas are the value; the mechanisms are derivable implementations. Full specification in MECHANICS.md.
Pillar 1: Fidelity Management
M1: Heterogeneous Fidelity Graphs
Each (entity, timepoint) pair maintains an independent resolution:

- Resolution is mutable: queries elevate it on demand
- System maintains both compressed and full representations
- Fidelity concentrates around high-centrality entities at critical timepoints
Token Economics
Uniform high fidelity: 100 entities × 10 timepoints × 50k tokens = 50M tokens
Heterogeneous fidelity (power-law): ~2.5M tokens → 95% reduction
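The arithmetic above can be checked with a back-of-envelope sketch. The power-law split below (how many cells stay at full fidelity) is an assumption chosen to land near the ~2.5M figure; only the 100 × 10 × 50k inputs come from the text.

```python
# Back-of-envelope version of the M1 token economics.
N_ENTITIES, N_TIMEPOINTS, FULL_TOKENS = 100, 10, 50_000

uniform = N_ENTITIES * N_TIMEPOINTS * FULL_TOKENS  # 50,000,000 tokens

hot_cells = 19          # assumed: a few (entity, timepoint) cells kept at full fidelity
tensor_tokens = 1_600   # TENSOR_ONLY cost, from the M6 compression example
heterogeneous = (hot_cells * FULL_TOKENS
                 + (N_ENTITIES * N_TIMEPOINTS - hot_cells) * tensor_tokens)

reduction = 1 - heterogeneous / uniform  # ≈ 0.95 under these assumed numbers
```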
M2: Progressive Training
Entity quality exists on a continuous spectrum determined by accumulated interaction:

- query_count > 3 → SCENE → GRAPH
- query_count > 5 → GRAPH → DIALOG
- query_count > 10 OR centrality > 0.8 → TRAINED
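These thresholds can be sketched as a single elevation function. The level ordering and the function shape are assumptions; the numeric thresholds mirror the list above.

```python
# Sketch of the M2 elevation rule (not the actual implementation).
LEVELS = ["SCENE", "GRAPH", "DIALOG", "TRAINED"]

def elevate(level: str, query_count: int, centrality: float) -> str:
    """Return the resolution an entity should hold after accumulated interaction."""
    if query_count > 10 or centrality > 0.8:
        return "TRAINED"
    if query_count > 5 and LEVELS.index(level) < LEVELS.index("DIALOG"):
        return "DIALOG"
    if query_count > 3 and level == "SCENE":
        return "GRAPH"
    return level

assert elevate("SCENE", query_count=4, centrality=0.1) == "GRAPH"
assert elevate("GRAPH", query_count=6, centrality=0.1) == "DIALOG"
assert elevate("GRAPH", query_count=4, centrality=0.9) == "TRAINED"
```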
Example: Dr. Okonkwo in Castaway Colony
Starts at SCENE (background doctor). After 3+ xenobiology queries (“Is lichen edible?”, “Toxicity profile?”), progressively elevates to DIALOG then TRAINED. Becomes most detailed entity in biosphere scenes. Quality tracks expertise demand.
M5: Query-Driven Lazy Resolution
Resolution decisions happen at query time, not simulation time.

Example: Navigator Jin Park
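A minimal sketch of the lazy-resolution idea, using Jin Park as the entity. The class, resolution names, and elevation mechanics here are illustrative assumptions, not the real API.

```python
# Sketch of M5: resolution is decided when a query arrives, not when the
# simulation is built.
RESOLUTIONS = ["TENSOR_ONLY", "SCENE", "DIALOG"]

class LazyEntity:
    def __init__(self, name: str):
        self.name = name
        self.resolution = "TENSOR_ONLY"  # every entity starts compressed

    def answer(self, query: str, needed: str) -> str:
        # Elevate only if this query needs more detail than we currently hold.
        if RESOLUTIONS.index(needed) > RESOLUTIONS.index(self.resolution):
            self.resolution = needed  # pay for detail at query time
        return f"{self.name}@{self.resolution}: {query}"

jin = LazyEntity("Navigator Jin Park")
assert jin.resolution == "TENSOR_ONLY"   # no cost before any query arrives
jin.answer("Plot a course through the debris field", needed="DIALOG")
assert jin.resolution == "DIALOG"        # elevated on demand
```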
M6: Timepoint Tensor Model (TTM) Compression
At TENSOR_ONLY resolution, entities are structured tensors.

Example: Kepler-442b Biosphere Compression
Alien ecosystem compressed as a TTM tensor with:

- context_vector = [bioluminescence_intensity, electromagnetic_sensitivity, growth_rate]
- biology_vector = [toxicity_index, nutrient_profile, symbiotic_relationships]

Size: ~1,600 tokens (97% compression from 50k)
Query: “How does flora respond to radio equipment?”
Reconstruction: the electromagnetic_sensitivity dimension reconstructs the relevant behavior without decompressing the entire biosphere

Dual Tensor Architecture & Synchronization
System maintains two representations:

- TTMTensor: trained, compressed storage. Scale: 0-1 for most values. Location: entity.tensor
- CognitiveTensor

Pillar 2: Temporal Reasoning
M17: Modal Temporal Causality
Five temporal modes, each defining what “consistency” means:

- FORWARD
- PORTAL
- BRANCHING
- CYCLICAL
- DIRECTORIAL
FORWARD: a standard causal DAG in which causes precede effects. Use case: corporate board meetings, historical timelines.
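The five modes can be written as a plain enum; below, a hypothetical edge check shows how a mode might gate consistency (only the FORWARD rule is stated in the text; the comments on the other modes are inferred from their names elsewhere in this document).

```python
# The five temporal modes as an enum, plus a sketch of a mode-gated check.
from enum import Enum, auto

class TemporalMode(Enum):
    FORWARD = auto()      # causes precede effects (causal DAG)
    PORTAL = auto()       # backward reasoning from fixed endpoints
    BRANCHING = auto()    # counterfactual timelines
    CYCLICAL = auto()     # recurring temporal structure
    DIRECTORIAL = auto()  # narrative-driven ordering

def is_valid_edge(mode: TemporalMode, t_cause: int, t_effect: int) -> bool:
    """Hypothetical consistency check; real validators differ per mode."""
    if mode is TemporalMode.FORWARD:
        return t_cause < t_effect  # strict temporal ordering
    return True  # other modes relax or redefine the check
```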
M7: Causal Temporal Chains
Timepoints form explicit causal chains. This prevents anachronisms structurally, not heuristically.
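A sketch of what "structural" prevention means: with an explicit parent pointer, "does A precede B" is a graph walk, not a heuristic. The field names here are assumptions.

```python
# Timepoints carrying explicit ancestry (illustrative, not the real schema).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Timepoint:
    id: str
    parent: Optional["Timepoint"] = None

def is_ancestor(earlier: Timepoint, later: Timepoint) -> bool:
    """Walk the causal chain backward from `later` looking for `earlier`."""
    node = later.parent
    while node is not None:
        if node.id == earlier.id:
            return True
        node = node.parent
    return False

day1 = Timepoint("day1")
day2 = Timepoint("day2", parent=day1)
day3 = Timepoint("day3", parent=day2)
# An event at day3 may reference day1 knowledge, never the reverse:
assert is_ancestor(day1, day3) and not is_ancestor(day3, day1)
```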
M8: Vertical Timepoint Expansion
Timepoints can be expanded vertically—adding detail within a moment:M12: Counterfactual Branching
Create alternate timelines from intervention points.

Example: Castaway Colony Day 7 Branches
Commander Tanaka’s decision spawns 3 branches:
- Branch A (Fortify & Wait): Conservative resources, focus shelter
- Branch B (Explore & Adapt): Send teams, discover flora + caves
- Branch C (Repair & Signal): All resources to beacon
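The Day 7 example above can be sketched as a fork operation: shared history is copied up to the intervention point, then each branch diverges. The Branch type and copy-on-branch behavior are illustrative assumptions.

```python
# Sketch of M12 branching from an intervention point.
from dataclasses import dataclass, field

@dataclass
class Branch:
    name: str
    intervention: str
    forked_from: str
    timeline: list = field(default_factory=list)

def fork(base_timeline: list, at: str, name: str, intervention: str) -> Branch:
    """Copy shared history up to the intervention point, then diverge."""
    shared = base_timeline[: base_timeline.index(at) + 1]
    return Branch(name, intervention, at, shared + [intervention])

base = ["day1", "day3", "day7"]
branches = [
    fork(base, "day7", "A", "fortify_and_wait"),
    fork(base, "day7", "B", "explore_and_adapt"),
    fork(base, "day7", "C", "repair_and_signal"),
]
# All three branches share history through day7, then diverge.
```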
M14: Circadian Activity Patterns
Entities have activity probabilities that vary with time of day.

Pillar 3: Knowledge Provenance
M3: Exposure Event Tracking
Knowledge acquisition is logged as exposure events:

entity.knowledge_state ⊆ {e.information for e in entity.exposure_events where e.timestamp ≤ query_timestamp}
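The invariant above translates almost directly into code: an entity's knowledge at query time is the set of information from exposure events at or before that time. The ExposureEvent fields mirror the formula; nothing beyond it is implied.

```python
# The M3 invariant, written out as executable set comprehension.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExposureEvent:
    information: str
    timestamp: int

def knowledge_state(exposure_events, query_timestamp: int) -> set:
    """Everything the entity can know as of query_timestamp."""
    return {e.information for e in exposure_events if e.timestamp <= query_timestamp}

events = [ExposureEvent("budget_approved", 3), ExposureEvent("prototype_failed", 7)]
assert knowledge_state(events, 5) == {"budget_approved"}   # day-7 news not yet known
assert knowledge_state(events, 9) == {"budget_approved", "prototype_failed"}
```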
Read more: Knowledge Provenance →
M4: Constraint Enforcement
Five validators enforce consistency using conservation-law metaphors:

- Information Conservation (Shannon): knowledge state cannot exceed exposure history
- Energy Budget (Thermodynamic): bounded cognitive/physical energy per timepoint
- Behavioral Inertia: personality persists; sudden changes need justification
- Biological Constraints: physical limitations constrain behavior
- Network Flow: information propagates along relationship edges
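As one concrete instance, the Information Conservation validator reduces to a set-difference check: any fact "known" without a logged exposure is a violation. The return shape below is a hypothetical choice.

```python
# Sketch of the Information Conservation validator (one of the five above).
def validate_information_conservation(knowledge: set, exposures: set) -> list:
    """Return the facts an entity 'knows' without any logged exposure event."""
    leaked = knowledge - exposures
    return sorted(leaked)  # an empty list means the constraint holds

assert validate_information_conservation({"a", "b"}, {"a", "b", "c"}) == []
assert validate_information_conservation({"a", "x"}, {"a"}) == ["x"]
```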
M19: Knowledge Extraction Agent
LLM-based extraction replaces naive capitalization heuristics:

- ✅ “Board approved $2M budget increase” (decision)
- ✅ “Prototype failed last week” (revelation)
- ❌ “Hello”, “Thanks” (greetings)
- ❌ “We’ll”, “I’ve” (contractions)
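The contrast can be illustrated with a stub: the old heuristic over-accepts anything capitalized, while the agent classifies semantically. `classify()` below is a keyword stand-in for the LLM call, not the real implementation; the category names follow the examples above.

```python
# Illustrative contrast: capitalization heuristic vs. semantic classification.
def naive_heuristic(utterance: str) -> bool:
    # Old approach: "looks important if it starts with a capital" — over-accepts.
    return utterance[:1].isupper()

def classify(utterance: str) -> str:
    """Keyword stand-in for the M19 LLM extraction agent."""
    lowered = utterance.lower()
    if "approved" in lowered or "decided" in lowered:
        return "decision"
    if "failed" in lowered or "discovered" in lowered:
        return "revelation"
    return "ignore"  # greetings, contractions, filler

assert naive_heuristic("Hello")  # the heuristic wrongly keeps greetings
assert classify("Hello") == "ignore"
assert classify("Board approved $2M budget increase") == "decision"
assert classify("Prototype failed last week") == "revelation"
```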
Pillar 4: Entity Simulation
M9: On-Demand Entity Generation
Queries may reference entities that don’t exist yet.

Example: Castaway Colony Cargo Bay

Query: “What did crew find in cargo bay 3?”
Result: System generates inventory entities on demand: supply crates with plausible damage states and utility values, causally consistent with the research vessel scenario.
M10: Scene-Level Entity Sets
Scenes have entity types that influence individual behavior:

- EnvironmentEntity: physical space (location, capacity, lighting, weather, acoustics)
- AtmosphereEntity
- CrowdEntity
M11: Dialog Synthesis
Per-character turn generation via LangGraph steering:

Steering Node
Selects next speaker, evaluates narrative progress, injects mood shifts, can suppress or end dialog
Character Node
Generates ONE turn using:
- PersonaParams (temperature, top_p, max_tokens from entity state)
- FourthWallContext (two-layer: back = voice shaping, front = content)
Quality Gate Node
Surface heuristics first (cheap), then semantic evaluation:
- Narrative advancement
- Conflict specificity
- Voice distinctiveness
Params2Persona: Entity State → LLM Parameters
Fourth Wall Context (Two-Layer)
- Back Layer (shapes HOW): true emotional state, withheld knowledge, suppressed impulses
- Front Layer (character knows WHAT): filtered knowledge, natural-language relationships
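The Params2Persona mapping above can be sketched as a function from entity state to sampling parameters. The specific formula below is invented for illustration; only the direction (entity state shapes temperature, top_p, max_tokens) comes from the text.

```python
# Sketch of Params2Persona: entity state → LLM sampling parameters.
from dataclasses import dataclass

@dataclass
class PersonaParams:
    temperature: float
    top_p: float
    max_tokens: int

def params_from_state(arousal: float, verbosity: float) -> PersonaParams:
    # Agitated characters sample hotter; terse ones get a shorter token budget.
    return PersonaParams(
        temperature=round(0.5 + 0.6 * arousal, 2),
        top_p=0.9,
        max_tokens=int(60 + 200 * verbosity),
    )

calm = params_from_state(arousal=0.1, verbosity=0.2)
agitated = params_from_state(arousal=0.9, verbosity=0.2)
assert agitated.temperature > calm.temperature
```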
M13: Multi-Entity Synthesis
Relationships evolve and can be analyzed.

M15: Entity Prospection (Extended Inner Life)
Entities model their own futures:

- Episodic memory: LLM generates personality-filtered memories
- Rumination: Topics addressed lose intensity; unresolved intensify
- Withheld knowledge: Persists, feeds back into Fourth Wall
- Suppressed impulses: feed into future dialog tension
M16: Animistic Entity Extension
Objects, institutions, and places can have agency. _filter_dialog_participants() uses thresholds:
| Entity Type | Min Level | Speaking Mode |
|---|---|---|
| human | 0 | Normal dialog |
| animal | 1 | Behavioral narration (3rd person) |
| building | 2 | Environmental narration (sensory) |
| abstract | 3 | Collective consciousness |
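The table translates directly into a filter. The entity shape and function signature below are assumptions (a sketch of _filter_dialog_participants, not its actual interface); the minimum levels are taken verbatim from the table.

```python
# Sketch of the M16 agency-threshold filter.
MIN_AGENCY = {"human": 0, "animal": 1, "building": 2, "abstract": 3}

def filter_dialog_participants(entities, agency_level: int):
    """Keep entities whose type is admitted at the scene's agency level."""
    return [e for e in entities if agency_level >= MIN_AGENCY[e["type"]]]

scene = [{"name": "Tanaka", "type": "human"},
         {"name": "the colony dome", "type": "building"},
         {"name": "the colony itself", "type": "abstract"}]

assert [e["name"] for e in filter_dialog_participants(scene, 0)] == ["Tanaka"]
assert len(filter_dialog_participants(scene, 3)) == 3  # everything may speak
```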
Pillar 5: Infrastructure
M18: Intelligent Model Selection
Capability-based model selection: different actions need different models. There are 16 action types and 15 capabilities. The action types:
- ENTITY_POPULATION
- DIALOG_SYNTHESIS
- TEMPORAL_REASONING
- COUNTERFACTUAL_PREDICTION
- KNOWLEDGE_VALIDATION
- SCENE_GENERATION
- RELATIONSHIP_ANALYSIS
- PROSPECTION
- ANIMISTIC_BEHAVIOR
- PORTAL_BACKWARD_REASONING
- PORTAL_PATH_SCORING
- CONFIG_GENERATION
- TENSOR_COMPRESSION
- VALIDATION
- SUMMARIZATION
- GENERAL
| Model | Context | Strengths | License |
|---|---|---|---|
| Llama 3.1 8B | 128k | Fast, cost-efficient | Llama 3.1 |
| Llama 3.1 70B | 128k | Balanced, dialog | Llama 3.1 |
| Llama 3.1 405B | 128k | Highest quality | Llama 3.1 |
| Llama 4 Scout | 512k | Multimodal, huge context | Llama 4 |
| Qwen 2.5 7B | 32k | JSON, code | Qwen |
| Qwen 2.5 72B | 128k | Structured output | Qwen |
| QwQ 32B | 32k | Math, logical reasoning | Qwen |
| DeepSeek Chat | 64k | Balanced, analytical | MIT |
| DeepSeek R1 | 64k | Deep reasoning, math | MIT |
| Mistral 7B | 32k | Fast, creative | Apache 2.0 |
| Mixtral 8x7B | 32k | Balanced | Apache 2.0 |
| Mixtral 8x22B | 64k | High quality | Apache 2.0 |
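A minimal sketch of the for_training_data license filter described in this section. The model entries mirror rows of the table above; the selection function itself is an assumption.

```python
# Sketch of license-aware candidate filtering for training-data generation.
PERMISSIVE = {"MIT", "Apache 2.0"}

MODELS = [
    {"name": "Llama 3.1 70B", "license": "Llama 3.1"},
    {"name": "DeepSeek R1", "license": "MIT"},
    {"name": "Mixtral 8x22B", "license": "Apache 2.0"},
]

def select_candidates(models, for_training_data: bool = False):
    """Restrict to permissively licensed models when outputs will train models."""
    if for_training_data:
        return [m for m in models if m["license"] in PERMISSIVE]
    return list(models)

names = [m["name"] for m in select_candidates(MODELS, for_training_data=True)]
assert names == ["DeepSeek R1", "Mixtral 8x22B"]  # Llama-licensed model excluded
```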
Training data filter: when for_training_data=True, the selector automatically filters to MIT/Apache-2.0 models. Llama outputs can only train Llama models per license restrictions.

Mechanism Map

| ID | Name | Pillar | Key Affordance |
|---|---|---|---|
| M1 | Heterogeneous Fidelity Graphs | Fidelity | Power-law resolution distribution |
| M2 | Progressive Training | Fidelity | Quality as continuous spectrum |
| M5 | Query-Driven Lazy Resolution | Fidelity | Never pay for unused detail |
| M6 | TTM Tensor Compression | Fidelity | 97% compression with structure |
| M7 | Causal Temporal Chains | Temporal | Explicit timepoint ancestry |
| M8 | Vertical Timepoint Expansion | Temporal | Detail within moments |
| M12 | Counterfactual Branching | Temporal | Alternate timeline propagation |
| M14 | Circadian Activity Patterns | Temporal | Time-of-day behavior modulation |
| M17 | Modal Temporal Causality | Temporal | 5 causality regimes |
| M3 | Exposure Event Tracking | Provenance | Logged knowledge acquisition |
| M4 | Constraint Enforcement | Provenance | Conservation-law validation |
| M19 | Knowledge Extraction Agent | Provenance | LLM-based semantic extraction |
| M9 | On-Demand Entity Generation | Entity | Generate missing entities |
| M10 | Scene-Level Entity Sets | Entity | Environment/atmosphere/crowd |
| M11 | Dialog Synthesis | Entity | Per-character turn generation |
| M13 | Multi-Entity Synthesis | Entity | Relationship evolution tracking |
| M15 | Entity Prospection | Entity | Extended inner life |
| M16 | Animistic Entity Extension | Entity | Object/institution agency |
| M18 | Intelligent Model Selection | Infrastructure | Capability-based routing |
Quick Reference
Reduce Costs?

Use M1 (Heterogeneous Fidelity) + M5 (Lazy Resolution) + M6 (TTM Compression). Allocate detail where queries land; compress background entities.
Prevent Anachronisms?

Use M3 (Exposure Events) + M4 (Constraint Enforcement) + M7 (Causal Chains). Track knowledge provenance; validate temporal ordering.
Test Decisions?

Use M12 (Counterfactual Branching) with BRANCHING mode (M17). Create alternate timelines from intervention points.
Strategic Planning?

Use PORTAL mode (M17) with M7 (Causal Chains). Backward reasoning from fixed endpoints.
Rich Dialog?

Use M11 (Dialog Synthesis) + M15 (Prospection) + M19 (Knowledge Extraction). Per-character generation with inner life and semantic knowledge tracking.
Multiple Timelines?

Use M8 (Vertical Expansion) + M12 (Branching) + M17 (Modal Causality). Combine vertical detail and horizontal counterfactuals.
Implementation Status
All 19 mechanisms are fully implemented and tested:

- ✅ 142 ADPRS tests (124 unit + 18 integration)
- ✅ 21 verified templates across 5 temporal modes
- ✅ All showcase scenarios passing (including castaway_colony_branching, the full 19-mechanism showcase)
- ✅ Model selection with 12 open-source models
- ✅ TDF export format for suite interoperability
Costs: $1.00 per run. All 21 templates verified February 16, 2026.
Next Steps
- SNAG vs RAG: how SNAG grounds LLMs in social graphs
- Temporal Modes: 5 ways to reason about causality
- Fidelity Management: 95% cost reduction deep dive
- Knowledge Provenance: exposure events and causal audit
- Quick Start: run your first simulation
- API Reference: complete API documentation

