
Overview

Timepoint Pro’s 19 mechanisms group into five conceptual pillars:

Fidelity Management

Allocate detail where queries land. Mechanisms: M1, M2, M5, M6

Temporal Reasoning

Multiple notions of time and causality. Mechanisms: M7, M8, M12, M14, M17

Knowledge Provenance

Track who knows what, from whom, and when. Mechanisms: M3, M4, M19

Entity Simulation

Generate and synthesize entity behavior. Mechanisms: M9, M10, M11, M13, M15, M16

Infrastructure

Model selection and cost optimization. Mechanisms: M18
The insight: These ideas are the value; the mechanisms are derivable implementations. Full specification in MECHANICS.md.

Pillar 1: Fidelity Management

M1: Heterogeneous Fidelity Graphs

Each (entity, timepoint) pair maintains independent resolution:
Resolution Levels (ordered):
TENSOR_ONLY < SCENE < GRAPH < DIALOG < TRAINED

Example Graph:
Timepoint(T0, "Constitutional Convention")
├── Entity("washington", resolution=TRAINED, ~50k tokens)
├── Entity("madison", resolution=DIALOG, ~10k tokens)  
├── Entity("attendee_47", resolution=TENSOR_ONLY, ~200 tokens)
└── causal_link → Timepoint(T1)
Key properties:
  • Resolution is mutable: queries elevate on-demand
  • System maintains both compressed and full representations
  • Fidelity concentrates around high-centrality entities at critical timepoints
  • Uniform high fidelity: 100 entities × 10 timepoints × 50k = 50M tokens
  • Heterogeneous fidelity (power-law): ~2.5M tokens → 95% reduction
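The arithmetic behind that claim can be sketched directly. The per-level token counts come from this page; the power-law distribution below (3 TRAINED, 4 DIALOG, 8 GRAPH, 85 TENSOR_ONLY entities per timepoint) is an illustrative assumption, not the system's actual allocation:

```python
# Hypothetical token budget comparison: uniform vs. power-law fidelity.
TOKENS_PER_LEVEL = {"TENSOR_ONLY": 200, "GRAPH": 5_000, "DIALOG": 10_000, "TRAINED": 50_000}

ENTITIES, TIMEPOINTS = 100, 10

# Uniform: every entity fully TRAINED at every timepoint.
uniform = ENTITIES * TIMEPOINTS * TOKENS_PER_LEVEL["TRAINED"]

# Power-law: a few high-centrality entities get detail; the rest stay compressed.
per_timepoint = (3 * TOKENS_PER_LEVEL["TRAINED"]
                 + 4 * TOKENS_PER_LEVEL["DIALOG"]
                 + 8 * TOKENS_PER_LEVEL["GRAPH"]
                 + 85 * TOKENS_PER_LEVEL["TENSOR_ONLY"])
heterogeneous = per_timepoint * TIMEPOINTS  # 2,470,000 tokens, ~95% less
```

Any distribution that keeps most entities at TENSOR_ONLY lands in the same ballpark; the savings come from the tail, not the exact split.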
Read more: Fidelity Management →

M2: Progressive Training

Entity quality exists on a continuous spectrum determined by accumulated interaction:
EntityMetadata:
    query_count: int              # Times queried
    training_iterations: int      # LLM elaboration passes
    eigenvector_centrality: float # Graph importance (0-1)
    resolution_level: ResolutionLevel
    last_accessed: datetime
Elevation triggers:
  • query_count > 3 → SCENE → GRAPH
  • query_count > 5 → GRAPH → DIALOG
  • query_count > 10 OR centrality > 0.8 → TRAINED
Example: a background doctor starts at SCENE. After 3+ xenobiology queries (“Is lichen edible?”, “Toxicity profile?”), the entity progressively elevates to DIALOG, then TRAINED, becoming the most detailed entity in biosphere scenes. Quality tracks expertise demand.
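The elevation triggers above can be sketched as a pure function. The `Resolution` enum values and the `elevate` name are illustrative, not Timepoint Pro's actual API:

```python
from enum import IntEnum

class Resolution(IntEnum):
    TENSOR_ONLY = 0
    SCENE = 1
    GRAPH = 2
    DIALOG = 3
    TRAINED = 4

def elevate(current: Resolution, query_count: int, centrality: float) -> Resolution:
    # Apply the strongest matching trigger; resolution never demotes.
    if query_count > 10 or centrality > 0.8:
        return max(current, Resolution.TRAINED)
    if query_count > 5:
        return max(current, Resolution.DIALOG)
    if query_count > 3:
        return max(current, Resolution.GRAPH)
    return current
```

Because the levels are ordered, `max()` makes elevation monotonic: a frequently queried entity can rise but never silently lose fidelity.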

M5: Query-Driven Lazy Resolution

Resolution decisions happen at query time, not simulation time:
def decide_resolution(entity, timepoint, query_history, thresholds):
    if entity.query_count > thresholds.frequent_access:
        return max(entity.resolution, DIALOG)
    if entity.eigenvector_centrality > thresholds.central_node:
        return max(entity.resolution, GRAPH)
    if timepoint.importance_score > thresholds.critical_event:
        return max(entity.resolution, SCENE)
    return TENSOR_ONLY
Key principle: Never pay for detail nobody asked about.
  • Days 1-6: TENSOR_ONLY (~200 tokens/timepoint), inactive
  • Day 7: a query about pre-crash beacon data triggers elevation to DIALOG
  • Days 7-8: high-fidelity participation (10k tokens/timepoint)
  • Savings: 400k tokens → 21k tokens (95% reduction)

M6: Timepoint Tensor Model (TTM) Compression

At TENSOR_ONLY resolution, entities are structured tensors:
TTMTensor:
    context_vector: np.ndarray   # Knowledge state (8 dims)
    biology_vector: np.ndarray   # Physical attributes (4 dims)
    behavior_vector: np.ndarray  # Personality/decision patterns (8 dims)

# Context vector layout:
# [0]=knowledge, [1]=valence, [2]=arousal, [3]=energy,
# [4]=confidence, [5]=patience, [6]=risk, [7]=social

# Compression ratios:
# Full entity: ~50k tokens
# TTM representation: ~1,600 tokens (97% compression)
Structural preservation: Tensors preserve enough structure for causal validation and can be re-expanded on query.
Example: an alien ecosystem compressed as a TTM tensor with context_vector=[bioluminescence_intensity, electromagnetic_sensitivity, growth_rate] and biology_vector=[toxicity_index, nutrient_profile, symbiotic_relationships].
  • Size: ~1,600 tokens (97% compression from 50k)
  • Query: “How does flora respond to radio equipment?”
  • Reconstruction: the electromagnetic_sensitivity dimension reconstructs the relevant behavior without decompressing the entire biosphere
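A minimal sketch of the TTM layout using the dimension names above. The `context()` accessor is hypothetical, shown only to illustrate reading a single named dimension without expanding the entity:

```python
import numpy as np
from dataclasses import dataclass, field

# Context-vector layout as documented for M6.
CONTEXT_DIMS = ["knowledge", "valence", "arousal", "energy",
                "confidence", "patience", "risk", "social"]

@dataclass
class TTMTensor:
    context_vector: np.ndarray = field(default_factory=lambda: np.zeros(8))
    biology_vector: np.ndarray = field(default_factory=lambda: np.zeros(4))
    behavior_vector: np.ndarray = field(default_factory=lambda: np.zeros(8))

    def context(self, name: str) -> float:
        # Read one named dimension; no decompression of the full entity needed.
        return float(self.context_vector[CONTEXT_DIMS.index(name)])

t = TTMTensor()
t.context_vector[CONTEXT_DIMS.index("arousal")] = 0.7
```

This is the property the flora example relies on: a query about radio equipment only needs one dimension, not the whole 50k-token entity.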

Dual Tensor Architecture & Synchronization

System maintains two representations:
Purpose: trained, compressed storage. Scale: 0-1 for most values. Location: entity.tensor
Bidirectional sync:
Entity Load → TTM→Cog Sync → Dialog → Updates → Cog→TTM Sync → Persist
              (pretraining)                      (backprop)
Read more: TTM Tensors →

Pillar 2: Temporal Reasoning

M17: Modal Temporal Causality

Five temporal modes, each defining what “consistency” means:
Standard causal DAG: causes precede effects. Use case: corporate board meetings, historical timelines.
Read more: Temporal Modes →

M7: Causal Temporal Chains

Timepoints form explicit causal chains:
Timepoint:
    timepoint_id: str
    timestamp: datetime
    causal_parent: Optional[str]  # Explicit link to causing timepoint
    event_description: str
    entities_present: List[str]
    importance_score: float
Validation constraint: Entity at timepoint T can only reference information from timepoints T’ where a causal path exists from T’ → T.
This prevents anachronisms structurally, not heuristically.
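The structural check reduces to walking `causal_parent` links. A minimal sketch with a hypothetical in-memory store mapping timepoint id to its causal parent:

```python
from typing import Dict, Optional

# Hypothetical store: timepoint_id -> causal_parent (None for roots).
parents: Dict[str, Optional[str]] = {
    "T0": None,
    "T1": "T0",
    "T2": "T1",
    "T2b": "T0",  # a sibling branch off T0
}

def causal_path_exists(src: str, dst: str) -> bool:
    # Walk causal_parent links back from dst; src must be an ancestor (or dst itself).
    cur: Optional[str] = dst
    while cur is not None:
        if cur == src:
            return True
        cur = parents[cur]
    return False
```

An entity at T2b can therefore reference T0 but not T1, even though T1 is earlier in wall-clock time: ancestry, not timestamps, is what the validator consults.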

M8: Vertical Timepoint Expansion

Timepoints can be expanded vertically—adding detail within a moment:
"Board Meeting" timepoint expands into:
├── Arrival & small talk
├── Opening remarks (CEO)
├── Financial report (CFO)
├── Key debate (product strategy)
├── Decision & vote
└── Aftermath discussions

All causally linked but temporally simultaneous
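One way to implement that property, assuming the `Timepoint` shape from M7: sub-timepoints copy the parent's timestamp (temporally simultaneous) while chaining `causal_parent` links (causally ordered). The `expand_vertically` helper is illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class Timepoint:
    timepoint_id: str
    timestamp: datetime
    causal_parent: Optional[str]
    event_description: str

def expand_vertically(tp: Timepoint, phases: List[str]) -> List[Timepoint]:
    # Sub-timepoints share tp's timestamp but form their own causal chain.
    subs: List[Timepoint] = []
    parent = tp.timepoint_id
    for i, phase in enumerate(phases):
        sub = Timepoint(f"{tp.timepoint_id}.{i}", tp.timestamp, parent, phase)
        subs.append(sub)
        parent = sub.timepoint_id
    return subs

meeting = Timepoint("T3", datetime(2026, 2, 16, 9, 0), "T2", "Board Meeting")
subs = expand_vertically(meeting, ["Arrival", "Opening remarks", "Vote"])
```

The M7 validator still works unchanged inside the moment: "Vote" can reference "Opening remarks" but not vice versa.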

M12: Counterfactual Branching

Create alternate timelines from intervention points:
def create_counterfactual_branch(parent_timeline, intervention_point, intervention):
    branch = Timeline(parent_id=parent_timeline.id, branch_point=intervention_point)
    
    # Copy timepoints before intervention
    for tp in parent_timeline.get_timepoints_before(intervention_point):
        branch.add_timepoint(tp.deep_copy())
    
    # Apply intervention
    branch_tp = parent_timeline.get_timepoint(intervention_point).deep_copy()
    apply_intervention(branch_tp, intervention)
    branch.add_timepoint(branch_tp)
    
    # Propagate causal effects forward
    propagate_causal_effects(branch, intervention_point)
    
    return branch
Commander Tanaka’s decision spawns 3 branches:
  • Branch A (Fortify & Wait): Conservative resources, focus shelter
  • Branch B (Explore & Adapt): Send teams, discover flora + caves
  • Branch C (Repair & Signal): All resources to beacon
Each branch internally consistent: Branch B can’t use cave from Branch A’s timeline
Read more: Branching Mode →

M14: Circadian Activity Patterns

Entities have activity probabilities that vary with time:
def get_activity_probability(hour: int, activity: str) -> float:
    probability_map = {
        "sleep": lambda h: 0.95 if 0 <= h < 6 else 0.05,
        "work": lambda h: 0.7 if 9 <= h < 17 else 0.1,
        "social": lambda h: 0.6 if 18 <= h < 23 else 0.2,
    }
    return probability_map.get(activity, lambda h: 0.0)(hour)
Constrains entity behavior plausibly without explicit scheduling.

Pillar 3: Knowledge Provenance

M3: Exposure Event Tracking

Knowledge acquisition is logged as exposure events:
ExposureEvent:
    entity_id: str
    event_type: EventType  # witnessed, learned, told, experienced
    information: str       # The knowledge item
    source: Optional[str]  # Another entity or external source
    timestamp: datetime
    confidence: float      # 0.0-1.0
    timepoint_id: str
Validation constraint: entity.knowledge_state ⊆ {e.information for e in entity.exposure_events where e.timestamp ≤ query_timestamp}
Iron law: An entity cannot know something without a recorded exposure event.
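The iron law reduces to a set-subtraction check. A minimal sketch with exposure events as plain dicts (the real system uses the `ExposureEvent` model above):

```python
from datetime import datetime

class ValidationError(Exception):
    pass

def validate_knowledge_state(knowledge_state, exposure_events, query_ts: datetime):
    # Knowledge must be a subset of information exposed at or before query_ts.
    allowed = {e["information"] for e in exposure_events if e["timestamp"] <= query_ts}
    violations = set(knowledge_state) - allowed
    if violations:
        raise ValidationError(f"Unexposed knowledge: {sorted(violations)}")
```

Queried before the exposure event's timestamp, the same knowledge item fails validation; queried after, it passes. That is the whole provenance guarantee in one inequality.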
Read more: Knowledge Provenance →

M4: Constraint Enforcement

Five validators enforce consistency using conservation-law metaphors:

1. Knowledge conservation: knowledge state cannot exceed exposure history

    violations = entity.knowledge_state - {e.information for e in entity.exposure_events}

2. Energy conservation: bounded cognitive/physical energy per timepoint

    total_cost = sum(action.energy_cost for action in actions)
    valid = total_cost <= entity.energy_budget

3. Trait continuity: personality persists; sudden changes need justification

    delta = norm(new_traits - old_traits)
    max_change = 0.1 * timespan.days

4. Physical constraints: physical limitations constrain behavior

    if action.requires_mobility and entity.mobility < 0.3:
        raise ValidationError

5. Information flow: information propagates along relationship edges

    if not nx.has_path(graph, source, target):
        raise ValidationError

M19: Knowledge Extraction Agent

LLM-based extraction replaces naive capitalization heuristics:
KnowledgeItem:
    content: str                # Complete semantic unit
    speaker: str                # Entity who communicated
    listeners: List[str]        # Entities who received it
    category: str               # fact, decision, opinion, plan, revelation
    confidence: float           # 0.0-1.0
    causal_relevance: float     # 0.0-1.0
What gets extracted:
  • ✅ “Board approved $2M budget increase” (decision)
  • ✅ “Prototype failed last week” (revelation)
  • ❌ “Hello”, “Thanks” (greetings)
  • ❌ “We’ll”, “I’ve” (contractions)
RAG-aware: Agent receives causal context to avoid redundant extraction and recognize novel information. Read more: M19 Knowledge Extraction →

Pillar 4: Entity Simulation

M9: On-Demand Entity Generation

Queries may reference entities that don’t exist yet:
def detect_entity_gap(query, existing_entities):
    referenced = extract_entity_names(query)
    missing = referenced - set(existing_entities)
    return missing

def generate_entity_on_demand(name, context):
    # LLM generates plausible entity given scenario context
Query: “What did the crew find in cargo bay 3?”
Result: the system generates inventory entities on demand (supply crates with plausible damage states and utility values), causally consistent with the research-vessel scenario.

M10: Scene-Level Entity Sets

Scenes have entity types that influence individual behavior:
Physical space: location, capacity, lighting, weather, acoustics
Individual entity behavior synthesized in context of scene-level state.

M11: Dialog Synthesis

Per-character turn generation via LangGraph steering:
steering_node → character_node → quality_gate_node → conditional
                                                       ├→ steering (continue)
                                                       ├→ END (complete)
                                                       └→ character (retry)
steering_node: selects the next speaker, evaluates narrative progress, injects mood shifts, and can suppress or end the dialog.
character_node: generates ONE turn using:
  • PersonaParams (temperature, top_p, max_tokens from entity state)
  • FourthWallContext (two-layer: back=voice shaping, front=content)
quality_gate_node: surface heuristics first (cheap), then semantic evaluation:
  • Narrative advancement
  • Conflict specificity
  • Voice distinctiveness

Params2Persona: Entity State → LLM Parameters

| Parameter | Source | Effect |
|-----------|--------|--------|
| temperature | arousal × energy | High arousal → varied (~1.1) |
| top_p | arousal | Agitated → focused vocabulary |
| max_tokens | turn × energy | Later turns + low energy → shorter |
| frequency_penalty | behavior_vector[5] | Vocabulary richness |
| presence_penalty | behavior_vector[6] | Novelty seeking |
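A hedged sketch of such a mapping. The coefficients below are invented for illustration and are not Timepoint Pro's actual Params2Persona values; only the directions (high arousal raises temperature, low energy shortens turns) come from the table above:

```python
def persona_params(arousal: float, energy: float, turn_index: int, behavior: list) -> dict:
    # Illustrative coefficients only; directions match the Params2Persona table.
    return {
        "temperature": round(min(1.2, 0.5 + 0.6 * arousal * energy), 2),  # high arousal -> varied
        "top_p": round(max(0.5, 1.0 - 0.4 * arousal), 2),                 # agitated -> focused vocab
        "max_tokens": max(40, int(300 * energy - 10 * turn_index)),       # tired, late turns -> shorter
        "frequency_penalty": behavior[5],                                 # vocabulary richness
        "presence_penalty": behavior[6],                                  # novelty seeking
    }
```

At full arousal and energy this yields temperature ~1.1, matching the table's "varied" regime; a depleted entity on a late turn bottoms out at the 40-token floor.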

Fourth Wall Context (Two-Layer)

  • Back Layer (shapes HOW): true emotional state, withheld knowledge, suppressed impulses
  • Front Layer (character knows WHAT): filtered knowledge, natural-language relationships
Knowledge limits scale with resolution: TENSOR=5 items, SCENE=8, GRAPH=12, DIALOG=16, TRAINED=20.
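Those limits amount to truncating a ranked knowledge list before it reaches the front layer. A trivial sketch; the `front_layer_items` name is hypothetical:

```python
# Per-resolution caps on how many knowledge items reach the front layer.
KNOWLEDGE_LIMITS = {"TENSOR_ONLY": 5, "SCENE": 8, "GRAPH": 12, "DIALOG": 16, "TRAINED": 20}

def front_layer_items(ranked_knowledge: list, resolution: str) -> list:
    # The front layer only exposes the top-ranked items the character may know.
    return ranked_knowledge[:KNOWLEDGE_LIMITS[resolution]]
```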

M13: Multi-Entity Synthesis

Relationships evolve and can be analyzed:
RelationshipMetrics:
    shared_knowledge: Set[str]
    belief_alignment: float
    interaction_count: int
    trust_level: float
    power_dynamic: float

def analyze_relationship_evolution(entity_a, entity_b, timespan):
    # Track metric changes over timepoints
    
def detect_contradictions(entities, timepoint):
    # Find belief conflicts

M15: Entity Prospection (Extended Inner Life)

Entities model their own futures:
ProspectiveState:
    entity_id: str
    forecast_horizon: timedelta
    expectations: List[Expectation]
    contingency_plans: Dict[str, List[Action]]
    anxiety_level: float
    # Extended prospection (post-dialog)
    withheld_knowledge: List[Dict]       # Chose not to say
    suppressed_impulses: List[Dict]      # Wanted but held back
    episodic_memory: List[Dict]          # Personality-filtered memories
    rumination_topics: List[Dict]        # Recurring concerns
Post-dialog prospection updates inner life:
  1. Episodic memory: LLM generates personality-filtered memories
  2. Rumination: Topics addressed lose intensity; unresolved intensify
  3. Withheld knowledge: Persists, feeds back into Fourth Wall
  4. Suppressed impulses: Feeds into future dialog tension
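Step 2 (rumination) can be sketched as a simple intensity update; the deltas and the drop-at-zero rule below are illustrative assumptions, not the system's actual constants:

```python
def update_rumination(topics: list, addressed: set) -> list:
    # Topics addressed in dialog lose intensity; unresolved ones intensify.
    updated = []
    for t in topics:
        delta = -0.3 if t["topic"] in addressed else 0.1
        intensity = min(1.0, max(0.0, t["intensity"] + delta))
        if intensity > 0.0:  # fully resolved topics drop out of inner life
            updated.append({"topic": t["topic"], "intensity": intensity})
    return updated
```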

M16: Animistic Entity Extension

Objects, institutions, places can have agency:
class AnimismLevel:
    0: Only humans
    1: Humans + animals/buildings
    2: All objects/organisms  
    3: Abstract concepts
    4: Adaptive (AnyEntity)
    5: Spiritual (KamiEntity)
    6: AI agents
Example: a conference room “wants” productive meetings; a codebase “resists” certain changes.

Dialog enforcement: _filter_dialog_participants() uses these thresholds:

| Entity Type | Min Level | Speaking Mode |
|-------------|-----------|---------------|
| human | 0 | Normal dialog |
| animal | 1 | Behavioral narration (3rd person) |
| building | 2 | Environmental narration (sensory) |
| abstract | 3 | Collective consciousness |

Pillar 5: Infrastructure

M18: Intelligent Model Selection

Capability-based model selection—different actions need different models:
  • ENTITY_POPULATION
  • DIALOG_SYNTHESIS
  • TEMPORAL_REASONING
  • COUNTERFACTUAL_PREDICTION
  • KNOWLEDGE_VALIDATION
  • SCENE_GENERATION
  • RELATIONSHIP_ANALYSIS
  • PROSPECTION
  • ANIMISTIC_BEHAVIOR
  • PORTAL_BACKWARD_REASONING
  • PORTAL_PATH_SCORING
  • CONFIG_GENERATION
  • TENSOR_COMPRESSION
  • VALIDATION
  • SUMMARIZATION
  • GENERAL
Model Registry (open-source only, licenses permit commercial synthetic data):
| Model | Context | Strengths | License |
|-------|---------|-----------|---------|
| Llama 3.1 8B | 128k | Fast, cost-efficient | Llama 3.1 |
| Llama 3.1 70B | 128k | Balanced, dialog | Llama 3.1 |
| Llama 3.1 405B | 128k | Highest quality | Llama 3.1 |
| Llama 4 Scout | 512k | Multimodal, huge context | Llama 4 |
| Qwen 2.5 7B | 32k | JSON, code | Qwen |
| Qwen 2.5 72B | 128k | Structured output | Qwen |
| QwQ 32B | 32k | Math, logical reasoning | Qwen |
| DeepSeek Chat | 64k | Balanced, analytical | MIT |
| DeepSeek R1 | 64k | Deep reasoning, math | MIT |
| Mistral 7B | 32k | Fast, creative | Apache 2.0 |
| Mixtral 8x7B | 32k | Balanced | Apache 2.0 |
| Mixtral 8x22B | 64k | High quality | Apache 2.0 |
Automatic selection:
action = ActionType.DIALOG_SYNTHESIS
model = selector.select_model(action, context_size=5000)
# → Returns: meta-llama/llama-3.1-70b-instruct (dialog strength)

action = ActionType.PORTAL_PATH_SCORING  
model = selector.select_model(action, context_size=15000)
# → Returns: deepseek/deepseek-r1 (reasoning + MIT license)
Training data filter: When for_training_data=True, selector automatically filters to MIT/Apache-2.0 models. Llama outputs can only train Llama models per license restrictions.
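That filter can be sketched as a license allowlist; the registry shape and `candidate_models` name below are assumptions for illustration:

```python
# Licenses whose outputs may train any model (per the filter described above).
PERMISSIVE_FOR_TRAINING = {"MIT", "Apache 2.0"}

def candidate_models(registry: list, for_training_data: bool = False) -> list:
    # When generating synthetic training data, exclude models whose licenses
    # restrict downstream training use (e.g. Llama outputs -> Llama models only).
    if not for_training_data:
        return registry
    return [m for m in registry if m["license"] in PERMISSIVE_FOR_TRAINING]
```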

Mechanism Map

Visual map of 19 mechanisms
| ID | Name | Pillar | Key Affordance |
|----|------|--------|----------------|
| M1 | Heterogeneous Fidelity Graphs | Fidelity | Power-law resolution distribution |
| M2 | Progressive Training | Fidelity | Quality as continuous spectrum |
| M5 | Query-Driven Lazy Resolution | Fidelity | Never pay for unused detail |
| M6 | TTM Tensor Compression | Fidelity | 97% compression with structure |
| M7 | Causal Temporal Chains | Temporal | Explicit timepoint ancestry |
| M8 | Vertical Timepoint Expansion | Temporal | Detail within moments |
| M12 | Counterfactual Branching | Temporal | Alternate timeline propagation |
| M14 | Circadian Activity Patterns | Temporal | Time-of-day behavior modulation |
| M17 | Modal Temporal Causality | Temporal | 5 causality regimes |
| M3 | Exposure Event Tracking | Provenance | Logged knowledge acquisition |
| M4 | Constraint Enforcement | Provenance | Conservation-law validation |
| M19 | Knowledge Extraction Agent | Provenance | LLM-based semantic extraction |
| M9 | On-Demand Entity Generation | Entity | Generate missing entities |
| M10 | Scene-Level Entity Sets | Entity | Environment/atmosphere/crowd |
| M11 | Dialog Synthesis | Entity | Per-character turn generation |
| M13 | Multi-Entity Synthesis | Entity | Relationship evolution tracking |
| M15 | Entity Prospection | Entity | Extended inner life |
| M16 | Animistic Entity Extension | Entity | Object/institution agency |
| M18 | Intelligent Model Selection | Infrastructure | Capability-based routing |

Quick Reference

  • Use M1 (Heterogeneous Fidelity) + M5 (Lazy Resolution) + M6 (TTM Compression): allocate detail where queries land, compress background entities
  • Use M3 (Exposure Events) + M4 (Constraint Enforcement) + M7 (Causal Chains): track knowledge provenance, validate temporal ordering
  • Use M12 (Counterfactual Branching) with BRANCHING mode (M17): create alternate timelines from intervention points
  • Use PORTAL mode (M17) with M7 (Causal Chains): backward reasoning from fixed endpoints
  • Use M11 (Dialog Synthesis) + M15 (Prospection) + M19 (Knowledge Extraction): per-character generation with inner life and semantic knowledge tracking
  • Use M8 (Vertical Expansion) + M12 (Branching) + M17 (Modal Causality): combine vertical detail and horizontal counterfactuals

Implementation Status

All 19 mechanisms are fully implemented and tested:
  • 142 ADPRS tests (124 unit + 18 integration)
  • 21 verified templates across 5 temporal modes
  • All showcase scenarios passing (including castaway_colony_branching full 19-mechanism showcase)
  • Model selection with 12 open-source models
  • TDF export format for suite interoperability
Costs: $0.15–$1.00 per run. All 21 templates verified February 16, 2026.

Next Steps

  • SNAG vs RAG: How SNAG grounds LLMs in social graphs
  • Temporal Modes: 5 ways to reason about causality
  • Fidelity Management: 95% cost reduction deep dive
  • Knowledge Provenance: Exposure events and causal audit
  • Quick Start: Run your first simulation
  • API Reference: Complete API documentation
