Overview
Timepoint Pro uses variable-depth fidelity to minimize cost while preserving simulation quality. The core insight: most entities, most of the time, can stay at low resolution (~200 tokens). Detail expands only where queries land. This is the physics-style abstraction that makes SNAG scalable:
- Coarse resolution for broad arcs
- High resolution at critical pivots
- Query-driven detail expansion
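The upgrade-on-query behavior can be sketched as follows. `Entity`, `FidelityLevel`, and `on_query` are illustrative names, not the actual Timepoint Pro API; the token figures come from the fidelity levels documented below.

```python
from dataclasses import dataclass
from enum import IntEnum

class FidelityLevel(IntEnum):
    # Approximate per-entity token costs (see the fidelity levels below)
    TENSOR_ONLY = 200
    BASIC_PROFILE = 800
    FULL_CONTEXT = 2000

@dataclass
class Entity:
    name: str
    fidelity: FidelityLevel = FidelityLevel.TENSOR_ONLY

def on_query(entity: Entity, needs_internal_state: bool) -> Entity:
    """Expand detail only where a query lands (hypothetical upgrade rule)."""
    if needs_internal_state:
        entity.fidelity = FidelityLevel.FULL_CONTEXT
    elif entity.fidelity < FidelityLevel.BASIC_PROFILE:
        entity.fidelity = FidelityLevel.BASIC_PROFILE
    return entity

# A background entity stays at ~200 tokens until a query touches it
npc = Entity("crowd_member_17")
on_query(npc, needs_internal_state=False)
print(npc.fidelity.name)  # BASIC_PROFILE
```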
Fidelity Levels
TENSOR_ONLY (~200 tokens)
Use case: Background entities, crowd members, entities not involved in current scene
Mechanisms: M6 (Tensor Compression)
BASIC_PROFILE (~800 tokens)
Use case: Active participants in scene, dialog speakers
Mechanisms: M1 (Heterogeneous Fidelity), M6 (Tensor Compression)
FULL_CONTEXT (~2000+ tokens)
Use case: Protagonist, key decision makers, entities with complex internal state
Mechanisms: M1 (Heterogeneous Fidelity), M2 (Progressive Training), M6 (Tensor Compression), M15 (Prospection)
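To see how per-entity fidelity drives scene cost, a scene's context budget can be estimated by summing the approximate per-level costs above. The helper below is a back-of-envelope sketch, not part of the product:

```python
# Approximate per-entity token costs from the fidelity levels above
FIDELITY_TOKENS = {
    "TENSOR_ONLY": 200,
    "BASIC_PROFILE": 800,
    "FULL_CONTEXT": 2000,
}

def scene_token_estimate(entities: dict[str, str]) -> int:
    """Sum approximate context tokens for a scene's entity roster."""
    return sum(FIDELITY_TOKENS[level] for level in entities.values())

# One protagonist, two dialog speakers, five background entities
scene = {"protagonist": "FULL_CONTEXT",
         "speaker_a": "BASIC_PROFILE",
         "speaker_b": "BASIC_PROFILE",
         **{f"crowd_{i}": "TENSOR_ONLY" for i in range(5)}}
print(scene_token_estimate(scene))  # 4600
```

Keeping the five crowd members at TENSOR_ONLY instead of BASIC_PROFILE saves 3,000 tokens in this example, which is the whole point of variable-depth fidelity.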
Fidelity Templates
Pre-configured fidelity strategies:
minimal
- All entities start at TENSOR_ONLY
- No automatic upgrades
- Dialog synthesis disabled
- Minimal knowledge tracking
Use case: Rapid prototyping, convergence testing, bulk data generation
balanced
- Entities start at TENSOR_ONLY
- Dialog participants upgraded to BASIC_PROFILE
- Key decision makers upgraded to FULL_CONTEXT
- Automatic downgrade after scene
Use case: Default for most scenarios (95% of templates use this)
high_detail
- Key entities start at FULL_CONTEXT
- All dialog participants maintained at BASIC_PROFILE minimum
- Rich knowledge tracking (M3 Exposure Events)
- Extended proception state (M15)
Use case: Training data generation, showcase demos, research
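In a scenario config, selecting one of these templates might look roughly like the fragment below. The key names (`fidelity_template`, `token_budget_mode`, `timepoints.count`, `entities.count`, `include_dialogs`) appear elsewhere in this guide, but the exact schema shown here is an assumption to verify against your templates:

```python
# Hypothetical scenario config using keys mentioned in this guide;
# the exact schema may differ in your Timepoint Pro version.
config = {
    "fidelity_template": "balanced",   # TENSOR_ONLY start, upgrades for speakers
    "token_budget_mode": "soft",       # target budget, overruns tolerated
    "timepoints": {"count": 12},
    "entities": {"count": 8},
    "include_dialogs": True,
}
```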
Token Budget Modes
hard (Strict)
- Simulation aborts if budget exceeded
- Forces entity downgrades before generation
- Skips dialog if insufficient tokens
soft (Flexible)
- Budget is a target, not a hard limit
- Allows overruns up to 20%
- Logs warnings but continues
adaptive (Dynamic)
- Dynamically adjusts fidelity based on scene importance
- Upgrades entities at narrative pivots
- Downgrades during transitions
- Learns optimal fidelity allocation over run
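A minimal sketch of how the three modes differ when a budget check fires, assuming the 20% soft-mode tolerance and the downgrade-instead-of-abort behavior described above (illustrative logic, not the actual implementation):

```python
def check_budget(mode: str, used: int, budget: int) -> str:
    """Sketch of the three budget modes described above (illustrative)."""
    if mode == "hard":
        # Strict: exceeding the budget aborts the simulation
        return "abort" if used > budget else "continue"
    if mode == "soft":
        # Flexible: tolerate overruns up to 20%, logging a warning
        if used > budget * 1.2:
            return "abort"
        return "warn" if used > budget else "continue"
    if mode == "adaptive":
        # Dynamic: downgrade entity fidelity instead of aborting
        return "downgrade_entities" if used > budget else "continue"
    raise ValueError(f"unknown mode: {mode}")

print(check_budget("soft", used=110_000, budget=100_000))  # warn
```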
Model Selection (M18)
The model selector chooses models based on action type and requirements.
Action Types
Selection Preferences
Quality-first:
Model Profiles
Fallback Chains
Automatic retry with model diversity:
Batch Operations
Run Multiple Templates
Run all templates in a category:
Convergence Testing
Repeat the same template to measure stability:
Variation Generation
Generate diverse outputs from the same scenario:
Cost Estimation
Roughly:
- Input tokens: $0.30-1.50 per 1M tokens (model dependent)
- Output tokens: $1.00-5.00 per 1M tokens
- Average run: 20,000-100,000 tokens total
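Using the rough rates above, a back-of-envelope estimate can be computed as follows. The mid-range rates and the 3:1 input/output split are assumptions; substitute your model's actual pricing:

```python
def estimate_run_cost(total_tokens: int,
                      input_rate: float = 0.60,   # $/1M input tokens (assumed mid-range)
                      output_rate: float = 2.00,  # $/1M output tokens (assumed mid-range)
                      input_fraction: float = 0.75) -> float:
    """Rough run cost in dollars, given an assumed input/output split."""
    input_tokens = total_tokens * input_fraction
    output_tokens = total_tokens - input_tokens
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A typical 100k-token run at mid-range rates
print(round(estimate_run_cost(100_000), 3))  # 0.095
```

This lands in the same range as the per-template costs listed under Cost by Template Category below.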
Best Practices
Start Cheap, Scale Up
Use Quick Tier for Iteration
Develop using quick tier templates:
Disable Unnecessary Features
Optimize Timepoint Count
Use Training-Safe Models for Data Generation
DeepSeek is the cheapest unrestricted model:
Cost Troubleshooting
Run Too Expensive
Check actual cost:
- Set fidelity_template: minimal
- Decrease timepoints.count
- Decrease entities.count
- Set token_budget_mode: hard with a lower budget
- Disable include_dialogs
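Combined, those reductions might look like the fragment below. Key names follow this guide, but the schema (in particular the `token_budget` key for the lower budget) is a hypothetical sketch to verify against your config format:

```python
# Cheapest-run settings combining the reductions listed above
# (hypothetical schema; verify key names against your templates).
cheap_config = {
    "fidelity_template": "minimal",
    "timepoints": {"count": 5},      # fewer timepoints
    "entities": {"count": 3},        # fewer entities
    "token_budget_mode": "hard",
    "token_budget": 20_000,          # assumed key for the lower budget
    "include_dialogs": False,
}
```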
Unexpected Token Usage
Debug token consumption:
- Dialog with many turns (10+ turns = 5000+ tokens)
- FULL_CONTEXT entities (2000+ tokens each)
- Knowledge provenance tracking (M3 adds ~20% overhead)
- Prospection state (M15 adds ~30% overhead)
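The overheads above compound when both features are enabled. A quick sanity check using the listed figures (treating the overheads as multiplicative is an assumption):

```python
def scene_tokens_with_overheads(base: int, m3: bool, m15: bool) -> int:
    """Apply the approximate M3 (+20%) and M15 (+30%) overheads to a base estimate."""
    tokens = base
    if m3:
        tokens = int(tokens * 1.20)  # knowledge provenance tracking
    if m15:
        tokens = int(tokens * 1.30)  # prospection state
    return tokens

# Two FULL_CONTEXT entities plus a 10-turn dialog, with both features on
base = 2 * 2000 + 5000
print(scene_tokens_with_overheads(base, m3=True, m15=True))  # 14040
```

With both features on, a 9,000-token scene grows past 14,000 tokens, which is often the surprise behind "unexpected" usage.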
Budget Exceeded Errors
Error:
Cost by Template Category
Quick Tier (less than $0.05)
convergence/simple
Standard Tier (less than $0.20)
- board_meeting - $0.05
- jefferson_dinner - $0.05
- hospital_crisis - $0.05
- detective_prospection - $0.05
- kami_shrine - $0.05
- vc_pitch_forward - $0.08
- vc_pitch_branching - $0.10
- sec_investigation - $0.08
- agent1_regulatory_stress - $0.08
- agent2_mission_failure - $0.10
- agent3_litigation_discovery - $0.06
- agent4_elk_migration - $0.10
Comprehensive Tier (less than $1.00)
- vc_pitch_roadshow - $0.20
- hound_shadow_directorial - $0.25
- mars_mission_portal - $0.40
- agent3_litigation_portal - $0.40
- castaway_colony_branching - $1.50 (pending)
Next Steps
- Learn about Model Selection (M18) for detailed model selector behavior
- Read Training Data to understand licensing considerations
- Explore Templates to configure fidelity settings

