The Insight
Not all entities and moments deserve equal detail. Fidelity should concentrate around high-centrality entities at critical timepoints, like a map that renders at higher resolution only where you zoom. Key principle: resolution is heterogeneous and mutable. Queries elevate resolution (lazy loading); disuse allows compression back down.
M1: Heterogeneous Fidelity Graphs
Each (entity, timepoint) pair maintains independent resolution.
Resolution Levels
schemas.py:11-17
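The actual definitions live at schemas.py:11-17. As a sketch, the levels named throughout this doc could be modeled as an ordered enum keyed per (entity, timepoint); the ordering and numeric values here are assumptions, not the real schema:

```python
from enum import IntEnum

class ResolutionLevel(IntEnum):
    """Hypothetical ordering; higher values mean more tokens."""
    TENSOR_ONLY = 0   # compressed tensor, nothing else
    TENSOR = 1        # tensor plus minimal tracked state
    SCENE = 2         # appears in scene descriptions
    DIALOG = 3        # can produce dialog
    TRAINED = 4       # full fidelity, heavily queried

# Resolution is tracked per (entity, timepoint) pair, not per entity.
resolution: dict[tuple[str, int], ResolutionLevel] = {
    ("tanaka", 3): ResolutionLevel.TRAINED,
    ("park", 3): ResolutionLevel.TENSOR_ONLY,
}
```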
Example Structure
Token Economics
Without heterogeneous fidelity:
- 100 entities × 10 timepoints at uniform high fidelity
- Cost: ~50M tokens
- Per query: ~$500

With dynamic resolution:
- Same scenario
- Cost: ~2.5M tokens (95% reduction)
- Per query: ~$25
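The arithmetic behind these figures (the ~$10 per million tokens rate is inferred from the numbers above, not stated in the codebase):

```python
entities, timepoints = 100, 10
uniform_tokens = 50_000_000   # all pairs at high fidelity
dynamic_tokens = 2_500_000    # heterogeneous resolution

reduction = 1 - dynamic_tokens / uniform_tokens
price_per_million = 10.0      # implied by $500 for 50M tokens

print(f"{reduction:.0%}")                            # 95%
print(uniform_tokens / 1e6 * price_per_million)      # 500.0
print(dynamic_tokens / 1e6 * price_per_million)      # 25.0
```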
Real Implementation
schemas.py:138-161
Castaway Colony Example
In the same scene:
- Commander Tanaka: TRAINED (heavily queried for command decisions)
- Engineer Sharma: DIALOG (repair assessments)
- Crashed Meridian: TENSOR (background life support tracking)
- Navigator Park: TENSOR_ONLY (inactive until queried about pre-crash data)
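Under these assignments, a rough scene-cost comparison. The per-level budgets below are illustrative assumptions, not values from the codebase:

```python
# Hypothetical per-level token budgets (illustrative only)
BUDGET = {"TRAINED": 50_000, "DIALOG": 8_000, "TENSOR": 1_600, "TENSOR_ONLY": 1_600}

scene = {
    "Commander Tanaka": "TRAINED",
    "Engineer Sharma": "DIALOG",
    "Crashed Meridian": "TENSOR",
    "Navigator Park": "TENSOR_ONLY",
}

scene_tokens = sum(BUDGET[level] for level in scene.values())
uniform_tokens = len(scene) * BUDGET["TRAINED"]
print(scene_tokens, uniform_tokens)  # 61200 200000
```

Even in a four-entity scene, mixed resolution costs a fraction of uniform high fidelity.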
M2: Progressive Training
Entity quality exists on a continuous spectrum determined by accumulated interaction, not binary cached/uncached state.
Metadata Structure
Quality Accumulation
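One way accumulated queries could drive promotion, sketched with hypothetical thresholds (the counts and the mapping are illustrative assumptions):

```python
# Hypothetical promotion thresholds: after N accumulated queries,
# an entity qualifies for the named resolution level.
PROMOTION_THRESHOLDS = [(0, "SCENE"), (3, "DIALOG"), (10, "TRAINED")]

def quality_level(query_count: int) -> str:
    """Map accumulated queries to a resolution level (monotone in query_count)."""
    level = PROMOTION_THRESHOLDS[0][1]
    for threshold, name in PROMOTION_THRESHOLDS:
        if query_count >= threshold:
            level = name
    return level

print(quality_level(0))   # SCENE
print(quality_level(4))   # DIALOG
print(quality_level(12))  # TRAINED
```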
Progression path:
Castaway Colony Example
Dr. Okonkwo starts at SCENE resolution, a background doctor. As the crew discovers alien flora, xenobiology queries accumulate:
- "Is this lichen edible?"
- “What’s the toxicity profile?”
- “Is the bioluminescence harmful?”
M5: Query-Driven Lazy Resolution
Resolution decisions happen at query time, not simulation time.
Decision Logic
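A minimal sketch of the query-time decision, assuming the level ordering used elsewhere in this doc (the function name and ordering are assumptions):

```python
LEVEL_ORDER = ("TENSOR_ONLY", "TENSOR", "SCENE", "DIALOG", "TRAINED")

def resolve_for_query(current: str, required: str) -> str:
    """Elevate an entity only as far as the query requires, never preemptively."""
    if LEVEL_ORDER.index(current) >= LEVEL_ORDER.index(required):
        return current   # already detailed enough; no new cost
    return required      # lazy elevation, paid for now

print(resolve_for_query("TENSOR_ONLY", "DIALOG"))  # DIALOG
print(resolve_for_query("TRAINED", "DIALOG"))      # TRAINED (never downgraded by a query)
```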
Key Principle
We never pay for detail nobody asked about. This is the core of cost reduction: detail is generated lazily, only when queries require it.
Castaway Colony Example
Navigator Jin Park starts at TENSOR_ONLY: injured, inactive, consuming minimal tokens. When Branch C (Repair & Signal) needs pre-crash orbital data to locate the emergency beacon, a query about navigation logs triggers lazy elevation to DIALOG. Park reveals the hemisphere landing error, which cascades:
- Vasquez recalibrates weather models
- Tanaka explains the terrain mismatch
Query History Tracking
schemas.py:294-304
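The real structure is at schemas.py:294-304; the sketch below is a hypothetical shape for query-history tracking, included only to show how recent query pressure could feed promotion and compression decisions:

```python
from dataclasses import dataclass, field

@dataclass
class QueryRecord:
    """Hypothetical record shape; the real fields live in schemas.py:294-304."""
    timestamp: float
    topic: str

@dataclass
class QueryHistory:
    records: list[QueryRecord] = field(default_factory=list)

    def record(self, timestamp: float, topic: str) -> None:
        self.records.append(QueryRecord(timestamp, topic))

    def count_since(self, cutoff: float) -> int:
        """Recent query pressure, e.g. to decide promotion vs. compression."""
        return sum(1 for r in self.records if r.timestamp >= cutoff)

history = QueryHistory()
history.record(1.0, "flora")
history.record(5.0, "toxicity")
print(history.count_since(2.0))  # 1
```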
M6: TTM Tensor Compression
At TENSOR_ONLY resolution, entities are represented as structured tensors.
Tensor Structure
schemas.py:84-100
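The actual fields are at schemas.py:84-100. A minimal sketch using the compressed dimensions cited in this doc (field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class TTMTensor:
    """Illustrative sketch; the real definition lives in schemas.py:84-100."""
    context: list[float]    # 8 dims, compressed from ~1000
    biology: list[float]    # 4 dims, compressed from ~50
    behavior: list[float]   # 8 dims, compressed from ~100

t = TTMTensor(context=[0.0] * 8, biology=[0.0] * 4, behavior=[0.0] * 8)
print(len(t.context) + len(t.biology) + len(t.behavior))  # 20 dims total
```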
Context Vector Layout
Compression Ratios
| Representation | Token Count | Compression |
|---|---|---|
| Full entity | ~50,000 tokens | Baseline |
| TTM tensor | ~1,600 tokens | 97% |
Detailed Compression by Vector
- Context tensor: 1000 dims → 8 dims (99.2%)
- Biology tensor: 50 dims → 4 dims (92%)
- Behavior tensor: 100 dims → 8 dims (92%)
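Checking these per-vector ratios:

```python
# (full dims, compressed dims) for each vector, from the list above
pairs = {"context": (1000, 8), "biology": (50, 4), "behavior": (100, 8)}

for name, (full, compressed) in pairs.items():
    print(name, f"{1 - compressed / full:.1%}")
# context 99.2%
# biology 92.0%
# behavior 92.0%
```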
Castaway Colony Example
The Kepler-442b biosphere is compressed as a TTM tensor: querying its electromagnetic_sensitivity dimension reconstructs the relevant behavior without decompressing the entire biosphere.
Dual Tensor Architecture & Synchronization
The system maintains two representations of cognitive/emotional state:
Two Tensor Types
- TTMTensor (entity.tensor): trained, compressed tensor with emotional values (0-1 scale)
- CognitiveTensor (entity.entity_metadata["cognitive_tensor"]): runtime state used during dialog synthesis (-1 to 1 scale for valence, 0-100 for energy)
The Sync Problem
Without synchronization:
- Entities start dialog with default CognitiveTensor values (valence=0.0, arousal=0.0)
- This ignores their trained TTMTensor state
- Emotional changes from dialog are lost when entities are reloaded
The Solution: Bidirectional Sync
Scale Conversions
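Assuming simple linear rescaling between the two representations (the actual conversion in the codebase may differ):

```python
def ttm_valence_to_cognitive(v: float) -> float:
    """TTM [0, 1] -> CognitiveTensor valence [-1, 1]."""
    return v * 2.0 - 1.0

def cognitive_valence_to_ttm(v: float) -> float:
    """CognitiveTensor valence [-1, 1] -> TTM [0, 1]."""
    return (v + 1.0) / 2.0

def ttm_energy_to_cognitive(e: float) -> float:
    """TTM [0, 1] -> CognitiveTensor energy [0, 100]."""
    return e * 100.0

print(ttm_valence_to_cognitive(0.75))   # 0.5
print(cognitive_valence_to_ttm(-1.0))   # 0.0
print(ttm_energy_to_cognitive(0.5))     # 50.0
```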
Implementation
In workflows/dialog_synthesis.py:
- _sync_ttm_to_cognitive(): called before dialog; copies trained tensor values to runtime state
- _sync_cognitive_to_ttm(): called after dialog; writes emotional changes back to the tensor
Resolution Token Budgets
From workflows/temporal_agent.py:27-34:
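A hypothetical shape for those budgets (the real values live in workflows/temporal_agent.py:27-34; the numbers below are placeholders):

```python
# Placeholder budgets, NOT the values from temporal_agent.py
RESOLUTION_BUDGETS = {
    "TENSOR_ONLY": 1_600,
    "TENSOR": 2_000,
    "SCENE": 4_000,
    "DIALOG": 8_000,
    "TRAINED": 50_000,
}

def within_budget(level: str, tokens: int) -> bool:
    """Guard used when rendering an entity at a given resolution."""
    return tokens <= RESOLUTION_BUDGETS[level]

print(within_budget("TENSOR_ONLY", 1_500))  # True
print(within_budget("TENSOR_ONLY", 2_000))  # False
```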
Performance Validation
Validation complexity: O(n) for n validators using set operations and vector norms. From production runs:
- Typical simulation: 100 entities, 10 timepoints
- With uniform fidelity: 50M tokens (~$500)
- With M1+M2+M5+M6: 250k tokens (~$2.50)
- 200x cost reduction
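Verifying the headline numbers (the ~$10 per million tokens rate is inferred from the uniform-fidelity figures above):

```python
uniform_tokens = 50_000_000   # uniform fidelity
optimized_tokens = 250_000    # M1 + M2 + M5 + M6

price_per_million = 10.0      # implied by $500 / 50M tokens

print(uniform_tokens // optimized_tokens)              # 200
print(optimized_tokens / 1e6 * price_per_million)      # 2.5
```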
Next Steps
Temporal Reasoning
Five temporal modes and causal chains
Knowledge Provenance
Who knows what, from whom, when

