What is Portal Mode?

Portal mode implements backward temporal reasoning: you specify a known endpoint (the “portal”) and a known origin, then the system discovers plausible paths that connect them by working backward through time. Think of Portal mode as reverse causality: “Given that X happened in 2031, what chain of decisions in 2026-2030 made X inevitable?”
Core insight: Portal mode is for root cause analysis, disaster postmortems, and forensic timelines—scenarios where you know the outcome and need to understand how it came to be.

When to Use Portal Mode

Use Portal mode when:
  • You know the outcome - A disaster occurred, a verdict was reached, a mission failed
  • Root cause analysis - Trace backward to find decision chains that led to the outcome
  • Forensic investigation - Reconstruct events leading to a known result
  • Strategic postmortems - “How did we lose this deal?” “Why did this product fail?”
  • Regulatory analysis - Trace institutional failure back to policy/staffing decisions

Perfect for

  • Disaster investigations (Mars mission failure)
  • Litigation verdict analysis ($47M judgment—how?)
  • Product launch failure postmortems
  • Security breach forensics
  • Institutional failure analysis

Not ideal for

  • Open-ended exploration (use Forward/Branching)
  • Multiple endings (use Branching)
  • No clear endpoint (use Forward)
  • Narrative storytelling (use Directorial)

How Portal Mode Works

1. Define Portal & Origin

Specify the endpoint (portal) and the starting point (origin):
{
  "temporal": {
    "mode": "portal",
    "portal_description": "Ares III Mars mission loses contact during orbital insertion in March 2031. Cascading systems failures in life support and communications.",
    "portal_year": 2031,
    "origin_year": 2026,
    "backward_steps": 10
  }
}
2. Generate Backward Steps

Starting from the portal, the system generates antecedent states at each step:
  • What could have happened at T-1 to lead to the portal state?
  • Generate N candidates per step (default: 3)
  • Use LLM + hybrid scoring to rank plausibility
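The backward stepping loop can be sketched as follows. This is an illustrative sketch only: `backward_search`, `propose_antecedents`, and the stub below are hypothetical names, and the real system uses an LLM plus hybrid scoring to propose and rank candidates rather than the toy stub shown here.

```python
from typing import Callable

def backward_search(
    portal_state: str,
    steps: int,
    propose_antecedents: Callable[[str, int], list[tuple[str, float]]],
    candidates_per_step: int = 3,  # default: 3 candidates per step
) -> list[str]:
    """Walk backward from the portal, keeping the best-scored antecedent at each step."""
    path = [portal_state]
    current = portal_state
    for _ in range(steps):
        # Ask for N candidate prior states, each paired with a plausibility score
        candidates = propose_antecedents(current, candidates_per_step)
        best_state, _best_score = max(candidates, key=lambda c: c[1])
        path.append(best_state)
        current = best_state
    return list(reversed(path))  # reorder origin -> portal

# Toy stub standing in for the LLM proposal-and-scoring step
def stub_propose(state: str, n: int) -> list[tuple[str, float]]:
    return [(f"{state}<-cause{i}", 0.5 + 0.1 * i) for i in range(n)]

path = backward_search("portal", steps=2, propose_antecedents=stub_propose)
# path is ordered origin -> portal, with the portal state last
```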
3. Score & Filter Paths

Each backward path is scored using:
  • LLM plausibility (0.3 weight)
  • Historical precedent (0.2 weight)
  • Causal necessity (0.3 weight)
  • Entity capability (0.2 weight)
Paths below coherence_threshold (default: 0.7) are pruned.
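A worked example of the composite score with the default weights, and the pruning check against the default threshold:

```python
# Default weights from the scoring description above
WEIGHTS = {"llm": 0.3, "historical": 0.2, "causal": 0.3, "capability": 0.2}
COHERENCE_THRESHOLD = 0.7  # default; paths scoring below this are pruned

def composite_score(llm: float, historical: float, causal: float, capability: float) -> float:
    return (llm * WEIGHTS["llm"]
            + historical * WEIGHTS["historical"]
            + causal * WEIGHTS["causal"]
            + capability * WEIGHTS["capability"])

# Example: strong causal story, weaker historical precedent
score = composite_score(llm=0.8, historical=0.6, causal=0.9, capability=0.7)
# 0.8*0.3 + 0.6*0.2 + 0.9*0.3 + 0.7*0.2 = 0.77 -> kept (>= 0.7)
keep = score >= COHERENCE_THRESHOLD
```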
4. Forward Validation

To prevent hallucination, Portal mode forward-simulates each backward path:
  • Does the path make sense when read forward?
  • Can entities actually perform these actions?
  • Are knowledge dependencies satisfied?
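The knowledge-dependency check can be illustrated with a small sketch. The function name and the `learns`/`requires` state shape here are hypothetical, chosen only to show the idea: reading the path forward, an entity may only act on knowledge it acquired at an earlier or current step.

```python
def knowledge_dependencies_ok(path: list[dict]) -> bool:
    """Read the path forward (origin -> portal) and confirm every action
    only uses knowledge the entity has already acquired."""
    known: dict[str, set[str]] = {}
    for state in path:
        # Knowledge acquired at this step becomes available from here on
        for entity, facts in state.get("learns", {}).items():
            known.setdefault(entity, set()).update(facts)
        # Actions at this step must only rely on already-acquired knowledge
        for entity, needed in state.get("requires", {}).items():
            if not set(needed) <= known.get(entity, set()):
                return False  # entity acts on knowledge it does not yet have
    return True

path = [
    {"learns": {"lin_zhang": {"o2_test_failure"}}, "requires": {}},
    {"learns": {}, "requires": {"lin_zhang": {"o2_test_failure"}}},  # satisfied
]
ok = knowledge_dependencies_ok(path)
```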
5. Identify Pivot Points

The system detects critical decision moments where paths diverge:
  • “If Sarah had escalated the anomaly here, the mission would have delayed”
  • These become pivot points for analysis

Architecture

Portal mode is implemented in workflows/portal_strategy.py:
class PortalStrategy:
    """
    Backward simulation strategy for portal-anchored scenarios.
    
    Attributes:
        config: TemporalConfig with mode=PORTAL
        llm: LLM client for state generation and scoring
        store: GraphStore for persistence
        entity_roster: Optional dict of entity definitions from template
    """
    
    def run(self) -> list[PortalPath]:
        """
        Execute portal-anchored backward simulation.
        
        Returns:
            List of PortalPath objects, ranked by coherence score
        
        Process:
        1. Generate portal endpoint state from description
        2. Determine exploration strategy (adaptive/oscillating/random)
        3. Generate backward paths (N paths, M steps each)
        4. Validate forward coherence (can we get from origin to portal?)
        5. Rank paths by hybrid scoring
        6. Detect pivot points using path divergence analysis
        7. Return top K ranked paths with explanations
        """

Key Data Structures

from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class PortalState:
    """A state at a specific point in the backward simulation"""
    year: int
    month: int
    description: str
    entities: list[Entity]
    world_state: dict[str, Any]
    plausibility_score: float = 0.0
    parent_state: Optional["PortalState"] = None  # T+1
    children_states: list["PortalState"] = field(default_factory=list)  # T-1
    resolution_level: Optional[ResolutionLevel] = None

@dataclass
class PortalPath:
    """Complete path from origin to portal"""
    path_id: str
    states: list[PortalState]  # Ordered origin→portal
    coherence_score: float
    pivot_points: list[int]  # Indices of critical decisions
    explanation: str
    validation_details: dict[str, Any]
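Ranking and truncating paths to `path_count` (step 7 of `run`) amounts to a sort on `coherence_score`. A minimal sketch, using a hypothetical stand-in class rather than the full `PortalPath`:

```python
from dataclasses import dataclass

@dataclass
class RankedPath:
    """Minimal stand-in for PortalPath, for illustration only."""
    path_id: str
    coherence_score: float

def top_paths(paths: list[RankedPath], path_count: int = 5) -> list[RankedPath]:
    """Rank by coherence score (descending) and keep the top path_count."""
    return sorted(paths, key=lambda p: p.coherence_score, reverse=True)[:path_count]

paths = [RankedPath("a", 0.72), RankedPath("b", 0.81), RankedPath("c", 0.69)]
best = top_paths(paths, path_count=2)
# highest-coherence paths first: "b" (0.81), then "a" (0.72)
```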

Configuration

{
  "temporal": {
    "mode": "portal",
    "portal_description": "The endpoint state description",
    "portal_year": 2031,
    "origin_year": 2026,
    "backward_steps": 10,
    "exploration_mode": "adaptive",
    "candidate_antecedents_per_step": 3,
    "path_count": 5,
    "coherence_threshold": 0.7,
    "use_simulation_judging": true,
    "simulation_forward_steps": 1,
    "judge_model": "meta-llama/llama-3.1-405b-instruct"
  }
}
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| mode | string | required | Must be "portal" |
| portal_description | string | required | Description of the endpoint state |
| portal_year | int | required | Year of the portal (endpoint) |
| origin_year | int | required | Year of the origin (start) |
| backward_steps | int | 10 | Number of backward steps to generate |
| exploration_mode | string | "adaptive" | One of "reverse_chronological", "oscillating", "random", "adaptive" |
| oscillation_complexity_threshold | int | 10 | If steps > threshold, use oscillating |
| candidate_antecedents_per_step | int | 3 | Candidate prior states per step |
| path_count | int | 5 | Number of complete paths to return |
| coherence_threshold | float | 0.7 | Minimum score for path validation |
| llm_scoring_weight | float | 0.3 | Weight for LLM plausibility |
| historical_precedent_weight | float | 0.2 | Weight for historical similarity |
| causal_necessity_weight | float | 0.3 | Weight for causal linkage |
| entity_capability_weight | float | 0.2 | Weight for entity capability |
| use_simulation_judging | bool | true | Enable forward validation with judge model |
| simulation_forward_steps | int | 1 | Steps to forward-simulate for validation |
| judge_model | string | "llama-3.1-405b" | Model for judging path coherence |
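The four scoring weights form a weighted average, so the defaults (0.3 + 0.2 + 0.3 + 0.2) sum to 1.0. If you tune them, keep that property; a config loader might sanity-check it like this (an illustrative sketch, not the library's actual validation):

```python
import json
import math

# The four scoring-weight fields from the temporal config
config = json.loads("""{
  "llm_scoring_weight": 0.3,
  "historical_precedent_weight": 0.2,
  "causal_necessity_weight": 0.3,
  "entity_capability_weight": 0.2
}""")

total = sum(config.values())
# Composite scores only stay on a 0-1 scale if the weights sum to 1.0
assert math.isclose(total, 1.0), f"scoring weights must sum to 1.0, got {total}"
```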

Template Examples

Example 1: Ares III Mars Mission Portal

From showcase/mars_mission_portal.json:
{
  "scenario_description": "PORTAL backward reasoning from a failed crewed Mars mission in 2031 to its origins in 2026. The Ares III crew of 4 astronauts launched successfully but lost contact during Mars orbital insertion.",
  "temporal": {
    "mode": "portal",
    "portal_description": "Ares III crewed Mars mission loses contact during orbital insertion in March 2031. Last telemetry shows cascading systems failures in life support and communications. Trace backward to understand how this disaster was built, decision by decision.",
    "portal_year": 2031,
    "origin_year": 2026,
    "backward_steps": 10,
    "exploration_mode": "adaptive",
    "candidate_antecedents_per_step": 3,
    "path_count": 5,
    "coherence_threshold": 0.7,
    "use_simulation_judging": true,
    "judge_model": "meta-llama/llama-3.1-405b-instruct",
    "fidelity_planning_mode": "hybrid",
    "token_budget": 200000.0
  },
  "metadata": {
    "entity_roster": {
      "sarah_okafor": {
        "role": "Mission Commander. Experienced but politically pressured.",
        "initial_knowledge": ["mission_parameters", "crew_capabilities"]
      },
      "raj_mehta": {
        "role": "Flight Engineer. Brilliant but conflict-averse.",
        "initial_knowledge": ["flight_systems_status", "anomaly_detection"]
      },
      "lin_zhang": {
        "role": "Systems Engineer. Detected anomalies but was overruled.",
        "initial_knowledge": ["alss_design", "oxygen_generator_failures"]
      },
      "thomas_webb": {
        "role": "Mission Director. Prioritized schedule over safety.",
        "initial_knowledge": ["budget_constraints", "schedule_milestones"]
      }
    }
  }
}
Cost: $0.40 | Duration: ~15 min | Entities: 4 | Steps: 10

Pivot points discovered:
  1. March 2026: Initial life support design decision (simplified system)
  2. August 2027: Lin Zhang’s anomaly report marked “reviewed” without action
  3. January 2029: Budget cut forces crew size reduction
  4. November 2030: Final pre-launch review overrides safety concerns

Example 2: Litigation Verdict Portal

From persona/agent3_litigation_portal.json:
{
  "scenario_description": "PORTAL backward reasoning from $47M trade secret verdict to the chain of decisions that made it inevitable. Working backward from jury verdict through discovery, depositions, document retention failures, and initial IP protection gaps.",
  "temporal": {
    "mode": "portal",
    "portal_description": "$47 million jury verdict for plaintiff in trade secret misappropriation case. Jury found defendant willfully misappropriated 14 proprietary algorithms and customer data. Verdict includes $32M compensatory damages and $15M punitive damages.",
    "portal_year": 2025,
    "origin_year": 2020,
    "backward_steps": 12,
    "path_count": 5,
    "coherence_threshold": 0.75
  }
}
Cost: $0.40 | Duration: ~15min | Use case: Legal postmortem

Exploration Modes

Portal mode supports multiple exploration strategies:
Reverse chronological is the standard backward stepping: 100 → 99 → 98 → … → 1
  • Simplest approach
  • Good for linear cause-effect chains
  • Default for backward_steps <= 10
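One way the reverse-chronological stride could be computed is to space checkpoints evenly between the portal and origin years. This is an illustrative sketch (a hypothetical helper, not the library's implementation), working at year/month granularity and ignoring the portal's exact month:

```python
def backward_timeline(portal_year: int, origin_year: int, steps: int) -> list[tuple[int, int]]:
    """Evenly spaced (year, month) checkpoints from the portal back to the origin."""
    total_months = (portal_year - origin_year) * 12
    stride = total_months / steps  # months to step back per iteration
    points = []
    for i in range(steps + 1):
        months_before_portal = round(i * stride)
        year, month_index = divmod(portal_year * 12 - months_before_portal, 12)
        points.append((year, month_index + 1))  # months are 1-based
    return points

# 2031 back to 2026 in 10 steps -> one checkpoint every 6 months
timeline = backward_timeline(2031, 2026, 10)
```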

Hybrid Scoring System

Portal paths are scored using a weighted hybrid approach:
def _score_path(self, path: PortalPath) -> float:
    """
    Composite scoring:
    - LLM plausibility (30%): Does this make narrative sense?
    - Historical precedent (20%): Has something similar happened?
    - Causal necessity (30%): Must A → B given C?
    - Entity capability (20%): Can entity X perform action Y?
    """
    llm_score = self._llm_plausibility(path)
    historical_score = self._historical_similarity(path)
    causal_score = self._causal_necessity(path)
    capability_score = self._entity_capability_check(path)
    
    return (
        llm_score * self.config.llm_scoring_weight +
        historical_score * self.config.historical_precedent_weight +
        causal_score * self.config.causal_necessity_weight +
        capability_score * self.config.entity_capability_weight
    )
You can tune these weights in your template configuration.

Pivot Point Detection

Portal mode automatically detects critical decision moments where paths diverge:
def _detect_pivot_points(self, path: PortalPath, divergence_analysis: dict) -> list[int]:
    """
    Identify states where decisions created irreversible consequences.
    
    A pivot point is a state where:
    1. Multiple alternative antecedents were plausible
    2. The chosen antecedent had high causal weight
    3. Alternate paths would have led to different outcomes
    """
Output example:
{
  "path_id": "portal_path_a8f9",
  "pivot_points": [3, 7, 9],
  "pivot_explanations": [
    "State 3 (Aug 2027): Lin Zhang's anomaly report was marked 'reviewed' without escalation. If escalated, mission would have delayed 6 months for ALSS redesign.",
    "State 7 (Jan 2029): Budget cut forced crew reduction from 6 to 4. Eliminated redundancy in life support operations.",
    "State 9 (Nov 2030): Pre-launch review overrode safety concerns to meet launch window. Final point of no return."
  ]
}

Best Practices

The portal description anchors the entire backward search. Be concrete.
Good: “Ares III loses contact during orbital insertion on March 15, 2031 at 14:23 UTC. Last telemetry shows O2 generator failure cascading to comms blackout. All 4 crew presumed lost.”
Bad: “The Mars mission failed.”
Portal mode works best when you pre-define key entities:
{
  "metadata": {
    "entity_roster": {
      "lin_zhang": {
        "role": "Systems Engineer who detected ALSS anomalies",
        "initial_knowledge": ["oxygen_generator_test_failures"],
        "personality_traits": ["precise", "frustrated", "data-driven"]
      }
    }
  }
}
This prevents entity hallucination and grounds the narrative in specific people.
For high-stakes analysis (legal, disaster investigation), enable forward validation:
{
  "temporal": {
    "use_simulation_judging": true,
    "simulation_forward_steps": 2,
    "judge_model": "meta-llama/llama-3.1-405b-instruct"
  }
}
This uses a large judge model to forward-simulate each path and verify coherence.
Tune coherence_threshold to match the stakes of the analysis:
  • Exploratory postmortems: 0.65 (permissive, more paths)
  • Standard analysis: 0.70 (balanced)
  • Legal/regulatory: 0.75 (strict, fewer but higher-quality paths)
Set path_count: 5 or higher to explore multiple failure modes:
  • Path 1: Schedule pressure overrode safety concerns
  • Path 2: Budget cuts eliminated redundancy
  • Path 3: Personnel conflict prevented escalation
  • Path 4: Technical debt accumulated unnoticed
  • Path 5: Regulatory oversight gaps

Cost Estimates

Quick

$0.08 - $0.15 | 3-4 entities | 5 backward steps | 3-5 min

Standard

$0.25 - $0.50 | 4-6 entities | 8-10 steps | 10-15 min

Comprehensive

$0.40 - $1.00 | 6-8 entities | 10-15 steps | 15-25 min
Portal mode is more expensive than Forward because it explores multiple candidate antecedents per step and runs forward validation.

Running Portal Mode

# Run a portal template
./run.sh run mars_mission_portal

# Quick mode (fewer steps, lower cost)
./run.sh run mars_mission_portal --portal-quick

# With simulation judging enabled
./run.sh run mars_mission_portal --portal-simjudged-quick
Portal mode commonly pairs with:
  • M3 (Exposure Events) - Track when entities learned critical information
  • M7 (Causal Chains) - Validate backward→forward causality
  • M11 (Dialog Synthesis) - Generate conversations at pivot points
  • M13 (Relationship Evolution) - Track trust erosion over time
  • M17 (Modal Causality) - Portal-specific validation rules
