Interactive mode provides a natural language query interface for exploring entity knowledge states, relationships, and temporal evolution.

Starting Interactive Mode

python cli.py mode=interactive
======================================================================
TEMPORAL SIMULATION INTERACTIVE QUERY INTERFACE
======================================================================

You can ask questions about entities in the temporal simulation.
Examples:
  'What did George Washington think about becoming president?'
  'How did Thomas Jefferson feel about the inauguration?'
  'What actions did Alexander Hamilton take during the ceremony?'

Type 'help' for more examples, 'exit' or 'quit' to leave.

Query: _

Query Interface

The interface parses natural language queries and synthesizes contextual responses.

Query Types

Ask about what entities know:
"What did George Washington think about becoming president?"
"What does Thomas Jefferson know about the Constitution?"
"Tell me Hamilton's understanding of the financial system"
Returns:
  • Entity knowledge state
  • Confidence scores
  • Source attributions (exposure events)

Query Processing Pipeline

1. Parse Query

LLM extracts structured intent:
QueryIntent(
    target_entity="george_washington",
    information_type="knowledge",
    context_entities=[],
    confidence=0.9,
    reasoning="Query asks about Washington's thoughts"
)
2. Retrieve Data

Fetch relevant entities, timepoints, and exposure events from the database.
3. Elevate Resolution (if needed)

Automatically elevates entity resolution to provide detailed responses:
  • TENSOR_ONLY → SCENE (for basic context)
  • SCENE → DIALOG (for detailed interactions)
  • DIALOG → FULL_CONTEXT (for comprehensive history)
4. Synthesize Response

LLM generates natural language response with:
  • Relevant knowledge items
  • Source attributions
  • Confidence indicators
  • Temporal context
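The four stages above can be sketched end to end. Every function below is a stand-in for illustration (the real implementation calls an LLM and the database), not the project's actual API:

```python
# Illustrative outline of the four-stage query pipeline. All function
# bodies are hypothetical stubs, shown only to make the flow concrete.

def parse_query(query: str) -> dict:
    # 1. In the real system an LLM extracts a structured QueryIntent
    return {"target_entity": "george_washington", "information_type": "knowledge"}

def retrieve_data(intent: dict) -> dict:
    # 2. Fetch entities, timepoints, and exposure events from the database
    return {"resolution": "TENSOR_ONLY", "events": ["inauguration"]}

def elevate_resolution(context: dict) -> dict:
    # 3. Lazily elevate detail only when the query demands it
    context["resolution"] = "SCENE"
    return context

def synthesize_response(intent: dict, context: dict) -> str:
    # 4. In the real system an LLM writes the natural-language answer
    return f"{intent['target_entity']} knowledge at {context['resolution']} resolution"

def answer_query(query: str) -> str:
    intent = parse_query(query)
    context = retrieve_data(intent)
    if context["resolution"] == "TENSOR_ONLY":
        context = elevate_resolution(context)
    return synthesize_response(intent, context)

print(answer_query("What did George Washington think about becoming president?"))
```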

Special Commands

help
Show available commands and example queries
Query: help

status
Display simulation statistics:
  • Total entities
  • Total timepoints
  • Latest timepoint details
  • Resolution distribution
  • Cost and token usage
Query: status

models
Enter model management interface:
  • List available models
  • Switch models
  • Test model connectivity
  • View model details
Query: models

exit / quit / q
Exit interactive mode
Query: exit

Example Session

Query: What did George Washington think about becoming president?

  Parsing query...
  Intent: knowledge about george_washington (confidence: 0.9)
  Synthesizing response...

Response:
George Washington held mixed feelings about assuming the presidency. Based on
his exposure events from the inauguration ceremony (1789-04-30):

1. **Sense of duty** (confidence: 0.95)
   Source: Witnessed public expectation at Federal Hall
   "The weight of responsibility is immense, but the nation requires steady
   leadership to unite the states."

2. **Personal reluctance** (confidence: 0.92)
   Source: Private reflections recorded during oath ceremony
   "I would have preferred to remain at Mount Vernon, but duty calls."

3. **Concerns about precedent** (confidence: 0.89)
   Source: Cabinet discussions on April 30
   "Every action I take will set precedent for future presidents. This
   requires careful consideration."

Cost so far: $0.0045

Query: status

Simulation Status:
  Entities: 5
  Timepoints: 5
  Total cost: $0.0234
  Tokens used: 1,567
  Latest timepoint: tp_005
    Event: Jefferson expresses concerns about centralized power...
  Resolution distribution:
    SCENE: 3 entities
    DIALOG: 2 entities

Query: exit

Goodbye! 👋

Lazy Resolution Elevation

Interactive mode automatically elevates entity resolution when needed:
  • TENSOR_ONLY: entity stored with compressed tensors only (~200 tokens)
  • A query arrives requiring a detailed knowledge state
This implements Mechanism 5 (Lazy Resolution) - detail generated only when queried.
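A minimal sketch of one-way elevation, assuming the resolution levels form an ordered ladder (the enum values and `elevate` helper here are illustrative, not the project's actual API):

```python
from enum import IntEnum

class Resolution(IntEnum):
    # Ordered so comparisons express "more detailed than"
    TENSOR_ONLY = 0
    SCENE = 1
    DIALOG = 2
    FULL_CONTEXT = 3

def elevate(current: Resolution, required: Resolution) -> Resolution:
    """Raise resolution one-way: an entity is never downgraded."""
    return max(current, required)

print(elevate(Resolution.TENSOR_ONLY, Resolution.SCENE).name)  # SCENE
print(elevate(Resolution.DIALOG, Resolution.SCENE).name)       # DIALOG
```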

Query Caching

To reduce costs, identical queries are cached for 1 hour:
CACHE_TTL = timedelta(hours=1)

# Cache key: query + intent
cache_key = hash(query + target_entity + information_type)

# Check cache before LLM call
if cached_response := get_cached_response(cache_key):
    return cached_response
Cache statistics:
query_interface.get_cache_stats()
# Returns: {'total_entries': 15, 'valid_entries': 12, 'expired_entries': 3}
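The snippet above abbreviates the mechanism. A self-contained version might look like the following; the function names and lazy eviction are assumptions for illustration, not the project's actual API:

```python
import time
from typing import Optional

CACHE_TTL_SECONDS = 3600  # matches CACHE_TTL = timedelta(hours=1) above
_cache: dict = {}

def cache_key(query: str, target_entity: str, information_type: str) -> int:
    # Key combines the raw query text with the parsed intent fields
    return hash((query, target_entity, information_type))

def put_cached_response(key: int, response: str) -> None:
    _cache[key] = (time.time(), response)

def get_cached_response(key: int) -> Optional[str]:
    entry = _cache.get(key)
    if entry is None:
        return None
    stored_at, response = entry
    if time.time() - stored_at > CACHE_TTL_SECONDS:
        del _cache[key]  # expired; evict lazily on access
        return None
    return response

k = cache_key("What did Washington think?", "george_washington", "knowledge")
put_cached_response(k, "Washington felt a strong sense of duty.")
print(get_cached_response(k) is not None)  # True
```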

Cost Tracking

Each query displays incremental cost:
Cost so far: $0.0045
Typical query costs:
  • Simple knowledge query: $0.002-0.005
  • Complex relationship query: $0.005-0.010
  • Temporal evolution query: $0.010-0.020
  • Query with resolution elevation: +$0.010-0.050
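A running total like "Cost so far" can be kept with a tiny accumulator. This is an illustrative sketch, not the project's actual tracker:

```python
class CostTracker:
    """Accumulates per-query LLM spend so the prompt can show 'Cost so far'."""

    def __init__(self) -> None:
        self.total_usd = 0.0

    def add(self, usd: float) -> None:
        self.total_usd += usd
        print(f"Cost so far: ${self.total_usd:.4f}")

tracker = CostTracker()
tracker.add(0.0045)  # prints: Cost so far: $0.0045
```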

Configuration

Interactive mode uses standard Hydra config:
conf/config.yaml
mode: interactive

database:
  url: sqlite:///timepoint.db

llm:
  base_url: https://openrouter.ai/api/v1
  model: meta-llama/llama-3.1-70b-instruct
  temperature: 0.1  # Low temperature for consistent parsing

Override Model

# Use different model for interactive mode
python cli.py mode=interactive llm.model=deepseek/deepseek-chat

# Adjust temperature
python cli.py mode=interactive llm.temperature=0.3

Advanced Features

Counterfactual Queries

Supports “what-if” branching scenarios:
"What if Hamilton was absent from the inauguration?"
"What would happen if Jefferson arrived early?"
"Imagine if the cabinet meeting was cancelled"
Query intent fields:
QueryIntent(
    is_counterfactual=True,
    intervention_type="entity_removal",
    intervention_target="alexander_hamilton",
    intervention_description="Hamilton absent from inauguration"
)
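Combining the fields shown here with those from the pipeline section, QueryIntent could be modeled roughly as a dataclass. The defaults and optional types are assumptions; the real class may use Pydantic and carry more fields:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QueryIntent:
    # Structured intent extracted from a natural language query (sketch)
    target_entity: str
    information_type: str = "knowledge"   # e.g. "knowledge", "feelings", "actions"
    context_entities: list = field(default_factory=list)
    confidence: float = 0.0
    reasoning: str = ""
    # Counterfactual extensions
    is_counterfactual: bool = False
    intervention_type: Optional[str] = None
    intervention_target: Optional[str] = None
    intervention_description: Optional[str] = None

intent = QueryIntent(
    target_entity="alexander_hamilton",
    is_counterfactual=True,
    intervention_type="entity_removal",
    intervention_target="alexander_hamilton",
    intervention_description="Hamilton absent from inauguration",
)
print(intent.is_counterfactual)  # True
```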

Multi-Entity Queries

"How did Washington, Adams, and Jefferson interact during the ceremony?"
Returns:
  • Multi-entity relationships
  • Interaction patterns
  • Social dynamics

Attribution Tracking

All responses include source attribution:
Source: Witnessed public expectation at Federal Hall
Timestamp: 1789-04-30 12:00:00
Confidence: 0.95
Timepoint: tp_001
This ensures knowledge provenance - you can trace every fact back to its exposure event.
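An attribution record like the one above could be represented as follows; the field names mirror the output shown, and the frozen dataclass is an illustrative choice rather than the project's schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Attribution:
    # Links a fact in a response back to the exposure event that produced it
    source: str
    timestamp: datetime
    confidence: float
    timepoint: str

attr = Attribution(
    source="Witnessed public expectation at Federal Hall",
    timestamp=datetime(1789, 4, 30, 12, 0, 0),
    confidence=0.95,
    timepoint="tp_001",
)
print(attr.timepoint)  # tp_001
```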

Help Text

Available commands:
  help, h, ?     Show this help
  status         Show simulation status and statistics
  models         Manage LLM models (list, switch, test)
  exit, quit, q  Leave the interactive interface

Query examples:
  "What did George Washington think about becoming president?"
  "How did Thomas Jefferson feel during the inauguration?"
  "What actions did Alexander Hamilton take after the ceremony?"
  "Tell me about James Madison's thoughts on the new government"
  "What was John Adams' reaction to the presidential oath?"

The system will automatically:
- Parse your natural language query
- Identify relevant entities and timepoints
- Elevate resolution if needed for detailed responses
- Provide attribution showing knowledge sources
- Track query history for better future responses

Note: The system uses causal temporal simulation where entities evolve
over timepoints.

Error Handling

When queries fail, the system provides helpful feedback:
Error processing query: Entity not found: 'benjamin_franklin'
Available entities: george_washington, john_adams, thomas_jefferson,
                   alexander_hamilton, james_madison

Try again or type 'help' for guidance.

Integration with Other Modes

Interactive mode works best after training:
1. Train Entities

python cli.py mode=temporal_train training.context=founding_fathers_1789
2. Query Results

python cli.py mode=interactive
3. Evaluate Quality

python cli.py mode=evaluate

Next Steps

  • Training - Populate entities with knowledge
  • Evaluation - Validate entity quality
  • Resolution Engine - Learn about lazy resolution
  • CLI Overview - Back to the CLI overview
