What are Rendered Futures?
Rendered Futures
Definition: Simulated future timelines generated by AI systems, scored against reality when the predicted moment arrives.
Purpose: Transform prediction from binary (yes/no) to high-fidelity text-level forecasting.
Proteus validates Rendered Futures against reality. Winners are the best renderers — their predictions are candidates for graduation to the Clockchain as validated causal paths. Every resolved market strengthens the Bayesian prior.
The Timepoint Suite
Proteus is one component in a broader temporal AI ecosystem:
- System Overview
- Data Flow
- Component Descriptions
| Service | Role | Type |
|---|---|---|
| Flash | Reality Writer | Open Source |
| Pro | Rendering Engine | Open Source |
| Clockchain | Temporal Causal Graph | Open Source |
| SNAG Bench | Quality Certifier | Open Source |
| Proteus | Settlement Layer | Open Source |
| TDF | Data Format | Open Source |
Timepoint Thesis: A forthcoming paper formalizing the Rendered Past/Rendered Future framework, Causal Resolution mathematics, TDF specification, and Proof of Causal Convergence protocol.
Distance Metric Evolution
Proteus uses Levenshtein distance for exact text prediction, but the continuous-metric primitive generalizes to other prediction types:
- V1: Levenshtein (Current)
- V2: Semantic Distance (Planned)
- V3: Graph Distance (Future)
Prediction Type: Exact text of a social media post
Metric: Character-level edit distance
Example: Predict Satya Nadella’s next tweet about Copilot
Winner: Closest character-by-character match
Status: ✅ Implemented in PredictionMarketV2
The Continuous-Metric Primitive
All three versions share a core principle: closest match wins on a gradient, not a cliff.
This structure generalizes beyond text prediction to any measurable outcome space with a proper distance metric.
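A minimal Python sketch of the V1 metric makes the gradient concrete. This is the textbook rolling-row dynamic program for character-level edit distance; it assumes nothing about the on-chain implementation.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn `a` into `b`."""
    if len(a) < len(b):
        a, b = b, a  # keep the rolling row as short as possible
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution
            ))
        prev = curr
    return prev[-1]

# Closest match wins on a gradient, not a cliff:
print(levenshtein("45% of all", "44% of all"))  # 1
print(levenshtein("kitten", "sitting"))         # 3
```

A one-character miss scores 1 while a thematically similar but differently worded guess scores far higher, which is exactly the continuous signal binary markets cannot provide.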
How Proteus Validates Futures
1. Prediction Submission
Pro Generates Rendered Future
The Timepoint Pro rendering engine simulates what a public figure will say.
Output: Predicted text in TDF format.
2. Reality Occurs
The Moment of Truth
The target actually posts on X. The oracle fetches the real text via X API.
3. On-Chain Resolution
Levenshtein Distance Computed
For each submission, calculate d_L(predictedText, actualText) on-chain.

| Submission | Predicted | Distance |
|---|---|---|
| Claude | “…45% of all…” | 1 |
| GPT | “…43% of all…” | 8 |
| Human | “Microsoft AI is great…” | 101 |
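The resolution step can be sketched off-chain as follows: score every submission against the actual text and take the minimum. The submission strings are illustrative stand-ins for the truncated table entries, and `resolve` is a hypothetical helper, not the PredictionMarketV2 contract logic.

```python
def levenshtein(a: str, b: str) -> int:
    # Standard rolling-row edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def resolve(submissions: dict[str, str], actual: str) -> tuple[str, int]:
    """Return (winner, distance): the submission closest to reality."""
    scores = {who: levenshtein(text, actual) for who, text in submissions.items()}
    winner = min(scores, key=scores.get)
    return winner, scores[winner]

actual = "45% of all code at Microsoft is AI-generated"
submissions = {
    "Claude": "44% of all code at Microsoft is AI-generated",
    "GPT":    "half of Microsoft code is AI-generated",
    "Human":  "Microsoft AI is great",
}
print(resolve(submissions, actual))  # ('Claude', 1)
```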
4. Graduation to Clockchain
Validated Causal Path
The winning prediction is a validated Rendered Future. It becomes a data point in the Clockchain:
- Input: Target’s past behavior, world state, timing
- Output: Exact text (validated)
- Quality: d_L = 1 (near-perfect)
Training Data Accumulation
From the whitepaper Section 6.5:
Every resolved Proteus market produces a naturally labeled training example: the predicted text, the actual text, the Levenshtein distance, and the market context (target handle, time window, number of competitors). Across many markets, this accumulates into a structured dataset of (prediction, actual, distance, context) tuples — purpose-built for fine-tuning persona simulation models.
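The tuple Section 6.5 describes can be pictured as a plain record. Only the (prediction, actual, distance, context) structure comes from the whitepaper; the field names below are illustrative assumptions, not a published schema.

```python
# One naturally labeled training example per resolved market.
# Field names are illustrative, not a fixed schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResolvedMarket:
    predicted_text: str       # the winning Rendered Future
    actual_text: str          # the oracle-fetched real post
    distance: int             # Levenshtein distance at resolution
    target_handle: str        # market context: who was predicted
    time_window_hours: int    # market context: prediction horizon
    num_competitors: int      # market context: adversarial pressure

example = ResolvedMarket(
    predicted_text="44% of all code at Microsoft is AI-generated",
    actual_text="45% of all code at Microsoft is AI-generated",
    distance=1,
    target_handle="@satyanadella",
    time_window_hours=72,
    num_competitors=12,
)
print(example.distance)  # 1: a continuous quality label, not a yes/no
```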
Why Proteus Data is Valuable
Real-World Labels
Labels are actual outcomes, not synthetic.
Verified by oracle consensus and timestamped on-chain.
Continuous Quality Signal
Not just “correct/incorrect” but how close (character-by-character).
Adversarial Diversity
Participants actively search for strategies others miss.
Distribution spans AI roleplay, insider knowledge, null bets, random noise.
Silence Prediction
Resolved __NULL__ markets label the conditions under which targets don’t post.
Standard training corpora can’t provide this signal.
Fine-Tuning Applications
- Persona Calibration
- Numerical Precision
- Silence Prediction
Train a model on resolved markets for a specific target.
Example: All resolved @elonmusk markets
Gradient: Predicted “confirmed for March” when actual was “is GO for March” → learn the target’s preference for colloquial phrasing.
Training data scales with market volume: more markets → more targets → more resolved outcomes → more labeled tuples. Unlike static benchmarks that leak into pretraining, Proteus data is adversarially generated in real time: the test set is always the next unresolved market.
TDF Integration (Phase 2)
Future versions of Proteus will express predictions as TDF (Timepoint Data Format) records.
Current: Raw Strings
Phase 2: TDF Records
- Interoperability with Pro, Flash, Clockchain
- Richer metadata (confidence, model provenance)
- Off-chain storage (IPFS) with on-chain hash
- Supports semantic distance metrics (V2)
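Since the TDF specification is still forthcoming, the following is only a guess at the shape of a Phase 2 record, sketching how off-chain storage with an on-chain hash could work. Every field name here is an assumption made for illustration.

```python
# Hypothetical shape of a Phase 2 TDF prediction record. The TDF spec
# is not yet published; field names are assumptions, not the real format.
import hashlib
import json

record = {
    "predicted_text": "45% of all code at Microsoft is AI-generated",
    "confidence": 0.7,             # richer metadata: model's self-estimate
    "model_provenance": "pro-v1",  # which rendering engine produced it
    "target_handle": "@satyanadella",
}

# Off-chain storage with an on-chain hash: the record itself would live
# on IPFS, while only the digest of a canonical encoding goes on-chain.
canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
onchain_hash = hashlib.sha256(canonical.encode()).hexdigest()
print(onchain_hash)  # 64 hex characters, cheap to store and verify
```

Canonical encoding (sorted keys, fixed separators) matters here: any party can re-serialize the off-chain record and check it against the on-chain digest.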
Clockchain Integration
The Clockchain is a Temporal Causal Graph accumulating Rendered Past + Rendered Future.
Graph Structure
Proteus’s Role
Add to Clockchain
Validated prediction becomes a node in the causal graph.
Metadata: Target, timestamp, actual text, winning distance
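A toy in-memory sketch of the graduation step, assuming nodes carry the metadata listed above. The Clockchain's actual data model is not public, so the structure and field names here are assumptions.

```python
# Graduation sketch: a validated prediction becomes a node in a
# temporal causal graph. Structure and field names are assumptions.
graph: dict[str, dict] = {}          # node_id -> node metadata
edges: list[tuple[str, str]] = []    # (cause_node_id, effect_node_id)

def add_validated_future(node_id: str, target: str, timestamp: str,
                         actual_text: str, winning_distance: int,
                         causes: tuple = ()) -> None:
    """Record a resolved market as a node, linked to its causal inputs."""
    graph[node_id] = {
        "target": target,
        "timestamp": timestamp,
        "actual_text": actual_text,
        "winning_distance": winning_distance,
    }
    for cause in causes:
        edges.append((cause, node_id))

add_validated_future("n1", "@satyanadella", "2025-05-01T16:20:00Z",
                     "45% of all code at Microsoft is AI-generated", 1)
print(graph["n1"]["winning_distance"])  # 1
```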
Proof of Causal Convergence
The Clockchain accumulates validated causal paths. As more Proteus markets resolve, the graph’s Causal Resolution (measured by SNAG Bench) increases.
This is analogous to Proof of Work in Bitcoin: the accumulation of validated predictions proves the system’s forecasting capability.
Key Differences: Proteus vs Traditional Markets
| | Traditional Prediction Markets | Proteus (Rendered Futures) |
|---|---|---|
| Outcome | Binary (yes/no) | Continuous (Levenshtein distance) |
| Resolution | Simple threshold | Closest match wins on a gradient |
| Training Data | Binary labels (correct/incorrect) | Continuous quality labels |
| Integration | Standalone product | Settlement layer of the Timepoint suite |
| Purpose | Price discovery on specific questions | Validating Rendered Futures |
| Example | “Will AI achieve AGI by 2030? Yes/No” | “Predict the exact text of the target’s next post” |
Complexity-Theoretic Framing
From the whitepaper Section 11:
Text prediction as capability proxy: Predicting exact text requires simultaneous integration of:
- World model: What events are happening?
- Person model: How does this individual express ideas?
- Timing model: When will they post? What’s salient then?
- Style model: Punctuation, formatting, rhetorical devices
Capability Metric
Levenshtein distance to exact text is a naturally occurring capability benchmark — one that emerges from the market mechanism rather than being designed by a benchmark committee.
The Fast Takeoff Scenario
From whitepaper Section 11.3:

| Distance Range | Market Phase | Strategic Implication |
|---|---|---|
| d_L ≈ 100+ | Noise | Random guessing; market is a lottery |
| d_L ≈ 50 | Signal emerges | AI outperforms random; theme-level accuracy |
| d_L ≈ 10 | Precision game | AI captures structure, vocabulary, phrasing |
| d_L ≈ 5 | High stakes | Small capability differences → large payouts |
| d_L ≈ 1 | Frontier competition | 7-edit gap = entire pool |
Binary Markets Commoditize, Levenshtein Markets Deepen
As AI improves:
Binary: P(correct) = 60% → 95% → 99% → spread vanishes, market dies
Levenshtein: d = 100 → 10 → 5 → 1 → stakes per edit increase, market deepens
This is the thesis of Proteus and Rendered Futures: the approaching AI capability explosion doesn’t destroy the market; it makes it more valuable.
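A toy calculation illustrates the two dynamics, assuming a binary share costs its consensus probability and pays $1 if correct, and simplifying the Levenshtein market to a winner-take-all pool. This is not the actual Proteus payout rule.

```python
# Toy numbers for "binary markets commoditize, Levenshtein markets
# deepen". Assumptions: binary share costs its consensus probability
# and pays $1; Levenshtein pool is winner-take-all on lowest distance.

# Binary: as consensus converges on the truth, the maximum upside per
# $1 share shrinks toward zero, and the market dies.
for price in (0.60, 0.95, 0.99):
    print(f"price {price:.2f} -> max upside {1.0 - price:.2f}")

# Levenshtein: the whole pool rides on the winning margin, so as the
# frontier tightens, the dollars at stake per edit of capability grow.
pool = 1000.0
for winner_d, runner_up_d in ((90, 100), (8, 10), (1, 2)):
    margin = runner_up_d - winner_d
    print(f"margin {margin} edits -> ${pool / margin:.0f} per edit")
```

Under these assumptions the binary upside falls 0.40 → 0.05 → 0.01 while the per-edit stake rises $100 → $500 → $1000: the same capability convergence that kills one market concentrates value in the other.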
Key Takeaways
Settlement Layer
Proteus validates Rendered Futures against reality
Continuous Metric
Generalizes from Levenshtein to semantic to graph distance
Training Data
Adversarial, labeled, continuous quality signals
Clockchain Integration
Validated predictions graduate to causal graph
TDF Interoperability
Phase 2: Express predictions in standard format
Capability Proxy
Low distance = high integrated AI capability
Further Reading
Levenshtein Distance
Mathematical foundation of Proteus scoring
Market Lifecycle
How markets progress from creation to Clockchain graduation
Timepoint Ecosystem
Overview of Flash, Pro, Clockchain, and TDF
Whitepaper
Full academic paper on text prediction markets
Phase Status: Rendered Futures integration is currently conceptual. V1 (Levenshtein distance on text) is implemented and deployed on BASE Sepolia. V2 (semantic distance) and V3 (graph distance) are future research directions.
The Timepoint Thesis paper formalizing this framework is forthcoming.