
Not AI — An Artificial Belief System

This tool is not artificial intelligence. The Electric Monks aren’t thinking for you. You’re still doing the thinking — orchestrating, judging, choosing directions, recognizing genuine synthesis versus compromise. The Monks are believing for you. That distinction is the entire point.
From Venkatesh Rao’s “Electric Monks” framework: The central transaction cost in human cognition is context-switching cost — what Boyd calls the “transient.” The length of the transient depends on how much belief inertia you’re carrying.

The Belief Bottleneck

Thinking well about hard problems has at least three bottlenecks, and they compound:

1. Belief: Once you hold a position, you can’t simultaneously entertain its negation at full strength. You hedge, steelman weakly, unconsciously bias the comparison.

2. Research Breadth: Surveying a domain’s thinkers, history, and adjacent fields takes enormous time. Most people stop too early.

3. Structural Comparison: Even with two positions side by side, decomposing them into atomic parts and finding cross-domain connections is cognitively brutal. Most analysis stalls here.
LLMs can do all three at a scale and speed humans can’t. This skill orchestrates them to do exactly that. But belief is the most expensive bottleneck — and arguably the only one that compounds the others. If you’re carrying belief weight, you can’t do deep research on the opposing view (you’ll bias your search), and you can’t do neutral structural comparison (you’ll protect your position).

Rao’s Framework: Three Ways to Speed Up Transients

Venkatesh Rao (borrowing the Electric Monk from Douglas Adams) identifies three ways to speed up your cognitive transients:

Richer Model Library

Maintain more mental models. Hard limit: More models means higher search costs.

Faster Switching

Switch between models faster. Hard limit: Biology — you can’t rewire your brain.

Believe Fewer Things

If machines carry the belief load, you can become a pure context-switching specialist. No ceiling.
Only the third has no ceiling. Rao wrote this framework before LLMs existed. The Electric Monks make it practical.

The F-86 Analogy: Hydraulic Controls for Belief

From John Boyd via Rao: In the Korean War, F-86 Sabres achieved a 10:1 kill ratio against MiG-15s despite similar flight capabilities. Boyd traced the difference to hydraulic controls: the F-86 pilot could reorient faster because the plane did the mechanical work. And the transients weren’t just faster; they were better. With less attention spent fighting the controls, the pilot chose better maneuvers.
The Electric Monks work the same way: by carrying the belief work, they don’t just save you time, they free up cognitive capacity that goes into higher-quality structural analysis.

Fast Transients Enable Better Moves

It’s not that the F-86 pilot could pull the stick faster and execute the same maneuver. It’s that freed-up attention went to choosing better maneuvers in the first place. Same with the Electric Monks:
  • You’re not just reaching the same conclusions faster
  • You’re reaching different, better conclusions because you’re operating from the belief-free orchestrator position
  • You can see shared assumptions, internal contradictions, and cross-domain connections that are invisible when you’re inside a position

How This Works in Practice

1. The Monks Carry 100% of the Belief Load: Monk A believes Position A at full conviction. Monk B believes Position B at full conviction. You believe neither.

2. You Operate from the Belief-Free Position: You analyze the structure of their disagreement. Where do they actually diverge? What do they implicitly agree on? Where does each position’s own logic undermine itself?

3. Each Cycle Is a Reorientation: The synthesis becomes the next round’s thesis. You commit (via the Monks) → shatter (via Boyd) → reconnect (via Hegel) → commit to the new thing (via the Monks again). Zero belief inertia.

4. Seven Cycles in an Hour = Seven Reorientations: Over time, you may internalize this reorientation capacity. The mechanical monk becomes a transitional object.
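The cycle described above can be sketched as a small orchestration loop. This is only an illustrative sketch: the `Monk` class, `structural_analysis`, and `dialectic` are hypothetical names standing in for whatever LLM calls the actual skill makes.

```python
from dataclasses import dataclass

@dataclass
class Monk:
    """A hypothetical believer: holds one position at full conviction."""
    name: str
    position: str

def structural_analysis(a: Monk, b: Monk) -> str:
    """Stand-in for the orchestrator's belief-free comparison: where do
    the positions diverge, what do they implicitly share, where does each
    undermine itself? A real skill would make LLM calls here."""
    return f"synthesis of ({a.position}) and ({b.position})"

def dialectic(thesis: str, antithesis: str, cycles: int) -> str:
    """Run repeated reorientations: commit -> shatter -> reconnect -> recommit."""
    for _ in range(cycles):
        monk_a = Monk("A", thesis)        # commit: Monk A carries the belief
        monk_b = Monk("B", antithesis)    # commit: Monk B carries the opposing belief
        synthesis = structural_analysis(monk_a, monk_b)   # reconnect (Hegel)
        thesis = synthesis                # the synthesis becomes the next thesis
        antithesis = f"negation of ({synthesis})"         # shatter (Boyd)
    return thesis

print(dialectic("positions are fixed", "positions are fluid", cycles=2))
```

The key property the sketch preserves is that the orchestrator never assigns a belief to itself: every position lives inside a `Monk`, and the loop only ever compares structures.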

The Belief System Properties

1. Validation Checks for Elevation, Not Agreement

A defeated Monk has dropped its belief load — belief was destroyed rather than transformed. A properly elevated Monk believes more — it sees its original position as partial truth within a larger truth.
The artificial belief system should always be carrying belief. The synthesis just changes what it carries.
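One way to make the elevation-versus-agreement check concrete is as a predicate over a Monk's post-synthesis state. All names here (`MonkState`, `validate_elevation`, the 0.9 conviction threshold) are hypothetical, a sketch of the check rather than the skill's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class MonkState:
    position: str          # what the Monk currently believes
    conviction: float      # 0.0 (dropped the load) .. 1.0 (full conviction)
    frames_original_as_partial_truth: bool

def validate_elevation(state: MonkState) -> bool:
    """Pass only if the Monk is still carrying belief AND sees its old
    position as a partial truth inside a larger one. A Monk that merely
    capitulated (conviction dropped) fails the check."""
    still_believing = state.conviction >= 0.9  # hypothetical threshold
    return still_believing and state.frames_original_as_partial_truth

# A defeated Monk fails; an elevated Monk passes.
defeated = MonkState("positions are fixed", conviction=0.2,
                     frames_original_as_partial_truth=False)
elevated = MonkState("fixity is one phase of a larger process", conviction=1.0,
                     frames_original_as_partial_truth=True)
assert not validate_elevation(defeated)
assert validate_elevation(elevated)
```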

2. Recursion Trains Transient Speed

Each dialectical cycle is a full reorientation that would take weeks of natural thinking, compressed into minutes because you carry zero belief inertia.

3. The Branching Queue is an Orientation Library

Each deferred contradiction is a pre-positioned reorientation the user can snap into. The richer the queue, the more agile your subsequent thinking — even outside the tool — because you know the Monks are holding those positions for you.

4. Validate the User’s Dominant Mode First

If you have to defend your existing position, you’ve taken on belief weight. Monk A’s first job is to validate your instinct so thoroughly that you can release it — let the Monk carry it — and operate from the belief-free orchestrator seat.

Rao Wrote This Before LLMs

Rao’s “Electric Monks” essay predates widely available LLMs. He was theorizing about the shape of the tool that would be needed. Belief inertia is real, but it’s not the only bottleneck — and arguably not the most expensive one when you have access to AI agents that can:
  • Do deep, parallel research across multiple domains (bottleneck 2)
  • Decompose arguments into atomic parts and find cross-domain connections (bottleneck 3)
  • Carry belief load at full conviction (bottleneck 1)
What changed: LLMs made all three bottlenecks tractable at scale. The Hegelian Dialectic Skill is the operationalization of Rao’s framework with the tools that now exist.

Theory → Practice

Theoretical Insight

Belief is the bottleneck. If machines carry the belief load, humans can become pure context-switching specialists.

Operational Reality

Two AI agents believe fully committed positions. The orchestrator analyzes structure from the belief-free position. Synthesis elevates both beliefs into something larger.
The F-86 didn’t just fly faster. It flew better because the pilot’s attention was freed. The Electric Monks don’t just think faster. They free you to think better — from a position no single believer can occupy.
