The routing engine in Claw Code maps a natural-language prompt to the commands and tools most likely to handle it. It tokenizes the prompt, scores every entry in the mirrored command and tool inventories, and then ranks the results, guaranteeing that the top command and top tool appear first before the remaining slots are filled by score.

How it works

PortRuntime.route_prompt() drives the entire flow:
  1. The prompt is split into lowercase tokens (slashes and hyphens become spaces).
  2. Each token is checked against three fields for every registered command and tool: name, source_hint, and responsibility.
  3. Each field hit increments the score by 1.
  4. Commands and tools are ranked independently, highest score first.
  5. The selector guarantees one command and one tool appear in the results first, then fills the remaining slots from the combined leftover list sorted by score.
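The tokenization in step 1 can be sketched as follows. This is a minimal illustration of the behaviour described above (lowercasing, with slashes and hyphens treated as spaces); `tokenize` is a hypothetical helper name, not necessarily the port's actual function:

```python
import re

def tokenize(prompt: str) -> list[str]:
    # Lowercase the prompt, turn slashes and hyphens into spaces,
    # then split on whitespace to produce the scoring tokens.
    return re.sub(r"[/-]", " ", prompt.lower()).split()

# A prompt mentioning a slash-command still yields plain word tokens:
tokenize("Run /bash-command")  # ["run", "bash", "command"]
```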

Scoring in detail

# src/runtime.py — PortRuntime._score()
haystacks = [module.name.lower(), module.source_hint.lower(), module.responsibility.lower()]
score = 0
for token in tokens:
    if any(token in haystack for haystack in haystacks):
        score += 1
A token that appears in all three fields still only adds 1 to the score. Scoring rewards breadth of term coverage across the prompt, not repetition within a single field.

CLI: route

python3 -m src.main route "<prompt>" [--limit N]
Each match is printed as a tab-separated line:
kind\tname\tscore\tsource_hint
Example:
python3 -m src.main route "run bash command" --limit 5
command	bash	2	commands/BashCommand.ts
tool	bash_tool	2	tools/BashTool.ts
command	execute	1	commands/ExecuteCommand.ts
tool	run_script	1	tools/ScriptTool.ts
If no tokens match any entry, the output is the single line "No mirrored command/tool matches found." and the process exits with status 0.

--limit N

Controls the maximum number of matches returned (default: 5). The first two slots are always reserved for the top command and the top tool (if any). The remainder are filled from the combined leftover pool ranked by score descending.
python3 -m src.main route "edit file" --limit 10

Command vs. tool matches

Commands

Entries sourced from PORTED_COMMANDS — high-level operations like /bash, /edit, /memory. They map to user-facing slash-commands in the Claude Code surface.

Tools

Entries sourced from PORTED_TOOLS — lower-level execution primitives like bash_tool, read_file, write_file. They are invoked programmatically during agent turns.
The routing output marks each match with its kind field so callers can tell them apart at a glance.
Any tool whose name contains bash is automatically added to the permission-denial list by _infer_permission_denials(). The match still appears in routing output, but the tool is flagged as gated in the Python port.

CLI: bootstrap

bootstrap runs a full session from a single prompt: it routes, executes shims, streams events, and persists a session file.
python3 -m src.main bootstrap "<prompt>" [--limit N]
  1. Build context: build_port_context() captures workspace state (Python file count, archive availability).
  2. Run workspace setup: run_setup(trusted=True) collects Python version, platform, test command, and startup steps.
  3. Route the prompt: PortRuntime.route_prompt() produces the ranked match list.
  4. Execute shims: the execution registry fires command and tool shims for every matched entry that has a registered executor.
  5. Stream and submit: stream_submit_message() emits structured events (message_start, command_match, tool_match, permission_denial, message_delta, message_stop), then submit_message() records the turn.
  6. Persist session: engine.persist_session() flushes the transcript and writes a .json file under .port_sessions/.
Output is a Markdown report covering context, setup, routing, execution, stream events, and the turn result.

CLI: turn-loop

turn-loop runs the same routing logic over multiple successive turns.
python3 -m src.main turn-loop "<prompt>" [--max-turns N] [--structured-output]
Flag                 Default  Description
--max-turns N        3        Maximum turns to execute
--structured-output  off      Emit JSON-formatted turn output instead of plain text
Each turn prints:
## Turn 1
Prompt: <original prompt>
Matched commands: <names>
Matched tools: <names>
Permission denials: 0
stop_reason=completed

## Turn 2
Prompt: <original prompt> [turn 2]
...
The loop stops early if stop_reason is anything other than completed (e.g. max_turns_reached or max_budget_reached).
Use --structured-output to get JSON-formatted turn summaries suitable for piping into other tools.

Structured output example

python3 -m src.main turn-loop "read a file" --max-turns 2 --structured-output
## Turn 1
{
  "summary": [
    "Prompt: read a file",
    "Matched commands: ...",
    "Matched tools: ...",
    "Permission denials: 0"
  ],
  "session_id": "a3f9..."
}
stop_reason=completed
