Agents can only use your CLI if they know it exists. incur solves this with three built-in discovery mechanisms—no manual config, no copy-pasting tool definitions.

Three Discovery Mechanisms

incur provides three ways for agents to discover and use your CLI:
  1. Skills (recommended – lighter on tokens)
  2. MCP server (Model Context Protocol)
  3. --llms flag for direct manifest inspection

Skills

Skills provide the most token-efficient discovery mechanism. They load command metadata on-demand rather than injecting everything at session start.

Add Skills

my-cli skills add

This command:
  • Auto-generates skill files from your commands
  • Installs them to the agent’s skills directory
  • Splits by command group for on-demand loading
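
For reference, each generated skill file is a Markdown document whose YAML frontmatter carries the name and description loaded at session start. The exact layout incur emits may differ; this is an illustrative sketch:

```markdown
---
name: my-cli-install
description: Install and manage packages with my-cli
---

## install

Install a package

**Usage:** `my-cli install [package]`
```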

How It Works

Skills use a two-phase loading strategy:
  • Session start — only frontmatter (name + description) is loaded
  • On demand — full command details load when the agent needs them
This approach uses up to 29.7× fewer tokens during discovery compared to loading a monolithic skill file.
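
The two-phase strategy can be sketched in a few lines of TypeScript. This is not incur's implementation, just an illustration of the idea: parse only the frontmatter block of each skill file up front, and read the full body lazily.

```typescript
// Sketch of two-phase skill loading (illustrative, not incur's actual code).
type Skill = { name: string; description: string; body: () => string }

// Phase 1: at session start, expose only frontmatter (name + description).
function loadFrontmatter(file: string): Skill {
  const match = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/.exec(file)
  if (!match) throw new Error('missing frontmatter')
  const meta = Object.fromEntries(
    match[1].split('\n').map((line) => {
      const i = line.indexOf(':')
      return [line.slice(0, i).trim(), line.slice(i + 1).trim()] as [string, string]
    }),
  )
  // Phase 2: the full body is only produced when the agent asks for it.
  return { name: meta.name, description: meta.description, body: () => match[2].trim() }
}

const file = `---
name: my-cli-install
description: Install packages
---
**Usage:** \`my-cli install [package]\``

const skill = loadFrontmatter(file)
// Session start sees only the cheap metadata...
console.log(skill.name, '-', skill.description)
// ...while the expensive body loads on demand.
console.log(skill.body())
```

The payoff is that a session with many commands pays only for names and descriptions until the agent actually needs a command's full details.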

Agent Quickprompt

Tell your agent:
Run `npx incur skills add`, then show me how to build CLIs with incur.

MCP Server

Register your CLI as an MCP (Model Context Protocol) server so agents can invoke commands as tools.

Add MCP

my-cli mcp add

This registers your CLI with supported agents:
  • Claude Code
  • Cursor
  • Windsurf
  • Amp
  • Cline
  • And more
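
Registration amounts to adding an entry to each agent's MCP configuration file. Many of these agents share the common `mcpServers` JSON layout; the entry would look roughly like the sketch below (the `command` and `args` values are assumptions for illustration, not incur's documented output):

```json
{
  "mcpServers": {
    "my-cli": {
      "command": "my-cli",
      "args": ["mcp"]
    }
  }
}
```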

MCP Quickprompt

Tell your agent:
Run `npx incur mcp add`, then show me how to build CLIs with incur.

MCP vs Skills

MCP injects all tool schemas into every turn, making it heavier on tokens. For a typical session with a 20-command CLI (values are token counts; Cost is in USD):
┌─────────────────┬────────────┬─────────┬───────────────┐
│                 │ MCP + JSON │   incur │ vs. incur     │
├─────────────────┼────────────┼─────────┼───────────────┤
│ Session start   │      6,747 │     805 │         ↓8.4× │
│ Discovery       │          0 │     387 │        ↓29.7× │
│ Invocation (×5) │        110 │      65 │         ↓1.7× │
│ Response (×5)   │     10,940 │   5,790 │         ↓1.9× │
├─────────────────┼────────────┼─────────┼───────────────┤
│ Cost            │    $0.0325 │ $0.0131 │         ↓3.1× │
└─────────────────┴────────────┴─────────┴───────────────┘
Skills are recommended for most use cases due to significantly lower token usage.

--llms Flag

Output a machine-readable manifest of all commands for direct inspection or custom integrations.

Markdown Format

my-cli --llms

Outputs a token-efficient Markdown manifest:
# my-cli

## install

Install a package

**Usage:** `my-cli install [package]`

**Arguments:**
- `package` (string, optional) – Package name

**Options:**
- `--saveDev` (boolean) – Save as dev dependency

JSON Schema Format

my-cli --llms json

Outputs JSON Schema for programmatic consumption:
{
  "tools": [
    {
      "name": "install",
      "description": "Install a package",
      "inputSchema": {
        "type": "object",
        "properties": {
          "package": {
            "type": "string",
            "description": "Package name"
          },
          "saveDev": {
            "type": "boolean",
            "description": "Save as dev dependency"
          }
        }
      }
    }
  ]
}
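
Because the JSON output mirrors the MCP tool-definition shape (`name`, `description`, `inputSchema`), it is easy to consume in a custom integration. For example, a script could parse the output of `my-cli --llms json` and list each tool with its parameters; the manifest object below is the sample from above:

```typescript
// Minimal consumer for an `--llms json` manifest (shape as documented above).
interface Tool {
  name: string
  description: string
  inputSchema: {
    type: string
    properties: Record<string, { type: string; description: string }>
  }
}

// Render one summary line per tool: name, parameter names, description.
function summarize(manifest: { tools: Tool[] }): string[] {
  return manifest.tools.map(
    (tool) =>
      `${tool.name}(${Object.keys(tool.inputSchema.properties).join(', ')}): ${tool.description}`,
  )
}

const manifest = {
  tools: [
    {
      name: 'install',
      description: 'Install a package',
      inputSchema: {
        type: 'object',
        properties: {
          package: { type: 'string', description: 'Package name' },
          saveDev: { type: 'boolean', description: 'Save as dev dependency' },
        },
      },
    },
  ],
}

console.log(summarize(manifest).join('\n'))
// → install(package, saveDev): Install a package
```

In a real integration the manifest would come from `JSON.parse` over the command's stdout rather than an inline literal.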

Implementation

All discovery mechanisms are automatically available on every incur CLI:
import { Cli } from 'incur'

Cli.create('my-cli', {
  description: 'My CLI',
})
  .command('status', {
    description: 'Show repo status',
    run() {
      return { clean: true }
    },
  })
  .serve()

No additional configuration needed. Every CLI gets:
  • my-cli skills add
  • my-cli mcp add
  • my-cli --llms
  • my-cli --llms json

Session Savings

Combining on-demand skill loading with TOON output, incur cuts token usage across the entire session—from discovery through invocation and response. In a typical session with a 20-command CLI:
  • 3.1× lower cost vs MCP + JSON
  • 3.1× lower cost vs monolithic skill + JSON
  • 8.4× fewer tokens at session start vs MCP
  • 29.7× fewer tokens during discovery vs monolithic skill
Use my-cli skills add for the best token efficiency and agent experience.
