The Pi backend connects Nuggets to Pi, an open-source local AI agent that communicates over JSONL on stdin/stdout. Pi runs as a persistent subprocess per conversation, resuming its session across restarts and accepting follow-up messages without losing context. The Pi backend is the default and provides the deepest Nuggets integration: native memory tools, graph note management, scheduling, and memory reflection are all available as structured tool calls.
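
To illustrate the newline-delimited JSON transport described above, here is a minimal framing sketch: one JSON object per line on stdin/stdout, with partial lines buffered until the next chunk arrives. The message shapes (`type`, `text`) are illustrative, not Pi's actual wire protocol.

```typescript
// Illustrative JSONL framing: each message is one JSON object per line.
interface JsonlMessage {
  type: string;
  text: string;
}

// Serialize a message to a single stdin line.
function encodeLine(msg: JsonlMessage): string {
  return JSON.stringify(msg) + "\n";
}

// Split a stdout chunk into complete messages, returning any trailing
// partial line so it can be prepended to the next chunk.
function decodeChunk(buffer: string): { messages: JsonlMessage[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? "";
  const messages = lines.filter((l) => l.trim() !== "").map((l) => JSON.parse(l));
  return { messages, rest };
}
```

Because each message is self-delimiting, the subprocess can stream replies incrementally without any length-prefix framing.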

Configuration

```bash
# .env
AGENT_BACKEND=pi
AGENT_PROVIDER=anthropic   # anthropic | openai | openai-codex
AGENT_MODEL=               # Leave empty for the provider default
```

Provider credentials

The Pi backend authenticates using a provider API key or Pi’s stored OAuth login.
```bash
# Anthropic (default provider)
ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
OPENAI_API_KEY=sk-...
```
If you set `AGENT_PROVIDER=openai` but have no `OPENAI_API_KEY` and no stored OpenAI login, Pi automatically checks for a stored `openai-codex` OAuth credential and remaps the provider to `openai-codex`. You can also run `pi` interactively and use `/login` to authenticate through the Pi UI.
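
The fallback rule above can be sketched as a small resolution function. The `AuthState` shape and function name are illustrative, not the gateway's actual internals:

```typescript
// Sketch of the openai -> openai-codex provider fallback described above.
type Provider = "anthropic" | "openai" | "openai-codex";

interface AuthState {
  apiKeys: Partial<Record<Provider, string>>; // e.g. from environment variables
  oauthLogins: Provider[];                    // providers with stored /login credentials
}

function resolveProvider(requested: Provider, auth: AuthState): Provider {
  const hasAuth = (p: Provider) =>
    Boolean(auth.apiKeys[p]) || auth.oauthLogins.includes(p);
  // No OpenAI key and no stored OpenAI login: fall back to a stored
  // openai-codex OAuth credential if one exists.
  if (requested === "openai" && !hasAuth("openai") && hasAuth("openai-codex")) {
    return "openai-codex";
  }
  return requested;
}
```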

Memory tools

The Pi backend loads two extensions from `.pi/extensions/`: `nuggets.ts` and `proactive.ts`. These register the following tools:

`nuggets.ts` tools

| Tool | Description |
| --- | --- |
| `nuggets` | Persistent memory tool. Actions: `remember`, `recall`, `forget`, `list`. Stores and retrieves FHRR-backed facts across sessions. |
| `createNote` | Creates a rich note in the Nuggets graph with title, content, tags, scope, and stability. |
| `addLink` | Creates a bidirectional link between two existing notes with a descriptive reason. |
| `editNote` | Rewrites the content, title, or tags of an existing graph note by ID. |
| `searchNotes` | Searches the Zettelkasten memory graph using text overlap and FHRR similarity. |
The `nuggets` extension also injects existing facts and top-ranked notes into the system prompt before each agent turn, so the agent starts with relevant context automatically.
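
The injected preamble might be assembled along these lines. The field names and the exact text format are hypothetical; only the behavior (facts plus top-ranked notes prepended each turn) comes from this page:

```typescript
// Hypothetical shape of the context the nuggets extension prepends to the
// system prompt each turn.
interface Note { title: string; score: number }

function buildContextPreamble(facts: string[], notes: Note[], topN = 3): string {
  // Keep only the highest-ranked notes to bound prompt size.
  const topNotes = [...notes].sort((a, b) => b.score - a.score).slice(0, topN);
  const lines = [
    "Known facts:",
    ...facts.map((f) => `- ${f}`),
    "Relevant notes:",
    ...topNotes.map((n) => `- ${n.title}`),
  ];
  return lines.join("\n");
}
```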

`proactive.ts` tools

| Tool | Description |
| --- | --- |
| `schedule` | Creates, deletes, or lists cron-based schedules. Use `one_shot=true` for one-time reminders. Cron format: `minute hour day-of-month month day-of-week`. |
| `reflectAndCleanMemory` | Runs a safe maintenance pass over the note graph: rewrites stale notes, merges near-duplicates, improves tags and links, and archives low-value entries. Inspects at most `limit` notes (default 10). |
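
As a sanity check on the five-field cron format the `schedule` tool expects, here is a minimal validator. It handles only `*` and plain numbers; step (`*/5`) and list (`1,2`) syntax are omitted for brevity, and this is not the gateway's actual parser:

```typescript
// Minimal validity check for: minute hour day-of-month month day-of-week
function isValidCron(expr: string): boolean {
  const fields = expr.trim().split(/\s+/);
  if (fields.length !== 5) return false;
  const ranges: [number, number][] = [
    [0, 59], // minute
    [0, 23], // hour
    [1, 31], // day-of-month
    [1, 12], // month
    [0, 6],  // day-of-week
  ];
  return fields.every((f, i) => {
    if (f === "*") return true;
    const n = Number(f);
    return Number.isInteger(n) && n >= ranges[i][0] && n <= ranges[i][1];
  });
}
```

For example, `0 9 * * 1` fires at 09:00 every Monday, which is the kind of expression you would pass for a weekly reminder.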

Session pool

The gateway maintains one Pi subprocess per active conversation. You can tune the pool with two environment variables:
| Variable | Default | Description |
| --- | --- | --- |
| `MAX_PI_PROCESSES` | `5` | Maximum number of simultaneous Pi subprocesses. When the pool is full, the least-recently-active session is evicted. |
| `PI_IDLE_TIMEOUT_MS` | `300000` (5 min) | Milliseconds after the last message before an idle Pi process is stopped. The session file is preserved, so the next message resumes where you left off. |

```bash
# .env
MAX_PI_PROCESSES=5
PI_IDLE_TIMEOUT_MS=300000
```
Pi resumes an existing session automatically when the provider and model match the previous session. If you change `AGENT_PROVIDER` or `AGENT_MODEL`, Pi starts a fresh session rather than continuing an incompatible one.
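
The pool policy described above (least-recently-active eviction plus idle shutdown) can be sketched as follows. The class and method names are illustrative, not the gateway's real implementation:

```typescript
// Sketch of the session pool: LRU eviction and idle-timeout detection.
class SessionPool {
  private lastActive = new Map<string, number>(); // conversationId -> timestamp (ms)

  constructor(
    private maxProcesses = 5,        // MAX_PI_PROCESSES
    private idleTimeoutMs = 300_000, // PI_IDLE_TIMEOUT_MS
  ) {}

  // Record activity; returns the conversation evicted to make room, if any.
  touch(conversationId: string, now: number): string | undefined {
    let evicted: string | undefined;
    if (!this.lastActive.has(conversationId) && this.lastActive.size >= this.maxProcesses) {
      // Evict the least-recently-active session; its session file survives,
      // so a later message can resume it.
      evicted = [...this.lastActive.entries()].sort((a, b) => a[1] - b[1])[0][0];
      this.lastActive.delete(evicted);
    }
    this.lastActive.set(conversationId, now);
    return evicted;
  }

  // Sessions idle longer than the timeout should have their process stopped.
  idleSessions(now: number): string[] {
    return [...this.lastActive.entries()]
      .filter(([, t]) => now - t > this.idleTimeoutMs)
      .map(([id]) => id);
  }
}
```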

Skills

How skills are loaded

When the gateway starts, it discovers skills in this priority order:

1. **`AGENT_SKILL_PATHS`**: any paths listed in `AGENT_SKILL_PATHS` are loaded first. Separate multiple entries with commas or newlines.

   ```bash
   AGENT_SKILL_PATHS=/abs/path/to/skill-dir,/another/skill.md
   ```

2. **Registry skills (`skills/`)**: Nuggets looks for a `skills/` directory in the project root. Each subdirectory can contain a `skill.json` metadata file and a `SKILL.md` instructions file.

3. **Pi-style fallback skills (`.pi/skills/`)**: for compatibility with plain Pi skill directories, Nuggets also loads any skills found in `.pi/skills/`. These are treated as simple markdown skills with no `skill.json` metadata.
Name collisions are resolved in the order above: the first skill with a given name wins. Duplicate real paths are silently skipped.
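
The collision rules can be sketched as a merge over sources in priority order. The `Skill` shape and function name are illustrative:

```typescript
// Merge skill sources in priority order: first name wins, duplicate
// resolved paths are skipped.
interface Skill { name: string; path: string }

function mergeSkillSources(sources: Skill[][]): Skill[] {
  const byName = new Map<string, Skill>();
  const seenPaths = new Set<string>();
  for (const source of sources) {              // sources listed highest-priority first
    for (const skill of source) {
      if (seenPaths.has(skill.path)) continue; // duplicate real path: silently skip
      seenPaths.add(skill.path);
      if (!byName.has(skill.name)) byName.set(skill.name, skill); // first name wins
    }
  }
  return [...byName.values()];
}
```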

How Pi receives skills

Each enabled skill is passed to the Pi subprocess as a `--skill <path>` argument. Pi reads the `SKILL.md` file directly and includes the instructions in its context. Skills with `adapters.pi.enabled` set to `false` in their `skill.json` are excluded.
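
Argument construction might look like this sketch, where the `SkillEntry` shape and function name are assumptions for illustration:

```typescript
// Build the --skill arguments for the Pi subprocess, excluding skills
// whose skill.json disables the pi adapter.
interface SkillEntry {
  path: string; // path to the SKILL.md file
  adapters?: { pi?: { enabled?: boolean } };
}

function buildSkillArgs(skills: SkillEntry[]): string[] {
  return skills
    // Only an explicit adapters.pi.enabled === false excludes a skill;
    // a missing adapters block means the skill is passed through.
    .filter((s) => s.adapters?.pi?.enabled !== false)
    .flatMap((s) => ["--skill", s.path]);
}
```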

Skill management in chat

You can manage sticky skills directly in your conversation:
| Command | Effect |
| --- | --- |
| `/skills` or `/skill list` | Lists all available skills with their scope and active status. |
| `/skill use <name>` | Activates a skill for this conversation (sticky scope). |
| `/skill remove <name>` | Deactivates a skill for this conversation. |
| `/skill clear` | Removes all sticky skills from this conversation. |

Skills with `scope: sticky` in their `skill.json` automatically activate when the message text matches a configured trigger word. One-shot skills activate for a single message and then deactivate.
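
Trigger matching might be implemented along these lines. Whole-word, case-insensitive matching is an assumption here, not necessarily the gateway's exact rule:

```typescript
// Check whether a message matches any configured trigger phrase
// (case-insensitive, whole-word).
function matchesTrigger(message: string, triggers: string[]): boolean {
  const text = message.toLowerCase();
  return triggers.some((t) => {
    // Escape regex metacharacters so triggers are matched literally.
    const escaped = t.toLowerCase().replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    return new RegExp(`\\b${escaped}\\b`).test(text);
  });
}
```

The word-boundary anchors keep a trigger like `review` from firing on `overview`.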

SKILL.md format

A Pi-compatible skill file uses YAML frontmatter for metadata and a markdown body for instructions:
```markdown
---
name: reviewer
description: Code review assistant that checks for correctness, style, and edge cases.
---

You are a careful code reviewer. When reviewing code:
- Check for logic errors and edge cases
- Flag inconsistent style
- Suggest specific improvements rather than vague feedback
```
Registry skills also include a `skill.json` alongside the `SKILL.md`:

```json
{
  "name": "reviewer",
  "description": "Code review assistant that checks for correctness, style, and edge cases.",
  "scope": "sticky",
  "triggers": ["review", "check this"],
  "adapters": {
    "pi": { "enabled": true },
    "codex": { "enabled": true },
    "local": { "enabled": false }
  }
}
```
Keep `SKILL.md` frontmatter in sync with `skill.json`. Pi reads the frontmatter directly when loading skills, and a mismatch generates a warning at startup.
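
The startup consistency check could be sketched as follows. The parsing here handles only the simple `key: value` frontmatter shown on this page, and the function names are illustrative:

```typescript
// Parse simple key: value YAML frontmatter from a SKILL.md string.
function parseFrontmatter(skillMd: string): Record<string, string> {
  const match = skillMd.match(/^---\n([\s\S]*?)\n---/);
  const meta: Record<string, string> = {};
  if (!match) return meta;
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return meta;
}

// Return a warning for each field where SKILL.md and skill.json disagree.
function checkSync(skillMd: string, skillJson: { name: string; description: string }): string[] {
  const fm = parseFrontmatter(skillMd);
  const warnings: string[] = [];
  for (const field of ["name", "description"] as const) {
    if (fm[field] !== undefined && fm[field] !== skillJson[field]) {
      warnings.push(`frontmatter ${field} differs from skill.json`);
    }
  }
  return warnings;
}
```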
