All configuration lives in a `.env` file at the project root. Run `npm run setup` to generate this file with an interactive wizard, or create it manually using `.env.example` as a template.
You must configure at least one messaging channel (Telegram, WhatsApp, or Discord). The gateway will refuse to start if no channel is configured.

Messaging channels

WhatsApp support uses the Baileys library. You do not need a business account.
| Variable | Description |
| --- | --- |
| `GATEWAY_ALLOWLIST` | Your WhatsApp JID, in the format `<number>@s.whatsapp.net`. Only messages from this JID are processed. |

```
GATEWAY_ALLOWLIST=1234567890@s.whatsapp.net
```
You can find your JID by sending a message to the bot and checking the gateway logs. It appears in the format <number>@s.whatsapp.net.
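The allowlist behavior described above — drop anything not sent from the configured JID — can be sketched as follows. `isAllowedJid` is an illustrative name, not part of the Nuggets codebase:

```javascript
// Illustrative sketch of the allowlist filter: only messages whose sender
// JID exactly matches GATEWAY_ALLOWLIST are passed on to the agent.
function isAllowedJid(senderJid, allowlist) {
  // allowlist is the raw GATEWAY_ALLOWLIST value, e.g. "1234567890@s.whatsapp.net".
  // An empty or missing allowlist matches nothing, so no messages get through.
  return typeof allowlist === 'string' && allowlist.length > 0 && senderJid === allowlist;
}
```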
Create a Telegram bot using @BotFather and copy the token it provides.
| Variable | Description |
| --- | --- |
| `TELEGRAM_BOT_TOKEN` | The bot token from @BotFather, in the format `123456:ABC-DEF...`. |
| `TELEGRAM_ALLOWLIST` | Your Telegram chat ID (numeric). Only messages from this ID are processed. |

```
TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
TELEGRAM_ALLOWLIST=123456789
```
Send /start to @userinfobot on Telegram to find your chat ID.
Discord support is zero-dependency and uses Node.js 22+ built-in WebSocket and fetch APIs.
Discord requires Node.js 22 or later. Earlier versions do not have the built-in WebSocket support that Nuggets uses.
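A runtime guard for this requirement might look like the sketch below — it only checks that the `WebSocket` and `fetch` globals exist, which is what distinguishes Node.js 22+ here. The function name is illustrative:

```javascript
// Illustrative startup check: the Discord channel depends on the WebSocket
// and fetch globals that ship enabled by default in Node.js 22+.
function hasDiscordRuntimeSupport(globals = globalThis) {
  return typeof globals.WebSocket === 'function' && typeof globals.fetch === 'function';
}
// A gateway could call this at startup and exit with a clear error message
// on older Node versions instead of failing later inside the Discord channel.
```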
Create a Discord bot in the Discord Developer Portal and copy the bot token.
| Variable | Description | Default |
| --- | --- | --- |
| `DISCORD_BOT_TOKEN` | Your Discord bot token. | |
| `DISCORD_ALLOWED_USER_IDS` | Comma-separated list of Discord user IDs allowed to talk to the bot. | |
| `DISCORD_REQUIRE_MENTION` | When `true`, the bot only responds in servers when directly @mentioned. | `true` |

```
DISCORD_BOT_TOKEN=your-discord-bot-token
DISCORD_ALLOWED_USER_IDS=123456789012345678,987654321098765432
DISCORD_REQUIRE_MENTION=true
```
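Taken together, these two settings gate which Discord messages reach the agent: the sender must be allowlisted, and in servers the bot must be @mentioned unless `DISCORD_REQUIRE_MENTION=false`. A minimal sketch of that logic, assuming a simplified message shape (`authorId`, `inGuild`, `mentionsBot` are illustrative field names, not Nuggets API):

```javascript
// Illustrative gating logic for incoming Discord messages, mirroring the
// DISCORD_ALLOWED_USER_IDS and DISCORD_REQUIRE_MENTION settings above.
function shouldHandleDiscordMessage(msg, env) {
  const allowed = (env.DISCORD_ALLOWED_USER_IDS || '')
    .split(',')
    .map((id) => id.trim())
    .filter(Boolean);
  // Sender must be on the allowlist.
  if (!allowed.includes(msg.authorId)) return false;
  // In servers (guilds), require a direct @mention unless explicitly disabled.
  const requireMention = (env.DISCORD_REQUIRE_MENTION ?? 'true') !== 'false';
  if (msg.inGuild && requireMention && !msg.mentionsBot) return false;
  return true;
}
```

Direct messages bypass the mention check, since mentions only matter in shared server channels.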

Agent backend

Set `AGENT_BACKEND` to one of `pi`, `codex`, or `local`.

```
AGENT_BACKEND=pi
```
The Pi backend uses the Pi extensions in .pi/extensions/. It exposes fact tools, graph tools, scheduling, and memory reflection to the agent.
| Variable | Description |
| --- | --- |
| `AGENT_BACKEND` | Set to `pi`. |
| `AGENT_PROVIDER` | Pi provider name: `anthropic`, `openai`, `openai-codex`, or another provider Pi supports. |
| `AGENT_MODEL` | Model ID to pass to Pi. Leave empty to use Pi's default for the provider. |
| `PI_PROVIDER` | Legacy alias for `AGENT_PROVIDER`. Safe to keep in sync. |
| `PI_MODEL` | Legacy alias for `AGENT_MODEL`. Safe to keep in sync. |
| `ANTHROPIC_API_KEY` | Required when `AGENT_PROVIDER=anthropic`. Must start with `sk-ant-`. |
| `OPENAI_API_KEY` | Required when `AGENT_PROVIDER=openai`. |

```
AGENT_BACKEND=pi
AGENT_PROVIDER=anthropic
AGENT_MODEL=
PI_PROVIDER=anthropic
PI_MODEL=
ANTHROPIC_API_KEY=sk-ant-...
```
Anthropic Max plan does not work with Pi because third-party OAuth was blocked in January 2026. You need an API key from console.anthropic.com.
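Since the table above states the key must start with `sk-ant-`, a startup script could validate it before launching the gateway. A tiny illustrative check (the function name is not part of Nuggets):

```javascript
// Illustrative sanity check for the ANTHROPIC_API_KEY format rule above:
// console.anthropic.com API keys begin with "sk-ant-".
function looksLikeAnthropicKey(key) {
  return typeof key === 'string' && key.startsWith('sk-ant-');
}
```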
Skills are loaded from the `skills/` registry, compatible fallback skills from `.pi/skills/`, and any extra paths listed in `AGENT_SKILL_PATHS`.
The Codex backend uses `codex exec` and `codex exec resume`. It prompts Codex to use Nuggets recall-first behavior and exposes the skills catalog so Codex can read `SKILL.md` files on demand.
| Variable | Description | Default |
| --- | --- | --- |
| `AGENT_BACKEND` | Set to `codex`. | |
| `AGENT_MODEL` | Codex model ID, or a local Ollama model ID when using OSS mode. | |
| `CODEX_USE_OSS` | Set to `true` to run Codex against a local Ollama server. | `false` |
| `CODEX_LOCAL_PROVIDER` | Local provider name when `CODEX_USE_OSS=true` (e.g. `ollama`). | |
| `CODEX_FULL_AUTO` | Pass `--full-auto` to Codex for unattended operation. | `true` |

```
AGENT_BACKEND=codex
AGENT_MODEL=
CODEX_USE_OSS=false
CODEX_LOCAL_PROVIDER=
CODEX_FULL_AUTO=true
```
The local backend talks directly to any OpenAI-compatible server, such as Ollama or MLX. It is conversational only — no local tool use. Active skill contents are inlined into the system prompt.
| Variable | Description | Default |
| --- | --- | --- |
| `AGENT_BACKEND` | Set to `local`. | |
| `LOCAL_MODEL_PROVIDER` | Provider name: `ollama` or `mlx`. | |
| `LOCAL_MODEL_BASE_URL` | Base URL for the OpenAI-compatible API. | `http://127.0.0.1:11434/v1` (Ollama) or `http://127.0.0.1:8080/v1` (MLX) |
| `LOCAL_MODEL_API_KEY` | API key if your local server requires one. Leave empty otherwise. | |
| `AGENT_MODEL` | Model ID served by the local backend. | |

```
AGENT_BACKEND=local
LOCAL_MODEL_PROVIDER=ollama
LOCAL_MODEL_BASE_URL=http://127.0.0.1:11434/v1
LOCAL_MODEL_API_KEY=
AGENT_MODEL=llama3
```
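With those settings, the local backend would issue standard OpenAI-style chat requests. The sketch below shows the request shape only — `buildChatRequest` is an illustrative helper, not the backend's actual internals:

```javascript
// Illustrative sketch of the request the local backend sends to an
// OpenAI-compatible server (Ollama, MLX) using the settings above.
function buildChatRequest(env, messages) {
  const baseUrl = env.LOCAL_MODEL_BASE_URL || 'http://127.0.0.1:11434/v1';
  const headers = { 'Content-Type': 'application/json' };
  // Bearer auth only when the local server actually requires a key.
  if (env.LOCAL_MODEL_API_KEY) headers.Authorization = `Bearer ${env.LOCAL_MODEL_API_KEY}`;
  return {
    url: `${baseUrl}/chat/completions`,
    method: 'POST',
    headers,
    body: JSON.stringify({ model: env.AGENT_MODEL, messages }),
  };
}
```

The resulting object could be passed directly to `fetch(req.url, req)`.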

Session pool

The session pool controls how many backend processes run concurrently and how long idle sessions are kept alive.
| Variable | Description | Default |
| --- | --- | --- |
| `PI_IDLE_TIMEOUT_MS` | How long (in milliseconds) an idle session is kept before being shut down. | `300000` (5 minutes) |
| `MAX_PI_PROCESSES` | Maximum number of concurrent backend sessions. | `5` |

```
PI_IDLE_TIMEOUT_MS=300000
MAX_PI_PROCESSES=5
```
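The pool policy these two settings describe — cap concurrency, reap idle sessions — can be sketched as below. The function names and the `{ id, lastUsedAt }` session shape are illustrative, not the actual pool implementation:

```javascript
// Illustrative session-pool policy: keep a session only while it has been
// used within the last PI_IDLE_TIMEOUT_MS milliseconds.
function evictIdleSessions(sessions, now, idleTimeoutMs) {
  // sessions: array of { id, lastUsedAt } entries; returns the survivors.
  return sessions.filter((s) => now - s.lastUsedAt < idleTimeoutMs);
}

// A new backend process may spawn only while under MAX_PI_PROCESSES.
function canSpawnSession(sessions, maxProcesses) {
  return sessions.length < maxProcesses;
}
```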

Proactive system

The proactive system sends heartbeat follow-ups, evaluates user-created cron reminders, and triggers the silent daily memory reflection pass.
| Variable | Description | Default |
| --- | --- | --- |
| `HEARTBEAT_INTERVAL_MS` | How often (in milliseconds) the heartbeat fires to check if the assistant should message you. | `1800000` (30 minutes) |
| `QUIET_HOURS_START` | Hour of day (0–23) when quiet hours begin. The assistant will not send proactive messages after this hour. | `22` |
| `QUIET_HOURS_END` | Hour of day (0–23) when quiet hours end. | `8` |
| `CRON_EVAL_INTERVAL_MS` | How often (in milliseconds) the cron scheduler checks for due reminders. | `60000` (1 minute) |

```
HEARTBEAT_INTERVAL_MS=1800000
QUIET_HOURS_START=22
QUIET_HOURS_END=8
CRON_EVAL_INTERVAL_MS=60000
```
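Note that with the defaults, the quiet window wraps past midnight (22:00 through 08:00), so the check cannot be a simple `start <= hour < end` comparison. A sketch of the wrap-around logic, with an illustrative function name:

```javascript
// Illustrative quiet-hours check for QUIET_HOURS_START / QUIET_HOURS_END.
// Handles both same-day windows (e.g. 13–17) and windows that wrap past
// midnight (e.g. the default 22–8).
function isQuietHour(hour, start = 22, end = 8) {
  if (start === end) return false;                       // degenerate: no quiet window
  if (start < end) return hour >= start && hour < end;   // same-day window
  return hour >= start || hour < end;                    // wraps past midnight
}
```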
The daily memory reflection pass runs around 9:00 AM, which falls inside the default waking hours window. If the scheduled time is missed, the heartbeat triggers it as a fallback during waking hours.

Extra skill paths

To load skills from directories outside the built-in `skills/` registry, set `AGENT_SKILL_PATHS` to a comma-separated list of absolute paths:

```
AGENT_SKILL_PATHS=/abs/path/to/my-skills,/another/skill-dir
```
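Parsing a comma-separated path list like this typically also tolerates stray whitespace and trailing commas. A minimal sketch (`parseSkillPaths` is an illustrative name, not Nuggets API):

```javascript
// Illustrative parser for AGENT_SKILL_PATHS: split on commas, trim
// whitespace around each path, and drop empty entries.
function parseSkillPaths(raw) {
  return (raw || '')
    .split(',')
    .map((p) => p.trim())
    .filter(Boolean);
}
```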
