
Minimum viable config

Most users only need these settings. Create `~/.watercooler/config.toml` with:

```toml
# ~/.watercooler/config.toml
version = 1                       # schema version; do not modify

[mcp]
default_agent = "Claude Code"     # your MCP client name (usually auto-detected)
agent_tag = "(yourname)"          # optional: appended to agent name in thread entries
```

Generate an annotated version with:

```
watercooler config init --user
```

Config vs credentials

| File | What it stores | Safe to commit? |
| --- | --- | --- |
| `~/.watercooler/config.toml` | Behavior and preferences | Yes |
| `~/.watercooler/credentials.toml` | Secrets (tokens, API keys) | Never |

Both files are TOML. The config file is also supported at project level: `.watercooler/config.toml` (inside your repo, for per-project overrides).
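Because precedence is resolved per key (see Precedence rules below), a project-level file only needs the keys it overrides; everything else falls through to the user config. A minimal sketch, with an illustrative tag value:

```toml
# <project>/.watercooler/config.toml
[mcp]
agent_tag = "(alice)"    # override the user-level tag for this repo only
```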

Config commands

Initialize config from template

```
watercooler config init --user      # creates ~/.watercooler/config.toml
```

Pass `--force` to overwrite an existing file.

Show resolved config

View the merged result of user config, project config, and environment variables:

```
watercooler config show
watercooler config show --json                    # machine-readable output
watercooler config show --sources                 # show which file each key came from
watercooler config show --project-path /path/to/repo   # check config for another project
```

Validate config

Check for errors or warnings:

```
watercooler config validate
watercooler config validate --strict    # treat warnings as errors
```

Key settings by category

[common] — thread location

| Key | Default | Description |
| --- | --- | --- |
| `templates_dir` | (bundled) | Custom templates directory |
| `threads_suffix` | `"-threads"` | Legacy. Suffix for a separate threads repo. Silently ignored in the default orphan-branch setup. |
| `threads_pattern` | (derived) | Legacy. Full URL pattern for a separate threads repo. Silently ignored unless `threads_suffix` is also set. |
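For most installs these keys can stay unset. A sketch of a `[common]` section pointing at a custom templates directory (the path is illustrative):

```toml
[common]
templates_dir = "~/.watercooler/templates"   # custom templates; omit to use the bundled set
```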

[mcp] — server and identity

| Key | Default | Description |
| --- | --- | --- |
| `default_agent` | `"Agent"` | Agent name shown in thread entries |
| `agent_tag` | `""` | Short tag appended to agent name, e.g. `"(alice)"` |
| `threads_dir` | (auto) | Explicit threads directory; leave empty for auto-discovery |
| `transport` | `"stdio"` | Transport mode: `stdio` (local) or `http` |
| `auto_branch` | `true` | Auto-create threads branches for new code branches |
| `auto_provision` | `true` | Auto-create threads repos if missing |
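Putting these keys together, a sketch of a filled-in `[mcp]` section (values are examples):

```toml
[mcp]
default_agent = "Claude Code"   # shown in thread entries
agent_tag = "(alice)"           # appended to the agent name
transport = "stdio"             # default; switch to "http" for a networked server
auto_branch = true
auto_provision = true
```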

[mcp.git] — commit identity

Controls the git author for thread commits:

| Key | Default | Description |
| --- | --- | --- |
| `author` | `""` (uses agent name) | Git commit author name |
| `email` | `"[email protected]"` | Git commit email |
| `ssh_key` | `""` | Path to SSH private key (empty = use default ssh-agent) |

```toml
[mcp.git]
author = "Claude Code"
email = "[email protected]"
# ssh_key = "~/.ssh/id_ed25519"   # optional; omit to use ssh-agent default
```

[mcp.sync] — git sync behavior

| Key | Default | Description |
| --- | --- | --- |
| `async` | `true` | Enable async git operations |
| `batch_window` | `5.0` | Seconds to batch commits before push |
| `max_delay` | `30.0` | Maximum delay before forcing push |
| `max_batch_size` | `50` | Maximum entries per batch commit |
| `max_retries` | `5` | Maximum retry attempts for failed operations |
| `max_backoff` | `300.0` | Maximum backoff delay in seconds |
| `interval` | `30.0` | Background sync interval in seconds |
| `stale_threshold` | `60.0` | Seconds before considering sync stale |
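The defaults favor batching. A sketch of a more eager configuration that pushes sooner (values are illustrative; keys as in the table above):

```toml
[mcp.sync]
batch_window = 2.0    # start a push after 2 s of quiet instead of 5 s
max_delay = 10.0      # never hold a commit longer than 10 s
```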

[mcp.logging] — logging configuration

| Key | Default | Description |
| --- | --- | --- |
| `level` | `"INFO"` | Log level: DEBUG, INFO, WARNING, ERROR |
| `dir` | `~/.watercooler/logs/` | Log directory |
| `max_bytes` | `10485760` (10 MB) | Maximum log file size in bytes |
| `backup_count` | `5` | Number of backup log files to keep |
| `disable_file` | `false` | Disable file logging (stderr only) |
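When troubleshooting a misbehaving server, a sketch that turns up verbosity (keys as in the table above):

```toml
[mcp.logging]
level = "DEBUG"       # verbose logging while troubleshooting
backup_count = 10     # keep more rotated files than the default 5
```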

[memory] — enhanced search features

Enable persistent memory and semantic search across sessions (optional):

```toml
[memory]
backend = "graphiti"   # or "leanrag" for local-only setup
enabled = true
```

See Memory backend below for full setup instructions.

Memory backend

Watercooler’s baseline features work with zero additional configuration. The memory backend is an optional upgrade that adds persistent memory and semantic search across sessions.

Enable memory

```toml
[memory]
backend = "graphiti"     # cloud LLM provider (OpenAI, Anthropic, etc.)
# backend = "leanrag"    # alternative: local-only, no external API required
```

Configure credentials

Credentials for LLM and embedding providers go in `~/.watercooler/credentials.toml`, using a provider-named section:

```toml
[openai]
api_key = "sk-..."

# or for Anthropic:
[anthropic]
api_key = "sk-ant-..."
```

Configure services

The model and endpoint are set in `config.toml` under `[memory.llm]` and `[memory.embedding]` (see `watercooler config init --user` for an annotated template). Supported providers: `openai`, `anthropic`, `groq`, `voyage`, `google`.
For a local (no-API) setup, point both `api_base` fields at a local llama-server or ollama endpoint.
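Going by the TOML keys listed in the environment variable reference (`api_base`, `model`, `dim`), a local-only setup might look like the sketch below; the URLs and model names are placeholders for whatever your local server actually serves:

```toml
[memory.llm]
api_base = "http://127.0.0.1:8080/v1"   # local llama-server endpoint (placeholder)
model = "local-llm"                     # placeholder model name

[memory.embedding]
api_base = "http://127.0.0.1:8081/v1"   # local embedding endpoint (placeholder)
model = "local-embedding"               # placeholder model name
dim = 768                               # must match the embedding model's output dimension
```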

Environment variable reference

Environment variables override all config file settings. Set them in your shell, or in the `env` block of the MCP server definition in your client config.

Thread and agent settings

| Env var | TOML equivalent | Default | Description |
| --- | --- | --- | --- |
| `WATERCOOLER_AGENT` | `mcp.default_agent` | `"Agent"` | Agent name in thread entries |
| `WATERCOOLER_AGENT_TAG` | `mcp.agent_tag` | `""` | Tag appended to agent name |
| `WATERCOOLER_DIR` | `mcp.threads_dir` | (auto) | Explicit threads directory path |
| `WATERCOOLER_THREADS_BASE` | `mcp.threads_base` | (auto) | Base directory for threads repos |
| `WATERCOOLER_THREADS_PATTERN` | `common.threads_pattern` | (derived) | Full URL pattern for threads repo |
| `WATERCOOLER_AUTO_BRANCH` | `mcp.auto_branch` | `true` | Auto-create threads branches |
| `WATERCOOLER_AUTO_PROVISION` | `mcp.auto_provision` | `true` | Auto-create threads repos |
| `WATERCOOLER_CODE_REPO` | (none) | (auto) | Override code repo detection |

Git commit identity

| Env var | TOML equivalent | Default | Description |
| --- | --- | --- | --- |
| `WATERCOOLER_GIT_AUTHOR` | `mcp.git.author` | `""` (uses agent name) | Git commit author name |
| `WATERCOOLER_GIT_EMAIL` | `mcp.git.email` | `"[email protected]"` | Git commit email |
| `WATERCOOLER_GIT_SSH_KEY` | `mcp.git.ssh_key` | `""` | Path to SSH private key |

Authentication

| Env var | TOML equivalent | Default | Description |
| --- | --- | --- | --- |
| `GITHUB_TOKEN` | (none) | | GitHub token for git operations (or `GH_TOKEN`) |
| `GH_TOKEN` | (none) | | Alternative to `GITHUB_TOKEN`; same precedence |
| `WATERCOOLER_AUTH_MODE` | (none) | `"local"` | Auth mode for hosted deployments |
| `WATERCOOLER_TOKEN_API_URL` | (none) | | Token API URL (hosted mode only) |
| `WATERCOOLER_TOKEN_API_KEY` | (none) | | Token API key (hosted mode only) |

Memory settings

| Env var | TOML equivalent | Default | Description |
| --- | --- | --- | --- |
| `WATERCOOLER_MEMORY_BACKEND` | `memory.backend` | (disabled) | Memory backend: `graphiti` or `leanrag` |
| `WATERCOOLER_MEMORY_QUEUE` | `memory.queue_enabled` | `false` | Enable async memory indexing |
| `WATERCOOLER_MEMORY_DISABLED` | (none) | | Set to `1` to disable memory even if configured |
| `LLM_API_KEY` | `memory.llm.api_key` | | LLM provider API key |
| `LLM_API_BASE` | `memory.llm.api_base` | | LLM endpoint URL |
| `LLM_MODEL` | `memory.llm.model` | | LLM model name |
| `EMBEDDING_API_KEY` | `memory.embedding.api_key` | | Embedding provider API key |
| `EMBEDDING_API_BASE` | `memory.embedding.api_base` | | Embedding endpoint URL |
| `EMBEDDING_MODEL` | `memory.embedding.model` | | Embedding model name |
| `EMBEDDING_DIM` | `memory.embedding.dim` | | Embedding dimension |

MCP server

| Env var | TOML equivalent | Default | Description |
| --- | --- | --- | --- |
| `WATERCOOLER_MCP_TRANSPORT` | `mcp.transport` | `"stdio"` | Transport: `stdio` or `http` |
| `WATERCOOLER_MCP_HOST` | `mcp.host` | `"127.0.0.1"` | HTTP mode: bind address |
| `WATERCOOLER_MCP_PORT` | `mcp.port` | `3000` | HTTP mode: port |
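Combining the TOML equivalents above, an HTTP-mode server could be sketched as follows (host and port are the documented defaults):

```toml
[mcp]
transport = "http"
host = "127.0.0.1"   # bind address
port = 3000
```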

Logging

| Env var | Default | Description |
| --- | --- | --- |
| `WATERCOOLER_LOG_LEVEL` | `"INFO"` | Log level: DEBUG, INFO, WARNING, ERROR |
| `WATERCOOLER_LOG_DIR` | `~/.watercooler/logs/` | Log file directory |
| `WATERCOOLER_LOG_DISABLE_FILE` | `false` | Set to `1` to disable file logging |

Precedence rules

Later sources override earlier ones, on a per-key basis:
  1. Built-in defaults
  2. User config: ~/.watercooler/config.toml
  3. Project config: <project>/.watercooler/config.toml
  4. Environment variables
To see the resolved value and source of each key, run:

```
watercooler config show --sources
```

Tier label glossary

| Label | What it adds |
| --- | --- |
| T1 — Baseline | Thread graph, zero config, included with all installs. `say`, `ack`, `handoff`, `list`, `search` all work at T1. |
| T2 — Semantic memory | Persistent memory and semantic search across sessions. Requires memory backend configuration. |
| T3 — Hierarchical memory | Summarized context and full semantic graph with community detection. Requires T2 setup plus additional resources. |
