Config API
The Config API provides a Pydantic-based configuration schema with validation, defaults, and type safety.

Overview

Watercooler uses a hierarchical configuration structure with:
  • Type validation via Pydantic
  • Environment variable overrides
  • Sensible defaults
  • Field-level documentation

Root Configuration

WatercoolerConfig

Root configuration model containing all settings.
class WatercoolerConfig(BaseModel):
    version: int = 1
    common: CommonConfig
    mcp: McpConfig
    dashboard: DashboardConfig
    validation: ValidationConfig
    memory: MemoryConfig
    federation: FederationConfig
Usage:
from watercooler.config_schema import WatercoolerConfig

# Create config with defaults
config = WatercoolerConfig.default()

# Access nested config
print(config.mcp.default_agent)  # "Agent"
print(config.memory.backend)  # "graphiti"

Loading Configuration

from watercooler_mcp.config import get_watercooler_config

# Load from config.toml and environment
config = get_watercooler_config()

Common Configuration

CommonConfig

Shared settings for both MCP and Dashboard.
class CommonConfig(BaseModel):
    threads_pattern: str = "https://github.com/{org}/{repo}-threads.git"
    threads_suffix: str = "-threads"
    templates_dir: str = ""
Fields:
  • threads_pattern - URL pattern for threads repos, with {org} and {repo} placeholders
  • threads_suffix - Suffix appended to code repo name
  • templates_dir - Path to templates (empty = use bundled)
Example:
config.common.threads_pattern = "[email protected]:{org}/{repo}-threads.git"
config.common.threads_suffix = "-watercooler"
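The {org} and {repo} placeholders expand with the code repo's owner and name. A minimal sketch of that expansion, using the default pattern; the resolve_threads_url helper is hypothetical, not part of the Watercooler API:

```python
def resolve_threads_url(pattern: str, org: str, repo: str) -> str:
    # str.format substitutes the {org} and {repo} placeholders
    return pattern.format(org=org, repo=repo)

url = resolve_threads_url(
    "https://github.com/{org}/{repo}-threads.git", "acme", "widgets"
)
print(url)  # https://github.com/acme/widgets-threads.git
```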

MCP Configuration

McpConfig

MCP server configuration.
class McpConfig(BaseModel):
    # Transport
    transport: Literal["stdio", "http"] = "stdio"
    host: str = "127.0.0.1"
    port: int = 3000
    
    # Agent identity
    default_agent: str = "Agent"
    agent_tag: str = ""
    
    # Behavior
    auto_branch: bool = True
    auto_provision: bool = True
    
    # Paths
    threads_dir: str = ""
    threads_base: str = ""
    
    # Nested configs
    git: GitConfig
    sync: SyncConfig
    logging: LoggingConfig
    graph: GraphConfig
    slack: SlackConfig
    service_provision: ServiceProvisionConfig
    http: HttpConfig
    cache: CacheConfig
    hosted: HostedConfig
    daemons: DaemonsConfig
    agents: Dict[str, AgentConfig]
Example:
from watercooler.config_schema import WatercoolerConfig

config = WatercoolerConfig.default()

# Configure MCP server
config.mcp.transport = "http"
config.mcp.port = 8080
config.mcp.default_agent = "Claude Code"
config.mcp.auto_branch = True

GitConfig

Git-related MCP settings.
class GitConfig(BaseModel):
    author: str = ""
    email: str = "[email protected]"
    ssh_key: str = ""
Example:
config.mcp.git.author = "My Agent"
config.mcp.git.email = "[email protected]"
config.mcp.git.ssh_key = "~/.ssh/id_ed25519"

SyncConfig

Git sync behavior settings.
class SyncConfig(BaseModel):
    async_sync: bool = True
    batch_window: float = 5.0
    max_delay: float = 30.0
    max_batch_size: int = 50
    max_retries: int = 5
    max_backoff: float = 300.0
    interval: float = 30.0
    stale_threshold: float = 60.0
Example:
config.mcp.sync.batch_window = 10.0
config.mcp.sync.max_retries = 3
config.mcp.sync.interval = 60.0
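One plausible reading of the retry fields is exponential backoff capped at max_backoff. The sketch below is illustrative only (the base delay of 30s and the doubling schedule are assumptions, not the server's actual retry loop):

```python
def backoff_delays(base: float, max_retries: int, max_backoff: float):
    """Exponential backoff doubling from `base`, capped at `max_backoff`."""
    return [min(base * 2 ** attempt, max_backoff)
            for attempt in range(max_retries)]

# With max_retries=5 and max_backoff=300.0, the final delay hits the cap
print(backoff_delays(30.0, 5, 300.0))  # [30.0, 60.0, 120.0, 240.0, 300.0]
```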

LoggingConfig

Logging configuration.
class LoggingConfig(BaseModel):
    level: Literal["DEBUG", "INFO", "WARNING", "ERROR"] = "INFO"
    dir: str = ""
    max_bytes: int = 10485760  # 10MB
    backup_count: int = 5
    disable_file: bool = False
Example:
config.mcp.logging.level = "DEBUG"
config.mcp.logging.dir = "~/watercooler-logs"
config.mcp.logging.backup_count = 10
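max_bytes and backup_count mirror the parameters of Python's stdlib rotating handler; the sketch below shows that mapping, though whether Watercooler uses RotatingFileHandler internally is an assumption. Worst-case disk usage is roughly max_bytes * (backup_count + 1).

```python
import logging
import logging.handlers
import os
import tempfile

# Values from LoggingConfig defaults
cfg_max_bytes = 10 * 1024 * 1024  # 10MB
cfg_backup_count = 5

log_dir = tempfile.mkdtemp()
handler = logging.handlers.RotatingFileHandler(
    os.path.join(log_dir, "mcp.log"),
    maxBytes=cfg_max_bytes,        # rotate once the file exceeds this size
    backupCount=cfg_backup_count,  # keep this many rotated files
)
logger = logging.getLogger("watercooler-sketch")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("server started")
```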

Graph Configuration

GraphConfig

Baseline graph configuration for summaries and embeddings.
class GraphConfig(BaseModel):
    # Summary generation
    generate_summaries: bool = False
    summarizer_api_base: str = ""
    summarizer_model: str = ""
    
    # Embedding generation
    generate_embeddings: bool = False
    embedding_api_base: str = ""
    embedding_model: str = ""
    
    # Behavior
    prefer_extractive: bool = False
    auto_detect_services: bool = True
    auto_start_services: bool = False
    
    # Arc change detection
    embedding_divergence_threshold: float = 0.6
Example:
config.mcp.graph.generate_summaries = True
config.mcp.graph.summarizer_model = "qwen3:1.7b"
config.mcp.graph.generate_embeddings = True
config.mcp.graph.embedding_model = "bge-m3"
config.mcp.graph.embedding_divergence_threshold = 0.7
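embedding_divergence_threshold presumably compares consecutive embeddings to detect an arc change. This sketch shows the comparison in principle using cosine distance; the exact metric Watercooler applies is an assumption:

```python
import math

def cosine_divergence(a, b):
    """1 - cosine similarity: 0.0 for identical direction, up to 2.0 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def arc_changed(prev, curr, threshold=0.6):
    # An arc change fires when the new embedding diverges past the threshold
    return cosine_divergence(prev, curr) > threshold

print(arc_changed([1.0, 0.0], [1.0, 0.1]))  # False: nearly parallel vectors
print(arc_changed([1.0, 0.0], [0.0, 1.0]))  # True: orthogonal, divergence 1.0
```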

ServiceProvisionConfig

Auto-provisioning for external services.
class ServiceProvisionConfig(BaseModel):
    models: bool = True
    llama_server: bool = True
Example:
config.mcp.service_provision.models = True
config.mcp.service_provision.llama_server = False

Memory Configuration

MemoryConfig

Memory backend configuration.
class MemoryConfig(BaseModel):
    enabled: bool = True
    backend: Literal["graphiti", "leanrag", "null"] = "graphiti"
    queue_enabled: bool = False
    
    # Shared service configs
    llm: LLMServiceConfig
    embedding: EmbeddingServiceConfig
    database: MemoryDatabaseConfig
    
    # Tier orchestration
    tiers: TierOrchestrationConfig
    
    # Backend-specific overrides
    graphiti: GraphitiBackendConfig
    leanrag: LeanRAGBackendConfig
Example:
config.memory.enabled = True
config.memory.backend = "graphiti"
config.memory.queue_enabled = True

LLMServiceConfig

LLM service configuration.
class LLMServiceConfig(BaseModel):
    api_base: str = ""
    model: str = ""
    timeout: float = 60.0
    max_tokens: int = 512
    context_size: int = 8192
    system_prompt: str = ""
    prompt_prefix: str = ""
    summary_prompt: str = "Summarize this thread entry..."
    thread_summary_prompt: str = "Summarize this development thread..."
    summary_example_input: str = "..."
    summary_example_output: str = "..."
Example:
config.memory.llm.api_base = "http://localhost:8080/v1"
config.memory.llm.model = "qwen3:1.7b"
config.memory.llm.max_tokens = 512
config.memory.llm.context_size = 40960

EmbeddingServiceConfig

Embedding service configuration.
class EmbeddingServiceConfig(BaseModel):
    api_base: str = "http://localhost:8080/v1"
    model: str = "bge-m3"
    dim: int = 1024
    context_size: int = 8192
    timeout: float = 60.0
    batch_size: int = 32
Example:
config.memory.embedding.api_base = "http://localhost:8081/v1"
config.memory.embedding.model = "nomic-embed-text"
config.memory.embedding.dim = 768
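batch_size bounds how many texts go into a single embedding request. A stdlib sketch of that chunking; the chunk_texts helper is hypothetical, not part of the API:

```python
def chunk_texts(texts, batch_size=32):
    """Split texts into request-sized batches of at most batch_size items."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

batches = chunk_texts([f"entry-{n}" for n in range(70)], batch_size=32)
print([len(b) for b in batches])  # [32, 32, 6]
```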

MemoryDatabaseConfig

Database (FalkorDB) configuration.
class MemoryDatabaseConfig(BaseModel):
    host: str = "localhost"
    port: int = 6379
    username: str = ""
    password: str = ""
Example:
config.memory.database.host = "localhost"
config.memory.database.port = 6379
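FalkorDB speaks the Redis wire protocol, so these fields typically assemble into a standard redis:// connection URL. The helper below is illustrative, not part of the API:

```python
def falkordb_url(host, port, username="", password=""):
    # redis://[user[:password]@]host:port — the standard Redis URL form
    auth = f"{username}:{password}@" if (username or password) else ""
    return f"redis://{auth}{host}:{port}"

print(falkordb_url("localhost", 6379))  # redis://localhost:6379
```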

Slack Configuration

SlackConfig

Slack integration configuration.
class SlackConfig(BaseModel):
    webhook_url: str = ""
    bot_token: str = ""
    app_token: str = ""
    channel_prefix: str = "wc-"
    auto_create_channels: bool = True
    default_channel: str = ""
    notify_on_say: bool = True
    notify_on_ball_flip: bool = True
    notify_on_status_change: bool = True
    notify_on_handoff: bool = True
    min_notification_interval: float = 1.0
Properties:
  • is_enabled - Check if Slack is enabled
  • is_webhook_only - Check if using webhook-only mode
  • is_bot_enabled - Check if bot API mode is enabled
Example:
config.mcp.slack.webhook_url = "https://hooks.slack.com/..."
config.mcp.slack.channel_prefix = "watercooler-"
config.mcp.slack.notify_on_say = True

if config.mcp.slack.is_enabled:
    print("Slack notifications enabled")
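The three properties likely derive from which credentials are set. The definitions below are a plausible sketch, not the actual implementation:

```python
class SlackSettings:
    """Illustrative stand-in for SlackConfig's enablement properties."""

    def __init__(self, webhook_url="", bot_token="", app_token=""):
        self.webhook_url = webhook_url
        self.bot_token = bot_token
        self.app_token = app_token

    @property
    def is_enabled(self):
        # Any credential at all turns the integration on
        return bool(self.webhook_url or self.bot_token)

    @property
    def is_bot_enabled(self):
        return bool(self.bot_token)

    @property
    def is_webhook_only(self):
        return bool(self.webhook_url) and not self.bot_token

s = SlackSettings(webhook_url="https://hooks.slack.com/services/T000/B000/x")
print(s.is_enabled, s.is_webhook_only, s.is_bot_enabled)  # True True False
```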

Validation Configuration

ValidationConfig

Protocol validation configuration.
class ValidationConfig(BaseModel):
    on_write: bool = True
    on_commit: bool = True
    fail_on_violation: bool = False
    check_branch_pairing: bool = True
    check_commit_footers: bool = True
    check_entry_format: bool = True
    check_status_values: bool = True
    
    entry: EntryValidationConfig
    commit: CommitValidationConfig
Example:
config.validation.fail_on_violation = True
config.validation.check_branch_pairing = True
config.validation.entry.require_metadata = True

Federation Configuration

FederationConfig

Federated search configuration.
class FederationConfig(BaseModel):
    enabled: bool = False
    namespaces: Dict[str, FederationNamespaceConfig]
    access: FederationAccessConfig
    scoring: FederationScoringConfig
    namespace_timeout: float = 0.4
    max_namespaces: int = 5
    max_total_timeout: float = 2.0
Example:
config.federation.enabled = True
config.federation.max_namespaces = 10
config.federation.namespace_timeout = 0.5
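The timeout fields bound both per-namespace latency and the whole fan-out. One way to read them, assuming namespaces are queried within a shared budget (a sketch, not the scheduler's actual logic):

```python
def fanout_budget(n_namespaces, namespace_timeout=0.4,
                  max_namespaces=5, max_total_timeout=2.0):
    """Worst-case time budget if every queried namespace hits its timeout."""
    queried = min(n_namespaces, max_namespaces)
    return min(queried * namespace_timeout, max_total_timeout)

print(fanout_budget(2))   # 0.8 — two namespaces at 0.4s each
print(fanout_budget(10))  # 2.0 — capped by max_namespaces and max_total_timeout
```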

Agent Configuration

AgentConfig

Configuration for specific agent platforms.
class AgentConfig(BaseModel):
    name: str
    default_spec: str = "general-purpose"
Example:
config.mcp.agents["claude-code"] = AgentConfig(
    name="Claude Code",
    default_spec="implementer-code"
)

Agent Resolution

# Get agent config by platform slug
agent_config = config.get_agent_config("claude-code")
if agent_config:
    print(agent_config.name)  # "Claude Code"

# Resolve agent name with priority order
name = config.resolve_agent_name(
    agent_func="Claude Code:sonnet-4:implementer",
    env_agent=os.getenv("WATERCOOLER_AGENT"),
    platform_slug="claude-code"
)
print(name)  # Uses priority: agent_func > env > platform > default
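The documented priority order amounts to a first-non-empty fallback over the candidate sources. This sketch mirrors that order only; it is not the real resolve_agent_name implementation:

```python
def resolve_agent_name(agent_func=None, env_agent=None,
                       platform_name=None, default="Agent"):
    """Return the first non-empty candidate: tool arg > env > platform > default."""
    for candidate in (agent_func, env_agent, platform_name):
        if candidate:
            return candidate
    return default

print(resolve_agent_name(env_agent="Claude Code"))  # Claude Code
print(resolve_agent_name())                         # Agent (the default)
```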

Environment Overrides

Many configuration values can be overridden with environment variables:
# MCP settings
export WATERCOOLER_AGENT="Claude Code"
export WATERCOOLER_THREADS_SUFFIX="-threads"

# Memory settings
export LLM_API_BASE="http://localhost:8080/v1"
export LLM_MODEL="qwen3:1.7b"
export EMBEDDING_API_BASE="http://localhost:8081/v1"
export EMBEDDING_MODEL="bge-m3"

# Database settings
export FALKORDB_HOST="localhost"
export FALKORDB_PORT="6379"

# Service provision
export WATERCOOLER_AUTO_PROVISION_MODELS="true"
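Conceptually, an environment value wins over the file value at load time. A stdlib sketch of that precedence; get_watercooler_config's real merge logic may differ:

```python
import os

def effective_value(env_var, file_value, default):
    """Environment variable > config file value > built-in default."""
    return os.environ.get(env_var) or file_value or default

os.environ["WATERCOOLER_AGENT"] = "Claude Code"
print(effective_value("WATERCOOLER_AGENT", "", "Agent"))  # Claude Code

del os.environ["WATERCOOLER_AGENT"]
print(effective_value("WATERCOOLER_AGENT", "", "Agent"))  # Agent
```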

Validation

Pydantic provides automatic validation:
from pydantic import ValidationError

try:
    config = WatercoolerConfig(
        mcp={"port": 99999}  # Invalid port
    )
except ValidationError as e:
    print(e)
    # Validation error: port must be <= 65535

Type Safety

All config fields are type-annotated, so static type checkers and Pydantic validation flag invalid assignments:
config.mcp.port = "8080"  # Type error
config.mcp.port = 8080    # OK

config.mcp.transport = "websocket"  # Type error (not in Literal)
config.mcp.transport = "http"       # OK
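The Literal annotation constrains transport to an exact set of strings; stdlib typing can express the same check outside Pydantic. An illustrative sketch, not Watercooler code:

```python
from typing import Literal, get_args

Transport = Literal["stdio", "http"]

def check_transport(value: str) -> str:
    # get_args recovers the allowed literals: ("stdio", "http")
    if value not in get_args(Transport):
        raise ValueError(f"transport must be one of {get_args(Transport)}")
    return value

print(check_transport("http"))   # http
# check_transport("websocket")   # would raise ValueError
```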
