## Overview

Baseline graph commands provide free-tier knowledge graph capabilities using extractive summaries and optional local LLM integration. No external dependencies required - works out of the box.
## Commands

### baseline-graph build

Build a baseline graph from threads, with extractive or LLM-based summaries.

#### Usage

```bash
watercooler baseline-graph build [options]
```

#### Options

- `--threads-dir <dir>`: Threads directory. Defaults to `./watercooler` or `$WATERCOOLER_DIR`.
- `--output <dir>`: Output directory for graph files. Defaults to `<threads-dir>/graph/baseline`. Alias: `-o`.
- `--extractive-only`: Use extractive summaries only (no LLM). Fastest; works offline.
- `--skip-closed`: Skip closed threads (only process OPEN threads).
#### Examples

**Build with local LLM**

```bash
watercooler baseline-graph build
```

Output:

```text
Building baseline graph from /path/to/watercooler...
Mode: LLM (http://localhost:11434)

Processing threads:
  ✓ feature-auth.md (5 entries)
  ✓ api-redesign.md (8 entries)
  ⊘ bug-123.md (skipped - closed)

Baseline graph built: /path/to/watercooler/graph/baseline
  Threads: 2
  Entries: 13
  Nodes: 26
  Edges: 15
```
**Extractive-only (fastest)**

```bash
watercooler baseline-graph build --extractive-only
```

Output:

```text
Building baseline graph from /path/to/watercooler...
Mode: extractive only (no LLM)

Processing threads:
  ✓ feature-auth.md (5 entries)
  ✓ api-redesign.md (8 entries)
  ✓ bug-123.md (3 entries)

Baseline graph built: /path/to/watercooler/graph/baseline
  Threads: 3
  Entries: 16
  Nodes: 32
  Edges: 18
```
**Custom output directory**

```bash
watercooler baseline-graph build --output ~/graphs/myapp
```
**Skip closed threads**

```bash
watercooler baseline-graph build --skip-closed
```

Output:

```text
Building baseline graph from /path/to/watercooler...
Mode: LLM (http://localhost:11434)
Skipping closed threads

Processing threads:
  ✓ feature-auth.md (5 entries)
  ⊘ api-redesign.md (skipped - closed)
  ⊘ bug-123.md (skipped - closed)

Baseline graph built: /path/to/watercooler/graph/baseline
  Threads: 1
  Entries: 5
  Nodes: 10
  Edges: 4
```
### baseline-graph stats

Show thread statistics for the baseline graph.

#### Usage

```bash
watercooler baseline-graph stats [options]
```

#### Options

- `--threads-dir <dir>`: Threads directory. Defaults to `./watercooler` or `$WATERCOOLER_DIR`.
#### Examples

**Show statistics**

```bash
watercooler baseline-graph stats
```

Output:

```text
Baseline Graph Statistics:
  Threads dir: /path/to/watercooler
  Total threads: 5
  Total entries: 28
  Avg entries/thread: 5.6

  Status breakdown:
    OPEN: 3
    CLOSED: 2
    BLOCKED: 0
```
**Stats for a specific directory**

```bash
watercooler baseline-graph stats --threads-dir ~/projects/myapp/watercooler
```
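The figures that `stats` reports are simple aggregates over thread metadata. A minimal sketch in Python of how they can be reproduced, using hypothetical thread records (the real command derives them from the thread files on disk):

```python
from collections import Counter

# Hypothetical per-thread records; the real command reads these
# from the thread files in the threads directory.
threads = [
    {"topic": "feature-auth", "status": "OPEN", "entries": 5},
    {"topic": "api-redesign", "status": "OPEN", "entries": 8},
    {"topic": "bug-123", "status": "CLOSED", "entries": 3},
]

total_threads = len(threads)
total_entries = sum(t["entries"] for t in threads)
avg_entries = total_entries / total_threads
# Status breakdown (OPEN / CLOSED / BLOCKED counts)
breakdown = Counter(t["status"] for t in threads)
```
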
## Summary Methods

### Extractive Summaries

- How it works: Extracts key sentences from entry text
- Pros: Fast, offline, no API costs
- Cons: Less coherent than LLM summaries
- Use when: Offline work, cost sensitive, quick analysis

```bash
watercooler baseline-graph build --extractive-only
```
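The extractive method is not specified in detail here; a naive frequency-based sentence scorer illustrates the general idea (an illustration only, not watercooler's actual algorithm):

```python
import re
from collections import Counter

def extractive_summary(text, max_sentences=2):
    """Naive extractive summary: score each sentence by the corpus
    frequency of its words, keep the top sentences in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Word frequencies over the whole entry; skip very short words.
    freq = Counter(w for w in re.findall(r"[a-z0-9']+", text.lower()) if len(w) > 3)
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w] for w in re.findall(r"[a-z0-9']+", sentences[i].lower())),
        reverse=True,
    )
    # Re-sort the kept indices so the summary reads in document order.
    return " ".join(sentences[i] for i in sorted(ranked[:max_sentences]))

entry_text = (
    "We compared OAuth2 flows for the auth service. "
    "OAuth2 with PKCE avoids storing a client secret. "
    "Lunch was good. "
    "The team agreed to adopt OAuth2 with PKCE for the auth service."
)
summary = extractive_summary(entry_text)
```

Sentences that repeat the entry's dominant terms (here, the OAuth2 decision) score highest, while off-topic asides drop out.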
### Local LLM Summaries

- How it works: Uses a local Ollama or similar LLM
- Pros: Better quality than extractive, still free
- Cons: Requires local LLM setup, slower than extractive
- Use when: Quality matters, local LLM available

Default LLM endpoint: `http://localhost:11434` (Ollama).

Configure via environment:

```bash
export WATERCOOLER_LLM_BASE_URL=http://localhost:11434
watercooler baseline-graph build
```
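The endpoint lookup amounts to a one-line precedence rule: the environment variable wins, otherwise the Ollama default applies. A sketch of that rule (not watercooler's internals):

```python
import os

OLLAMA_DEFAULT = "http://localhost:11434"  # default endpoint per the docs above

def resolve_llm_base_url(env=None):
    """Return $WATERCOOLER_LLM_BASE_URL if set, else the Ollama default."""
    env = os.environ if env is None else env
    return env.get("WATERCOOLER_LLM_BASE_URL") or OLLAMA_DEFAULT
```
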
## Output Structure

A baseline graph build creates these files:

```text
graph/baseline/
├── manifest.json    # Graph metadata
├── threads.json     # Thread index
├── nodes.json       # Graph nodes
├── edges.json       # Graph edges
└── summaries/       # Entry summaries
    ├── entry-001.txt
    └── entry-002.txt
```
### manifest.json

```json
{
  "version": "1.0",
  "threads_exported": 3,
  "entries_exported": 16,
  "nodes_written": 32,
  "edges_written": 18,
  "summary_method": "extractive",
  "created_at": "2024-03-15T14:30:22Z"
}
```
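A downstream consumer can sanity-check a manifest before trusting the rest of the graph. A minimal sketch, assuming only the fields shown above (the `REQUIRED` set and `validate_manifest` helper are illustrative, not part of the CLI):

```python
import json

# Fields observed in the manifest example above; illustrative only.
REQUIRED = {"version", "threads_exported", "entries_exported",
            "nodes_written", "edges_written", "summary_method", "created_at"}

def validate_manifest(raw):
    """Parse a manifest and fail loudly if an expected field is missing."""
    manifest = json.loads(raw)
    missing = REQUIRED - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing fields: {sorted(missing)}")
    return manifest

manifest = validate_manifest(
    '{"version": "1.0", "threads_exported": 3, "entries_exported": 16, '
    '"nodes_written": 32, "edges_written": 18, '
    '"summary_method": "extractive", "created_at": "2024-03-15T14:30:22Z"}'
)
```
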
### nodes.json

```json
[
  {
    "id": "thread:feature-auth",
    "type": "thread",
    "topic": "feature-auth",
    "status": "OPEN",
    "ball": "codex"
  },
  {
    "id": "entry:01H...",
    "type": "entry",
    "thread": "feature-auth",
    "title": "Design complete",
    "summary": "OAuth2 with PKCE selected for auth."
  }
]
```
### edges.json

```json
[
  {
    "source": "thread:feature-auth",
    "target": "entry:01H...",
    "type": "contains"
  },
  {
    "source": "entry:01H...",
    "target": "entry:01J...",
    "type": "precedes"
  }
]
```
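Because the nodes and edges are plain JSON, the graph is straightforward to load and traverse. A sketch assuming the schema shown above (the id `entry:EXAMPLE` is a placeholder for a real entry id):

```python
import json
from collections import defaultdict

# Inline samples mirroring the nodes.json / edges.json schema above;
# "entry:EXAMPLE" stands in for a real entry id.
nodes_raw = '''[
  {"id": "thread:feature-auth", "type": "thread", "topic": "feature-auth",
   "status": "OPEN", "ball": "codex"},
  {"id": "entry:EXAMPLE", "type": "entry", "thread": "feature-auth",
   "title": "Design complete", "summary": "OAuth2 with PKCE selected for auth."}
]'''
edges_raw = '''[
  {"source": "thread:feature-auth", "target": "entry:EXAMPLE", "type": "contains"}
]'''

nodes = {n["id"]: n for n in json.loads(nodes_raw)}
adjacency = defaultdict(list)
for edge in json.loads(edges_raw):
    # Every edge endpoint should resolve to a known node.
    assert edge["source"] in nodes and edge["target"] in nodes, "dangling edge"
    adjacency[edge["source"]].append((edge["type"], edge["target"]))

# List the titles of the entries a thread contains.
titles = [nodes[target]["title"]
          for etype, target in adjacency["thread:feature-auth"]
          if etype == "contains"]
```
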
## Comparison: Baseline vs Memory

| Feature | Baseline Graph | Memory Graph |
|---|---|---|
| Cost | Free | API costs |
| Dependencies | None | openai, embeddings |
| Summaries | Extractive or local LLM | Cloud LLM |
| Embeddings | No | Yes |
| Semantic search | No | Yes |
| Offline | Yes (extractive mode) | No |
| Quality | Good | Excellent |
| Speed | Fast | Slower |
| Best for | Free tier, offline, quick analysis | Production, semantic search |
## Workflows

### Quick local analysis

```bash
# Build extractive graph
watercooler baseline-graph build --extractive-only

# View statistics
watercooler baseline-graph stats
```
### Local LLM workflow

```bash
# Start Ollama (if not running)
ollama serve

# Pull a model
ollama pull llama2

# Build graph with LLM
watercooler baseline-graph build
```
### Focus on active work

```bash
# Only process open threads
watercooler baseline-graph build --skip-closed
```
### Custom output location

```bash
# Export to a shared location
watercooler baseline-graph build \
  --output /mnt/shared/graphs/myapp \
  --extractive-only
```
## Local LLM Setup

### Ollama (recommended)

```bash
# Install Ollama
curl https://ollama.ai/install.sh | sh

# Start server
ollama serve

# Pull a model
ollama pull llama2

# Test the endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Hello"
}'

# Build graph
watercooler baseline-graph build
```
### Custom LLM endpoint

```bash
# Point to a custom endpoint
export WATERCOOLER_LLM_BASE_URL=http://localhost:8080
watercooler baseline-graph build
```
## Build Time

- Extractive: ~1 second per thread
- Local LLM: ~5-10 seconds per thread (depends on the model)
## Disk Usage

- Minimal: JSON files plus text summaries
- Approximately 1-5 MB per 100 entries
## Error Handling

### LLM connection failed

```text
⚠ LLM not available at http://localhost:11434
Falling back to extractive summaries.
```

The build falls back to extractive mode automatically.

### Thread parse error

```text
✗ Error parsing bug-123.md: Invalid YAML frontmatter
Skipping thread.
```

The build continues with the remaining threads.

### Output directory exists

An existing graph in the output directory is overwritten.
## Use Cases

### Offline development

```bash
# Works without internet
watercooler baseline-graph build --extractive-only
```

### Cost-free analysis

```bash
# No API costs
watercooler baseline-graph build --extractive-only
watercooler baseline-graph stats
```

### Local LLM experimentation

```bash
# Try different models
ollama pull llama2
watercooler baseline-graph build

ollama pull mistral
export WATERCOOLER_LLM_MODEL=mistral
watercooler baseline-graph build
```
### CI/CD integration

```yaml
# .github/workflows/graph.yml
steps:
  - name: Build graph
    run: |
      watercooler baseline-graph build --extractive-only
      watercooler baseline-graph stats
```