The Add Ollama Tool skill adds a stdio-based MCP server that exposes local Ollama models as tools for the container agent. Claude remains the orchestrator but can offload work to local models for cheaper/faster tasks.

What It Does

The Add Ollama Tool skill:
  • Adds Ollama MCP server to agent-runner
  • Exposes tools to list and run local Ollama models
  • Enables Claude to delegate tasks to local models
  • Provides notification watcher for macOS

Prerequisites

  • NanoClaw base installation complete
  • Ollama installed on host machine
  • At least one Ollama model pulled (e.g., gemma3:1b, llama3.2)

How to Apply

Step 1: Install Ollama

If not already installed:
  1. Download from https://ollama.com/download
  2. Install and start Ollama
  3. Pull a model:
    ollama pull gemma3:1b    # Small, fast (1GB)
    ollama pull llama3.2     # Good general purpose (2GB)
    ollama pull qwen3-coder:30b  # Best for code (18GB)
    
Step 2: Invoke the skill

Run /add-ollama-tool in your NanoClaw context.
Step 3: Apply code changes

The skill runs npx tsx scripts/apply-skill.ts .claude/skills/add-ollama-tool, which:
  • Adds container/agent-runner/src/ollama-mcp-stdio.ts
  • Adds scripts/ollama-watch.sh (notification watcher)
  • Merges Ollama MCP config into agent-runner
  • Merges log surfacing into container-runner
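The merged MCP entry in index.ts will look roughly like the sketch below. The key names, command, and file path here are illustrative assumptions, not copied from the skill's actual output:

```typescript
// Hypothetical shape of the merged agent-runner config (names are illustrative).
const mcpServers = {
  ollama: {
    type: "stdio",
    command: "node",
    args: ["./ollama-mcp-stdio.js"], // the MCP server added by the skill
  },
};

// Claude must also be permitted to call the new tools.
const allowedTools = ["ollama_list_models", "ollama_generate"];
```

If the agent later reports that Ollama tools are unavailable, this entry (or the per-group copy of it) is the first place to check.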
Step 4: Copy to per-group agent-runner

for dir in data/sessions/*/agent-runner-src; do
  cp container/agent-runner/src/ollama-mcp-stdio.ts "$dir/"
  cp container/agent-runner/src/index.ts "$dir/"
done
Step 5: Rebuild and restart

npm run build
./container/build.sh
launchctl kickstart -k gui/$(id -u)/com.nanoclaw

What Changes

Files Created

  • container/agent-runner/src/ollama-mcp-stdio.ts - Ollama MCP server
  • scripts/ollama-watch.sh - macOS notification watcher

Files Modified

  • container/agent-runner/src/index.ts - Adds Ollama MCP server to allowedTools and mcpServers
  • src/container-runner.ts - Surfaces [OLLAMA] logs to host
  • .nanoclaw/state.yaml - Records skill application

Usage

Tools Available

  • ollama_list_models - Lists installed Ollama models
  • ollama_generate - Sends prompt to specified model and returns response
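Under the hood, these tools presumably wrap Ollama's standard HTTP API: GET /api/tags to list models and POST /api/generate for completions. A minimal sketch of the request shapes; the helper names (buildListRequest, buildGenerateRequest) are illustrative, not the MCP server's actual internals:

```typescript
// Sketch of the Ollama HTTP requests the MCP tools presumably issue.
// Falls back to the Docker Desktop host gateway, as described below.
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://host.docker.internal:11434";

// ollama_list_models → GET /api/tags
function buildListRequest(): { url: string; method: string } {
  return { url: `${OLLAMA_HOST}/api/tags`, method: "GET" };
}

// ollama_generate → POST /api/generate (stream: false returns a single JSON body)
function buildGenerateRequest(model: string, prompt: string) {
  return {
    url: `${OLLAMA_HOST}/api/generate`,
    method: "POST",
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}

// Example: ask a small local model a quick question.
const req = buildGenerateRequest("gemma3:1b", "What is 2+2?");
console.log(req.url); // e.g. http://host.docker.internal:11434/api/generate
```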

Example Requests

You: Use ollama to summarize this article: [paste article]
Andy: [uses ollama_list_models, then ollama_generate with fast model]

You: Use a local model to translate this to Spanish
Andy: [uses ollama_generate with appropriate model]

You: Use ollama with gemma3:1b to answer: what's 2+2?
Andy: [explicitly uses specified model]

When Claude Uses Ollama

Claude automatically delegates to Ollama for:
  • Quick factual queries
  • Summarization
  • Translation
  • Simple code tasks
  • Repetitive operations
Claude handles:
  • Tool use and orchestration
  • Complex reasoning
  • File operations
  • Final response formatting
The MCP server connects to Ollama at http://host.docker.internal:11434 by default (Docker Desktop's host gateway). Set OLLAMA_HOST in .env to point at a custom host.
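For example, if Ollama runs on another machine on your network (the address below is illustrative), add to .env:

```shell
# .env — point the agent at a non-default Ollama host (example address)
OLLAMA_HOST=http://192.168.1.50:11434
```

Restart the service (Step 5) after changing this value so the container picks it up.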

Optional: Activity Monitoring

Run the watcher for macOS notifications when Ollama is used:
./scripts/ollama-watch.sh
Check logs:
tail -f logs/nanoclaw.log | grep -i ollama

Troubleshooting

Agent Says “Ollama is not installed”

The agent is trying to run the ollama CLI inside the container instead of using the MCP tools. This usually means one of the following:
  1. MCP server wasn’t registered - check container/agent-runner/src/index.ts has ollama entry
  2. Per-group source wasn’t updated - re-copy files (see Step 4)
  3. Container wasn’t rebuilt - run ./container/build.sh

“Failed to connect to Ollama”

  1. Verify Ollama is running:
    ollama list
    
  2. Check Docker can reach host:
    docker run --rm curlimages/curl curl -s http://host.docker.internal:11434/api/tags
    
  3. If using custom host, check OLLAMA_HOST in .env

Agent Doesn’t Use Ollama Tools

Be explicit:
You: Use the ollama_generate tool with gemma3:1b to answer: [question]
Or:
You: Use a local model via ollama to [task]

No Models Available

Pull a model on the host:
ollama pull gemma3:1b
Verify it appears:
ollama list
