
The Core Insight

Think of AI models as developers on a team. Each has a different brain, different personality, different strengths.
A model isn’t just “smarter” or “dumber.” It thinks differently. Give the same instruction to Claude and GPT, and they’ll interpret it in fundamentally different ways.
Oh My OpenCode assigns each agent a model that matches its working style, like building a team where each person is in the role that fits their personality.

How Claude and GPT Think Differently

Claude: Mechanics-Driven

Best with:
  • Detailed checklists
  • Step-by-step procedures
  • Explicit templates
  • Nested workflows
Philosophy: More rules = more compliance. You can write a 1,100-line prompt with nested workflows and Claude will follow every step.
Example agent: Prometheus’s Claude prompt is ~1,100 lines across 7 files.

GPT: Principle-Driven

Best with:
  • Clear goals and constraints
  • Compact, principle-driven prompts
  • Autonomous, goal-oriented execution
Philosophy: State the goal, not the recipe. GPT reasons from principles and works independently; long nested rule lists add noise rather than compliance.
Example agent: Prometheus’s GPT prompt is ~121 lines.
Agents that support both families (Prometheus, Atlas) auto-detect your model at runtime and switch prompts.

Agent Profiles

Communicators → Claude / Kimi / GLM

These agents have Claude-optimized prompts—long, detailed, mechanics-driven. They need models that reliably follow complex, multi-layered instructions.
Sisyphus

Role: Main orchestrator
Model: Claude Opus 4.6
Why Claude:
  • Follows complex multi-step instructions (prompt is ~1,100 lines)
  • Maintains conversation flow across many tool calls
  • Understands nuanced delegation patterns
  • Produces well-structured output
Fallback chain:
Claude Opus → Kimi K2.5 → GLM 5
Notes:
  • No GPT prompt exists — Sisyphus is Claude-family only
  • Using Sisyphus with GPT would be like taking your best project manager and sticking them in a room alone to debug a race condition
Role: Plan gap analyzer
Model: Claude Opus 4.6
Why Claude:
  • Excellent at finding ambiguities
  • Detects AI-slop patterns
  • Identifies hidden intentions
Fallback chain:
Claude Opus → Kimi K2.5 → GPT-5.2 → Gemini 3 Pro
Notes: Claude preferred, GPT acceptable fallback.

Dual-Prompt Agents → Claude preferred, GPT supported

These agents ship separate prompts for Claude and GPT families. They auto-detect your model and switch at runtime.
Prometheus

Role: Strategic planner
Model: Claude Opus 4.6 (with extended thinking)
Why dual-prompt:
  • Claude: detailed mechanics for interview process
  • GPT: compact principle-driven planning
Fallback chain:
Claude Opus → GPT-5.2 → Kimi K2.5 → Gemini 3 Pro
Auto-detection:
if (isGptModel(modelID)) {
  return GPT_PROMETHEUS_PROMPT  // ~121 lines
} else {
  return CLAUDE_PROMETHEUS_PROMPT  // ~1,100 lines
}
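The isGptModel helper referenced above is not defined on this page. A minimal sketch, assuming model IDs follow the provider/model naming used in the configuration examples below; this helper is illustrative, not the project’s actual implementation:

```typescript
// Illustrative only: classify a model ID as GPT-family by name.
// Assumes IDs like "openai/gpt-5.2" or "opencode/gpt-5-nano".
function isGptModel(modelID: string): boolean {
  const name = modelID.split("/").pop() ?? modelID;
  return name.startsWith("gpt-") || name.includes("codex");
}
```

Anything that doesn’t match falls through to the Claude-style prompt, which is consistent with Gemini 3 Pro using the Claude-style prompt in the Prometheus notes.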
Atlas

Role: Todo orchestrator
Model: Kimi K2.5
Why Kimi: Claude-like behavior at lower cost. Sweet spot for orchestration.
Fallback chain:
Kimi K2.5 → Claude Sonnet → GPT-5.2
Auto-detection: Switches between Claude-optimized and GPT-optimized prompts.

Deep Specialists → GPT

These agents are built for GPT’s principle-driven style. Their prompts assume autonomous, goal-oriented execution.
Hephaestus

Role: Autonomous deep worker
Model: GPT-5.3 Codex (medium reasoning)
Why GPT:
  • Deep autonomous exploration
  • Multi-file reasoning
  • Principle-driven execution (goal, not recipe)
  • Works independently for extended periods
Fallback chain:
GPT-5.3 Codex only (no fallback)
Notes:
  • Do not override to Claude. Hephaestus is built for Codex’s autonomous style.
  • Named after Greek god of forge and craftsmanship.
  • Inspired by AmpCode’s deep mode.
Oracle

Role: Architecture consultant
Model: GPT-5.2
Why GPT:
  • High-IQ strategic reasoning
  • Deep logical analysis
  • Read-only consultation
Fallback chain:
GPT-5.2 → Gemini 3 Pro → Claude Opus
Momus

Role: Ruthless plan reviewer
Model: GPT-5.2
Why GPT:
  • Verification and critique
  • Different perspective from Claude
  • Rigorous logic
Fallback chain:
GPT-5.2 → Claude Opus → Gemini 3 Pro

Utility Runners → Speed over Intelligence

These agents do grep, search, and retrieval. They intentionally use the fastest, cheapest models available.
Don’t “upgrade” utility agents to Opus. That’s like hiring a senior engineer to file paperwork.
Explore

Role: Fast codebase grep
Model: Grok Code Fast 1
Why fast/cheap:
  • Speed is everything
  • Fire 10 in parallel
  • Simple pattern matching
Fallback chain:
Grok Code Fast → MiniMax → Haiku → GPT-5-Nano
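“Fire 10 in parallel” works because the searches are independent, so they can fan out concurrently. A sketch of that pattern, where runExplore is a hypothetical stand-in for dispatching one explore task (not part of Oh My OpenCode’s actual API):

```typescript
// Hypothetical stand-in for dispatching one explore task.
async function runExplore(pattern: string): Promise<string> {
  return `matches for ${pattern}`; // a real dispatcher would call the agent
}

// Fan out independent searches concurrently instead of one at a time.
async function exploreAll(patterns: string[]): Promise<string[]> {
  return Promise.all(patterns.map((p) => runExplore(p)));
}
```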
Librarian

Role: Docs/code search
Model: Gemini Flash
Why fast/cheap:
  • Doc retrieval doesn’t need deep reasoning
  • High volume of searches
Fallback chain:
Gemini Flash → MiniMax → GLM
Role: Vision/screenshots
Model: Kimi K2.5
Why Kimi:
  • Excels at multimodal understanding
  • Good image analysis
Fallback chain:
Kimi K2.5 → Gemini Flash → GPT-5.2 → GLM-4.6v

Model Families

Claude Family

Characteristics:
  • Communicative
  • Instruction-following
  • Structured output
Model | Strengths | Use For
Claude Opus 4.6 | Best overall. Highest compliance with complex prompts. | Sisyphus, Prometheus
Claude Sonnet 4.6 | Faster, cheaper. Good balance. | Everyday tasks
Claude Haiku 4.5 | Fast and cheap. | Quick tasks, utility work
Kimi K2.5 | Claude-like behavior at lower cost. | Atlas, all-rounder
GLM 5 | Claude-like, solid for orchestration. | Cost-effective orchestration

GPT Family

Characteristics:
  • Principle-driven
  • Explicit reasoning
  • Deep technical capability
Model | Strengths | Use For
GPT-5.3 Codex | Deep coding powerhouse. Autonomous exploration. | Hephaestus (required)
GPT-5.2 | High intelligence, strategic reasoning. | Oracle, Momus
GPT-5-Nano | Ultra-cheap, fast. | Simple utility tasks

Other Models

Model | Strengths | Use For
Gemini 3 Pro | Visual/frontend tasks. Different reasoning style. | visual-engineering, artistry
Gemini 3 Flash | Fast. Doc search and light tasks. | Librarian
Grok Code Fast 1 | Blazing fast code grep. | Explore
MiniMax M2.5 | Fast and smart. Utility tasks. | Fallback for search/retrieval

Free-Tier Fallbacks

You may see model names like kimi-k2.5-free, minimax-m2.5-free, or big-pickle (GLM 4.6) in logs. These are free-tier versions served through the OpenCode Zen provider.
You don’t need to configure free-tier fallbacks. The system includes them automatically so it degrades gracefully when you don’t have every paid subscription.

Task Categories

When agents delegate work, they don’t pick a model name—they pick a category. The category maps to the right model automatically.
Category | When Used | Fallback Chain
visual-engineering | Frontend, UI, CSS, design | Gemini 3 Pro → GLM 5 → Claude Opus
ultrabrain | Maximum reasoning needed | GPT-5.3 Codex → Gemini 3 Pro → Claude Opus
deep | Deep coding, complex logic | GPT-5.3 Codex → Claude Opus → Gemini 3 Pro
artistry | Creative, novel approaches | Gemini 3 Pro → Claude Opus → GPT-5.2
quick | Simple, fast tasks | Claude Haiku → Gemini Flash → GPT-5-Nano
unspecified-high | General complex work | Claude Opus → GPT-5.2 → Gemini 3 Pro
unspecified-low | General standard work | Claude Sonnet → GPT-5.3 Codex → Gemini Flash
writing | Text, docs, prose | Gemini Flash → Claude Sonnet
See the Orchestration Guide for how agents dispatch tasks to categories.
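The category mapping behaves like a lookup plus a first-available scan. A simplified sketch, with chain contents abbreviated from the table and the availability check standing in for the real provider logic:

```typescript
// Simplified category → fallback-chain table (abbreviated from the doc).
const chains: Record<string, string[]> = {
  quick: ["claude-haiku", "gemini-flash", "gpt-5-nano"],
  deep: ["gpt-5.3-codex", "claude-opus", "gemini-3-pro"],
  writing: ["gemini-flash", "claude-sonnet"],
};

// Walk the chain and return the first model the user can actually reach.
function resolveCategory(
  category: string,
  available: Set<string>,
): string | undefined {
  return chains[category]?.find((model) => available.has(model));
}
```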

Customization

Example Configuration

{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/dev/assets/oh-my-opencode.schema.json",

  "agents": {
    // Main orchestrator: Claude Opus or Kimi K2.5 work best
    "sisyphus": {
      "model": "kimi-for-coding/k2p5",
      "ultrawork": { 
        "model": "anthropic/claude-opus-4-6", 
        "variant": "max" 
      }
    },

    // Research agents: cheaper models are fine
    "librarian": { "model": "zai-coding-plan/glm-4.7" },
    "explore":   { "model": "github-copilot/grok-code-fast-1" },

    // Architecture consultation: GPT or Claude Opus
    "oracle": { 
      "model": "openai/gpt-5.2", 
      "variant": "high" 
    },

    // Prometheus inherits sisyphus model; just add prompt guidance
    "prometheus": { 
      "prompt_append": "Leverage deep & quick agents heavily, always in parallel." 
    }
  },

  "categories": {
    "quick": { "model": "opencode/gpt-5-nano" },
    "unspecified-low": { "model": "kimi-for-coding/k2p5" },
    "unspecified-high": { 
      "model": "anthropic/claude-sonnet-4-6", 
      "variant": "max" 
    },
    "visual-engineering": { 
      "model": "google/gemini-3-pro", 
      "variant": "high" 
    },
    "writing": { "model": "kimi-for-coding/k2p5" }
  },

  // Limit expensive providers; let cheap ones run freely
  "background_task": {
    "providerConcurrency": { 
      "anthropic": 3, 
      "openai": 3, 
      "opencode": 10, 
      "zai-coding-plan": 10 
    },
    "modelConcurrency": { 
      "anthropic/claude-opus-4-6": 2, 
      "opencode/gpt-5-nano": 20 
    }
  }
}

Safe vs Dangerous Overrides

Safe (same personality type):
{
  "agents": {
    // Sisyphus: Opus → Sonnet, Kimi K2.5, GLM 5
    // All communicative models
    "sisyphus": { "model": "kimi-for-coding/k2p5" },
    
    // Prometheus: Opus → GPT-5.2
    // Auto-switches to GPT prompt
    "prometheus": { "model": "openai/gpt-5.2" },
    
    // Atlas: Kimi K2.5 → Sonnet, GPT-5.2
    // Auto-switches to GPT prompt
    "atlas": { "model": "anthropic/claude-sonnet-4-6" }
  }
}
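Crossing personality types, by contrast, is the dangerous case. A hypothetical example of overrides this guide warns against:

```json
{
  "agents": {
    // Dangerous: Sisyphus has no GPT prompt; performance degrades badly
    "sisyphus": { "model": "openai/gpt-5.2" },

    // Dangerous: Hephaestus is built for Codex's autonomous style
    "hephaestus": { "model": "anthropic/claude-opus-4-6" }
  }
}
```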

Model Resolution

Each agent has a fallback chain. The system tries models in priority order until it finds one available through your connected providers.
Agent Request → User Override (if configured) → Fallback Chain → System Default
You don’t need to configure providers per model—just authenticate (opencode auth login) and the system figures out which models are available and where.
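The resolution order can be sketched in a few lines. This mirrors the flow above, not the actual implementation; the availability check is a stand-in for the real provider lookup:

```typescript
// Resolution order: user override → fallback chain → system default.
// `available` stands in for the real "which models can I reach" check.
function resolveModel(
  override: string | undefined,
  fallbackChain: string[],
  available: Set<string>,
  systemDefault: string,
): string {
  if (override !== undefined && available.has(override)) return override;
  return fallbackChain.find((m) => available.has(m)) ?? systemDefault;
}
```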

Checking Available Models

# List all models you have access to
opencode models

# Run diagnostics
bunx oh-my-opencode doctor --verbose

Common Questions

Can I run Sisyphus on GPT?

No. Sisyphus has no GPT prompt. It’s designed for Claude’s mechanics-driven instruction-following, and using GPT will significantly degrade performance. If you want GPT-style reasoning, use Hephaestus instead.
Why can’t Hephaestus use Claude?

Hephaestus is built around GPT-5.3 Codex’s autonomous exploration style. Its prompt assumes:
  • Goal-oriented execution
  • Minimal hand-holding
  • Deep independent reasoning
Claude’s mechanics-driven style doesn’t match this working pattern.
Which models are safe for Prometheus?

Prometheus auto-detects Claude vs GPT and switches prompts accordingly. Safe options:
  • Claude: Opus, Sonnet, Kimi K2.5, GLM 5
  • GPT: GPT-5.2, GPT-5.3 Codex
  • Gemini: Gemini 3 Pro (uses Claude-style prompt)
How do I reduce costs?

Target utility agents first:
{
  "agents": {
    "explore": { "model": "opencode/gpt-5-nano" },
    "librarian": { "model": "opencode/gpt-5-nano" }
  },
  "categories": {
    "quick": { "model": "opencode/gpt-5-nano" }
  }
}
Keep Sisyphus/Prometheus on quality models—they’re worth it.

Custom Categories

Define domain-specific model presets

Background Agents

Concurrency limits per model/provider

Prometheus Planning

Why Prometheus uses Claude Opus with extended thinking

Configuration

Full agent configuration reference
