Overview
OpenCode (opencode) provides flexible AI coding with support for Claude, GPT, Gemini, and local Ollama models like Qwen3.
OpenCode supports session resume, plan mode, image input, cost tracking, and works with both cloud and local models.
Installation
Capabilities
OpenCode provides comprehensive AI features:

- Resume sessions with the `--session` flag
- Plan mode with `--agent plan`
- Structured JSON output with `--format json`
- Session IDs in the `sessionID` field (camelCase)
- Attach images with the `-f, --file` flag
- Images work with the `--session` flag
- Sessions stored in `~/.local/share/opencode/storage/` (JSON files)
- Cost in USD from `part.cost` in `step_finish` events
- Token counts in `part.tokens` from `step_finish` events
- Select models with `--model provider/model`
- Streaming text chunks for reasoning
- Receive merged context via prompts
- Export session context for transfer
src/main/agents/capabilities.ts:262
Command-Line Arguments
Maestro uses the `run` subcommand for batch operations:

- `run`: batch execution mode (required for Maestro)
- `--format json`: output format for structured parsing
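Putting those two pieces together, a minimal batch invocation looks like the sketch below (the prompt text is illustrative, and the command is guarded so it degrades to a dry run on machines where opencode is not installed):

```shell
# `run` enables batch mode; `--format json` emits structured output for parsing.
CMD='opencode run --format json "Summarize this repository"'
if command -v opencode >/dev/null 2>&1; then
  eval "$CMD"
else
  echo "dry run: $CMD"   # opencode not installed here
fi
```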
OpenCode doesn’t use a `--` separator before the prompt; the prompt is a positional argument.

Resume Mode
Resume an existing session:

- `--session <id>`: session ID to resume
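A sketch of resuming (the session ID below is hypothetical; a real one comes from the `sessionID` field of an earlier run's JSON output):

```shell
# Hypothetical session ID; read the real value from a previous run's
# sessionID field (camelCase) in the JSON output.
SESSION_ID="ses_example"
CMD="opencode run --format json --session $SESSION_ID \"Continue the previous task\""
if command -v opencode >/dev/null 2>&1; then
  eval "$CMD"
else
  echo "dry run: $CMD"   # opencode not installed here
fi
```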
Plan Mode (Read-Only)
For analysis without modifications:

- `--agent plan`: agent type for execution (`plan` = read-only)
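For example, a read-only review pass might look like this (prompt text is illustrative; guarded for machines without opencode):

```shell
# --agent plan selects the read-only plan agent; no files are modified.
CMD='opencode run --agent plan --format json "Review the architecture of this repo"'
if command -v opencode >/dev/null 2>&1; then
  eval "$CMD"
else
  echo "dry run: $CMD"   # opencode not installed here
fi
```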
Model Selection
Specify a model:

- `--model <provider/model>`: model in `provider/model` format

Examples:

- `anthropic/claude-sonnet-4-20250514`
- `openai/gpt-4o`
- `ollama/qwen3:8b`
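Any of the `provider/model` strings above can be substituted into an invocation like this (guarded dry run; prompt is illustrative):

```shell
# Select a local Ollama model; swap in any provider/model string.
MODEL="ollama/qwen3:8b"
CMD="opencode run --model $MODEL --format json \"Add a unit test\""
if command -v opencode >/dev/null 2>&1; then
  eval "$CMD"
else
  echo "dry run: $CMD"   # opencode not installed here
fi
```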
File/Image Input
Attach a file or image:

- `-f, --file <path>`: path to the file or image
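A sketch of attaching a file (the path and contents are throwaway examples; guarded for machines without opencode):

```shell
# Create a small throwaway file to attach (path is illustrative).
printf 'TODO: tidy up error handling\n' > /tmp/maestro-note.txt
CMD='opencode run --format json -f /tmp/maestro-note.txt "Summarize the attached file"'
if command -v opencode >/dev/null 2>&1; then
  eval "$CMD"
else
  echo "dry run: $CMD"   # opencode not installed here
fi
```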
src/main/agents/definitions.ts:202
Configuration
Configure OpenCode in Maestro’s agent settings:

Model Override

- Model to use (empty = OpenCode’s default)
- Format: `provider/model`
- Examples: `ollama/qwen3:8b`, `anthropic/claude-sonnet-4-20250514`, `openai/gpt-4o`
Context Window
Token limit for the selected model. Varies by model (e.g., 400,000 for Claude/GPT-5.2).
src/main/agents/definitions.ts:234
Environment Variables
OpenCode requires special environment configuration for batch mode:

YOLO Mode (Auto-approve)
Maestro sets this by default to prevent permission prompts:

- Allow all permissions by default
- Allow access to directories outside the project
- Disable interactive questions (prevents stdin hangs)
- Disable the question tool (prevents stdin hangs)
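The exact JSON Maestro injects is not reproduced here. As a rough sketch only (the `permission` key and its values are assumptions about OpenCode's config schema, not taken from this page), the variable is set like so:

```shell
# Sketch only: the key names inside the JSON are assumptions. Maestro sets
# the real value automatically, so this is rarely needed by hand.
export OPENCODE_CONFIG_CONTENT='{"permission":{"edit":"allow","bash":"allow"}}'
```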
Read-Only Mode Override
In plan mode, Maestro strips blanket permissions:

src/main/agents/definitions.ts:224
Session Storage
OpenCode stores sessions as JSON files:

- Complete conversation history
- Model configuration
- Token usage and costs
- File operations log
src/main/storage/opencode-session-storage.ts
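To inspect what is stored locally, list the storage directory (guarded, since it only exists after at least one session has run on the machine):

```shell
# Session files live under the path documented above.
STORE="$HOME/.local/share/opencode/storage"
ls "$STORE" 2>/dev/null || echo "no OpenCode session store at $STORE"
```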
Output Format
OpenCode outputs newline-delimited JSON events:

src/main/parsers/opencode-parser.ts
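As a rough illustration of consuming that stream, the snippet below writes two sample lines (the `step_finish` / `part.cost` / `part.tokens` shape matches the Capabilities section; everything else in the payloads is an assumption) and filters out the cost-bearing events:

```shell
# Sample of the newline-delimited stream; payload details beyond
# step_finish / part.cost / part.tokens are illustrative assumptions.
cat > /tmp/opencode-events.ndjson <<'EOF'
{"type":"step_finish","part":{"cost":0.0042,"tokens":{"input":1200,"output":350}}}
{"type":"text","part":{"text":"All done."}}
EOF
# Keep only the events that carry cost and token data.
grep '"type":"step_finish"' /tmp/opencode-events.ndjson
```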
Supported Models
OpenCode supports multiple providers:

Anthropic Claude
- `anthropic/claude-sonnet-4-20250514`
- `anthropic/claude-opus-4`
- `anthropic/claude-haiku-4`
OpenAI
- `openai/gpt-4o`
- `openai/gpt-5.1`
- `openai/o3`
Ollama (Local)
- `ollama/qwen3:8b`
- `ollama/codellama:34b`
- `ollama/deepseek-coder:33b`
Google Gemini
- `google/gemini-2.0-flash`
- `google/gemini-pro`
Error Patterns
Common errors Maestro detects:

- Pattern: `API key not found`
  Solution: set the appropriate API key (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)
- Pattern: `model not found`
  Solution: check the model name format and availability
- Pattern: `permission denied`
  Solution: check OPENCODE_CONFIG_CONTENT or file permissions

src/main/parsers/error-patterns.ts
Usage with Maestro Features
Auto Run
Full support for playbooks with multi-model flexibility
Group Chat
Multi-agent collaboration with other providers
Local Models
Use Ollama for offline/private coding assistance
Context Grooming
Export and merge conversation context
Best Practices
Choose Model Based on Task
- Local models (Ollama) for privacy/offline
- Claude for complex reasoning
- GPT for general tasks
Troubleshooting
OpenCode not detected
Verify the installation and confirm the binary is at its default path:
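Two quick checks (the `--version` flag is an assumption; most CLIs support it):

```shell
# Check PATH first; if found, confirm the binary actually responds.
if command -v opencode >/dev/null 2>&1; then
  opencode --version   # flag assumed; verify against OpenCode's help output
else
  echo "opencode not on PATH"
fi
```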
Model not available
For Ollama models, ensure Ollama is running:
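For example (guarded for machines without Ollama):

```shell
# `ollama list` shows locally pulled models; the target model must appear here.
if command -v ollama >/dev/null 2>&1; then
  ollama list
else
  echo "ollama not installed"
fi
```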
Permission prompts hang
Verify OPENCODE_CONFIG_CONTENT is set correctly. Maestro should set this automatically.