OpenCode is an open-source AI coding assistant supporting multiple model providers including Anthropic, OpenAI, and local models via Ollama.

Overview

OpenCode (opencode) provides flexible AI coding assistance with support for Claude, GPT, Gemini, and local Ollama models such as Qwen3. It offers session resume, plan mode, image input, and cost tracking, and works with both cloud and local models.

Installation

1. Install OpenCode

Install via the official installer:
curl -fsSL https://install.opencode.ai | sh
Default installation location: ~/.opencode/bin
2. Configure API Keys

Set API keys for your chosen providers:
# Anthropic Claude
export ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
export OPENAI_API_KEY=sk-...

# Or use local models (no API key needed)
3. Verify Installation

Test OpenCode:
opencode --version

Capabilities

OpenCode provides comprehensive AI features:
  • supportsResume (boolean, default: true) - Resume sessions with the --session flag
  • supportsReadOnlyMode (boolean, default: true) - Plan mode with --agent plan
  • supportsJsonOutput (boolean, default: true) - Structured JSON with --format json
  • supportsSessionId (boolean, default: true) - Session IDs in the sessionID field (camelCase)
  • supportsImageInput (boolean, default: true) - Attach images with the -f, --file flag
  • supportsImageInputOnResume (boolean, default: true) - Images work with the --session flag
  • supportsSessionStorage (boolean, default: true) - Sessions in ~/.local/share/opencode/storage/ (JSON files)
  • supportsCostTracking (boolean, default: true) - Cost in USD from part.cost in step_finish events
  • supportsUsageStats (boolean, default: true) - Token counts in part.tokens from step_finish events
  • supportsModelSelection (boolean, default: true) - Select models with --model provider/model
  • supportsThinkingDisplay (boolean, default: true) - Streaming text chunks for reasoning
  • supportsContextMerge (boolean, default: true) - Receive merged context via prompts
  • supportsContextExport (boolean, default: true) - Export session context for transfer
Source: src/main/agents/capabilities.ts:262

Command-Line Arguments

Maestro uses the run subcommand for batch operations:
opencode run --format json [options] "prompt"
  • run (subcommand) - Batch execution mode (required for Maestro)
  • --format (string, default: json) - Output format for structured parsing
OpenCode does not use a -- separator before the prompt; the prompt is a positional argument.
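Because the prompt is positional and there is no -- separator, flag ordering matters when building the command programmatically. A minimal Python sketch of assembling the argv list (the helper name is illustrative, not part of Maestro):

```python
def build_opencode_argv(prompt, session_id=None, agent=None, model=None):
    """Assemble an `opencode run` argv list. The prompt is a positional
    argument; OpenCode does not use a `--` separator before it."""
    argv = ["opencode", "run", "--format", "json"]
    if session_id:
        argv += ["--session", session_id]
    if agent:
        argv += ["--agent", agent]
    if model:
        argv += ["--model", model]
    argv.append(prompt)  # positional prompt always goes last
    return argv

print(build_opencode_argv("explain this repo", agent="plan"))
```

Passing the list directly to subprocess (rather than a shell string) avoids quoting issues in the prompt.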

Resume Mode

Resume an existing session:
opencode run --format json --session {sessionId} "prompt"
  • --session (string) - Session ID to resume

Plan Mode (Read-Only)

For analysis without modifications:
opencode run --format json --agent plan "prompt"
  • --agent (string, default: plan) - Agent type for execution (plan = read-only)

Model Selection

Specify a model:
opencode run --format json --model ollama/qwen3:8b "prompt"
  • --model (string) - Model in provider/model format. Examples:
    • anthropic/claude-sonnet-4-20250514
    • openai/gpt-4o
    • ollama/qwen3:8b

File/Image Input

Attach a file or image:
opencode run --format json -f /path/to/image.png "prompt"
  • -f, --file (string) - Path to a file or image
Source: src/main/agents/definitions.ts:202

Configuration

Configure OpenCode in Maestro’s agent settings:

Model Override

  • model (string, default: "") - Model to use (empty = OpenCode's default). Format: provider/model. Examples:
    • ollama/qwen3:8b
    • anthropic/claude-sonnet-4-20250514
    • openai/gpt-4o

Context Window

  • contextWindow (number, default: 128000) - Token limit for the selected model. Varies by model (e.g., 400,000 for Claude/GPT-5.2).
Source: src/main/agents/definitions.ts:234

Environment Variables

OpenCode requires special environment configuration for batch mode:

YOLO Mode (Auto-approve)

Maestro sets this by default to prevent permission prompts:
OPENCODE_CONFIG_CONTENT='{"permission":{"*":"allow","external_directory":"allow","question":"deny"},"tools":{"question":false}}'
  • permission.* (string, default: allow) - Allow all permissions by default
  • permission.external_directory (string, default: allow) - Allow access to directories outside the project
  • permission.question (string, default: deny) - Disable interactive questions (prevents stdin hangs)
  • tools.question (boolean, default: false) - Disable the question tool (prevents stdin hangs)
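Rather than hand-writing the JSON string, the config can be built from a dict and serialized into the environment for a child opencode process. A sketch reproducing the auto-approve config shown above:

```python
import json
import os

# Build the auto-approve config shown on this page and place it in the
# environment, where a child `opencode` process would read it.
yolo_config = {
    "permission": {
        "*": "allow",
        "external_directory": "allow",
        "question": "deny",  # no interactive questions (prevents stdin hangs)
    },
    "tools": {"question": False},  # disable the question tool entirely
}
os.environ["OPENCODE_CONFIG_CONTENT"] = json.dumps(yolo_config)
print(os.environ["OPENCODE_CONFIG_CONTENT"])
```

Serializing with json.dumps avoids shell-quoting mistakes in the nested JSON.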

Read-Only Mode Override

In plan mode, Maestro strips blanket permissions:
OPENCODE_CONFIG_CONTENT='{"permission":{"question":"deny"},"tools":{"question":false}}'
Source: src/main/agents/definitions.ts:224

Session Storage

OpenCode stores sessions as JSON files:
~/.local/share/opencode/storage/{session-id}.json
Each file contains:
  • Complete conversation history
  • Model configuration
  • Token usage and costs
  • File operations log
Maestro can import and resume OpenCode sessions.
Source: src/main/storage/opencode-session-storage.ts
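Since each session is a {session-id}.json file, enumerating sessions is just a directory listing. A runnable sketch using a temp directory with a synthetic session file (the file's field names here are placeholders, not OpenCode's actual schema):

```python
import json
import os
import tempfile

# Stand-in for ~/.local/share/opencode/storage/ so the example is runnable.
storage_dir = tempfile.mkdtemp()
with open(os.path.join(storage_dir, "abc123.json"), "w") as f:
    json.dump({"history": [], "model": "ollama/qwen3:8b"}, f)  # synthetic content

# Session IDs are the filenames minus the .json extension.
session_ids = [
    name[: -len(".json")]
    for name in os.listdir(storage_dir)
    if name.endswith(".json")
]
print(session_ids)
```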

Output Format

OpenCode outputs newline-delimited JSON events:
{
  "event": "step_start",
  "sessionID": "abc123",
  "model": "ollama/qwen3:8b"
}
Source: src/main/parsers/opencode-parser.ts
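Consuming this stream means parsing one JSON object per line. A sketch that tallies cost and token usage from step_finish events, using the part.cost and part.tokens paths noted in the capabilities above (the sample events and the input/output token sub-fields are assumptions; the exact schema may vary by version):

```python
import json

# Synthetic NDJSON stream shaped like `opencode run --format json` output.
stream = """\
{"event": "step_start", "sessionID": "abc123", "model": "ollama/qwen3:8b"}
{"event": "step_finish", "sessionID": "abc123", "part": {"cost": 0.0012, "tokens": {"input": 420, "output": 96}}}
{"event": "step_finish", "sessionID": "abc123", "part": {"cost": 0.0008, "tokens": {"input": 150, "output": 40}}}
"""

total_cost = 0.0
total_tokens = 0
for line in stream.splitlines():
    if not line.strip():
        continue
    event = json.loads(line)
    if event.get("event") == "step_finish":
        part = event.get("part", {})
        total_cost += part.get("cost", 0.0)  # cost in USD per this page
        tokens = part.get("tokens", {})
        total_tokens += tokens.get("input", 0) + tokens.get("output", 0)

print(f"cost=${total_cost:.4f} tokens={total_tokens}")
```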

Supported Models

OpenCode supports multiple providers:
  • anthropic/claude-sonnet-4-20250514
  • anthropic/claude-opus-4
  • anthropic/claude-haiku-4
  • openai/gpt-4o
  • openai/gpt-5.1
  • openai/o3
  • ollama/qwen3:8b
  • ollama/codellama:34b
  • ollama/deepseek-coder:33b
  • google/gemini-2.0-flash
  • google/gemini-pro

Error Patterns

Common errors Maestro detects:
  • API_KEY_MISSING - Pattern: "API key not found". Solution: set the appropriate API key (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)
  • MODEL_NOT_FOUND - Pattern: "model not found". Solution: check the model name format and availability
  • PERMISSION_DENIED - Pattern: "permission denied". Solution: check OPENCODE_CONFIG_CONTENT or file permissions
Source: src/main/parsers/error-patterns.ts
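Detection like this amounts to matching substrings against agent output. A hypothetical sketch of the idea (not Maestro's actual implementation), using the patterns listed above:

```python
import re

# Error codes and their detection patterns, per the list above.
ERROR_PATTERNS = [
    ("API_KEY_MISSING", re.compile(r"API key not found", re.IGNORECASE)),
    ("MODEL_NOT_FOUND", re.compile(r"model not found", re.IGNORECASE)),
    ("PERMISSION_DENIED", re.compile(r"permission denied", re.IGNORECASE)),
]

def classify_error(output_text):
    """Return the first matching error code, or None if no pattern matches."""
    for code, pattern in ERROR_PATTERNS:
        if pattern.search(output_text):
            return code
    return None

print(classify_error("Error: model not found: ollama/qwen3:99b"))
```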

Usage with Maestro Features

Auto Run

Full support for playbooks with multi-model flexibility

Group Chat

Multi-agent collaboration with other providers

Local Models

Use Ollama for offline/private coding assistance

Context Grooming

Export and merge conversation context

Best Practices

1. Choose Model Based on Task
  • Local models (Ollama) for privacy/offline use
  • Claude for complex reasoning
  • GPT for general tasks
2. Configure Context Window
Set the context window in agent settings to match your model.
3. Monitor Costs
OpenCode tracks costs per request; review the usage dashboard regularly.
4. Use Plan Mode
Use --agent plan for read-only analysis tasks.

Troubleshooting

Verify installation:
which opencode
opencode --version
Check default path:
ls ~/.opencode/bin/opencode
For Ollama models, ensure Ollama is running:
ollama list
ollama pull qwen3:8b
Verify OPENCODE_CONFIG_CONTENT is set correctly. Maestro should set this automatically.
