Overview
OpenCode is a terminal-based AI coding assistant built in Go, designed with a modular architecture that separates concerns and enables extensibility through protocols like LSP and MCP.
High-level architecture
Core components
1. Command layer (cmd/)
Entry point for the CLI application.
cmd/root.go
Responsibilities:
- Parse command-line arguments and flags
- Initialize configuration system
- Bootstrap the application
- Handle global flags (debug, version, etc.)
Key functions:
- Execute() - Main CLI entry point
- Flag handling for debug mode and config paths
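The responsibilities above can be sketched with a minimal flag-parsing entry point. This is an illustration only: the real project uses Cobra, and the flag names here (`--debug`, `--config`) are assumptions, not OpenCode's actual flag set.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// options mirrors the global flags the command layer handles.
// Field and flag names are illustrative.
type options struct {
	debug      bool
	configPath string
}

// parseOptions parses global flags from an argument list,
// standing in for the bootstrap work Execute() performs.
func parseOptions(args []string) (options, error) {
	var opts options
	fs := flag.NewFlagSet("opencode", flag.ContinueOnError)
	fs.BoolVar(&opts.debug, "debug", false, "enable debug logging")
	fs.StringVar(&opts.configPath, "config", "", "path to a config file")
	if err := fs.Parse(args); err != nil {
		return options{}, err
	}
	return opts, nil
}

func main() {
	opts, err := parseOptions(os.Args[1:])
	if err != nil {
		os.Exit(1)
	}
	fmt.Printf("debug=%v config=%q\n", opts.debug, opts.configPath)
}
```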
2. Configuration (internal/config/)
Manages application configuration from multiple sources.
internal/config/config.go
Responsibilities:
- Load configuration from files and environment
- Validate model and provider configurations
- Auto-configure defaults based on available credentials
- Merge global and local configurations
Key functions:
- Load() - Load and validate configuration
- Validate() - Validate agents, providers, and LSP settings
- UpdateAgentModel() - Change the model for an agent
- LoadGitHubToken() - Auto-detect GitHub Copilot credentials
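A merged configuration might look something like the following sketch of a `.opencode.json` file. The key names and model IDs here are illustrative assumptions, not the project's exact schema; `opencode-schema.json` defines the real one.

```json
{
  "debug": false,
  "providers": {
    "anthropic": { "apiKey": "$ANTHROPIC_API_KEY" }
  },
  "agents": {
    "coder": { "model": "claude-3.7-sonnet" }
  }
}
```

Values like `$ANTHROPIC_API_KEY` would typically come from the environment rather than being written into the file, consistent with the security guidance below.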
3. LLM layer (internal/llm/)
Handles all AI model interactions.
Models (internal/llm/models/)
Model definitions
Files:
- models.go - Model type definitions and registry
- anthropic.go - Claude models
- openai.go - GPT and o-series models
- gemini.go - Google Gemini models
- azure.go - Azure OpenAI models
- copilot.go - GitHub Copilot models
- openrouter.go - OpenRouter models
- groq.go - Groq models
- vertexai.go - Vertex AI models
- xai.go - xAI Grok models
Registry:
The SupportedModels map contains all available models and is populated at init time.
Providers (internal/llm/provider/)
Provider abstraction
Files:
- provider.go - Provider interface and factory
- anthropic.go - Anthropic API client
- openai.go - OpenAI API client
- gemini.go - Google Gemini client
- azure.go - Azure OpenAI client
- bedrock.go - AWS Bedrock client
- vertexai.go - Vertex AI client
- copilot.go - GitHub Copilot client
Factory pattern:
NewProvider() creates the appropriate provider client based on the configured ModelProvider.
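The factory pattern can be sketched as below. The `Provider` interface is trimmed to a single method for illustration; the real interface streams completion events, and the exact signatures here are assumptions.

```go
package main

import "fmt"

// ModelProvider identifies which backend serves a model.
type ModelProvider string

const (
	ProviderAnthropic ModelProvider = "anthropic"
	ProviderOpenAI    ModelProvider = "openai"
)

// Provider is a simplified stand-in for the provider interface.
type Provider interface {
	Name() string
}

type anthropicClient struct{}

func (anthropicClient) Name() string { return "anthropic" }

type openaiClient struct{}

func (openaiClient) Name() string { return "openai" }

// NewProvider selects a client for the given ModelProvider,
// returning an error for unknown providers.
func NewProvider(p ModelProvider) (Provider, error) {
	switch p {
	case ProviderAnthropic:
		return anthropicClient{}, nil
	case ProviderOpenAI:
		return openaiClient{}, nil
	default:
		return nil, fmt.Errorf("unsupported provider: %s", p)
	}
}

func main() {
	c, _ := NewProvider(ProviderAnthropic)
	fmt.Println(c.Name())
}
```

Adding a provider then means adding one case to this switch plus the client implementation, which is what the "Adding a new provider" steps later in this document describe.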
4. Terminal UI (internal/tui/)
Bubble Tea-based terminal user interface.
TUI structure
Components:
- tui.go - Main TUI model and update loop
- components/ - Reusable UI components
- dialog/ - Modal dialogs
- input/ - Text input
- chat/ - Chat message display
- sidebar/ - File/conversation browser
- themes/ - Color schemes and styling
- Uses the Bubble Tea framework
- Model-View-Update architecture
- Keyboard-driven navigation
- Streaming LLM responses with live updates
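The Model-View-Update pattern behind the TUI can be shown with a dependency-free sketch. This is the general Elm-architecture shape, not Bubble Tea's actual API, which has its own `tea.Model` interface and command system.

```go
package main

import "fmt"

// model holds all UI state; here, just the chat transcript.
type model struct {
	lines []string
}

// msg is anything that can change the model; here, a streamed text chunk.
type msg string

// update returns a new model in response to a message.
func update(m model, ev msg) model {
	m.lines = append(m.lines, string(ev))
	return m
}

// view renders the model to a string; the runtime redraws it on each update.
func view(m model) string {
	out := ""
	for _, l := range m.lines {
		out += l + "\n"
	}
	return out
}

func main() {
	m := model{}
	// Each streamed LLM chunk arrives as a message and triggers a redraw,
	// which is how live-updating responses work.
	for _, ev := range []msg{"Hello", "from the stream"} {
		m = update(m, ev)
	}
	fmt.Print(view(m))
}
```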
5. Database (internal/db/)
SQLite-based storage for conversations and history.
Database schema
Tables:
- conversations - Conversation metadata
- messages - Individual messages
- tool_calls - Tool usage history
- files - File attachments
- context - Context references
- SQLite with sqlc for type-safe queries
- Automatic conversation compaction
- Token usage tracking
- Full-text search on conversations
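Illustrative DDL for two of the tables listed above; the actual columns are defined by the project's migrations and sqlc queries and may differ.

```sql
CREATE TABLE conversations (
    id         TEXT PRIMARY KEY,
    title      TEXT,
    created_at INTEGER NOT NULL
);

CREATE TABLE messages (
    id              TEXT PRIMARY KEY,
    conversation_id TEXT NOT NULL REFERENCES conversations(id),
    role            TEXT NOT NULL,   -- user / assistant / tool
    content         TEXT NOT NULL,
    tokens          INTEGER          -- supports token usage tracking
);

-- Index matching the "optimized for conversation lookup" point below
CREATE INDEX idx_messages_conversation ON messages(conversation_id);
```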
6. LSP integration (internal/lsp/)
Language Server Protocol support for code intelligence.
LSP capabilities
Features:
- Code completion
- Go-to-definition
- Find references
- Diagnostics (errors/warnings)
- Code actions
Supported servers:
- TypeScript/JavaScript (via typescript-language-server)
- Python (via pylsp)
- Go (via gopls)
- Rust (via rust-analyzer)
- Any LSP-compatible server
- Spawns LSP servers as subprocesses
- JSON-RPC communication over stdio
- Per-language server instances
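The stdio transport frames each JSON-RPC message with a `Content-Length` header per the LSP base protocol. A minimal sketch of building such a frame (the helper name `frame` is this document's invention, not a function in the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// frame wraps a JSON-RPC request with the LSP base-protocol header
// that gets written to the language server's stdin.
func frame(method string, id int, params any) (string, error) {
	body, err := json.Marshal(map[string]any{
		"jsonrpc": "2.0",
		"id":      id,
		"method":  method,
		"params":  params,
	})
	if err != nil {
		return "", err
	}
	// Header and body are separated by \r\n\r\n per the spec.
	return fmt.Sprintf("Content-Length: %d\r\n\r\n%s", len(body), body), nil
}

func main() {
	msg, _ := frame("initialize", 1, map[string]any{"processId": nil})
	fmt.Println(msg)
}
```

The reply comes back over the server's stdout in the same framed format, which is why one subprocess per language is sufficient.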
7. MCP integration
Model Context Protocol for extensible tool support.
MCP architecture
Server types:
- stdio - Standard input/output communication
- sse - Server-Sent Events over HTTP
- Dynamic tool discovery from MCP servers
- Tool execution with streaming results
- Multi-server support
- Environment isolation
Example servers:
- @modelcontextprotocol/server-filesystem - File operations
- @modelcontextprotocol/server-github - GitHub integration
- Custom MCP servers
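Configuring both transport types might look like this sketch; the key names (`mcpServers`, `type`, `command`, `args`, `url`) are assumptions about the config shape rather than the verified schema.

```json
{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
      "env": {}
    },
    "remote-tools": {
      "type": "sse",
      "url": "https://example.com/mcp"
    }
  }
}
```

The per-server `env` map is where the environment isolation mentioned above would be expressed.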
Agent system
OpenCode uses specialized agents for different tasks:
Agent types
Coder
Main coding agent for writing, editing, and debugging code.
Tools: All coding tools (read, write, edit, bash, etc.)
Task
Code search and analysis agent.
Tools: Read-only tools (glob, grep, read)
Title
Conversation summarization (internal).
Tools: None (text generation only)
Agent configuration
Each agent can use a different model optimized for its task:
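For example, a heavyweight model for coding and cheaper models for search and summarization. The key names and model IDs in this sketch are illustrative, not the verified schema.

```json
{
  "agents": {
    "coder": { "model": "claude-3.7-sonnet" },
    "task": { "model": "gpt-4.1-mini" },
    "title": { "model": "gpt-4.1-nano" }
  }
}
```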
Tool system
OpenCode provides AI models with tools for interacting with the codebase:
Built-in tools
- File operations
- Execution
- LSP tools
- MCP tools
- read - Read file contents with line numbers
- write - Create or overwrite files
- edit - Make precise edits with find/replace
- glob - Find files by pattern
- grep - Search file contents with regex
Tool execution flow
Message flow
Typical conversation flow:
- User input → TUI captures message
- Context building → Load context files, LSP diagnostics
- LLM request → Send messages + tools to provider
- Streaming response → Receive events:
- content_delta - Text chunks
- thinking_delta - Reasoning (for o-series/Claude)
- tool_use_start/delta/stop - Tool calls
- Tool execution → Execute requested tools
- Tool results → Add results to conversation
- Continue → LLM generates final response
- Display → TUI renders formatted output
- Storage → Save to database
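The streaming part of this flow can be sketched as a fold over events. The event names mirror the list above; everything else (types, field names) is a simplification of whatever the real event loop looks like.

```go
package main

import "fmt"

// eventKind mirrors the stream event types listed above.
type eventKind int

const (
	contentDelta eventKind = iota
	thinkingDelta
	toolUseStart
	toolUseStop
)

type event struct {
	kind eventKind
	text string
}

// consume folds a stream of events into displayed text and a count of
// pending tool calls, the way the update loop would (much simplified).
func consume(events []event) (display string, toolCalls int) {
	for _, ev := range events {
		switch ev.kind {
		case contentDelta:
			display += ev.text // rendered incrementally in the TUI
		case thinkingDelta:
			// reasoning text; may be shown separately or hidden
		case toolUseStart:
			toolCalls++
		case toolUseStop:
			// tool arguments are complete; ready to execute
		}
	}
	return display, toolCalls
}

func main() {
	out, calls := consume([]event{
		{contentDelta, "Hello "},
		{toolUseStart, ""},
		{toolUseStop, ""},
		{contentDelta, "world"},
	})
	fmt.Println(out, calls)
}
```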
Data flow
Configuration loading
Conversation storage
Performance optimizations
Token management
- Prompt caching - Reuse context across requests (Anthropic, OpenAI)
- Auto-compaction - Intelligently summarize old messages
- Context limiting - Respect model context windows
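Context limiting reduces to keeping the newest messages under a token budget. The sketch below drops the oldest messages outright; real auto-compaction summarizes them instead, so this only illustrates the budgeting half.

```go
package main

import "fmt"

type message struct {
	text   string
	tokens int // estimated token count for this message
}

// trimToBudget drops the oldest messages until the total estimated
// token count fits within the model's context window.
func trimToBudget(msgs []message, budget int) []message {
	total := 0
	for _, m := range msgs {
		total += m.tokens
	}
	for len(msgs) > 0 && total > budget {
		total -= msgs[0].tokens
		msgs = msgs[1:] // oldest message goes first
	}
	return msgs
}

func main() {
	msgs := []message{{"a", 600}, {"b", 300}, {"c", 200}}
	kept := trimToBudget(msgs, 600)
	fmt.Println(len(kept))
}
```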
Streaming
- Server-Sent Events - Real-time response streaming
- Incremental rendering - Update UI as tokens arrive
- Tool execution - Parallel tool execution where possible
Database
- SQLite WAL mode - Better concurrent access
- Prepared statements - Type-safe queries via sqlc
- Indexes - Optimized for conversation lookup
Security considerations
API key storage
- Prefer environment variables over config files
- Config files should have restricted permissions (0600)
- Never commit .opencode.json with secrets to version control
Tool execution
- Bash tool uses the configured shell (default: the user's $SHELL)
- No automatic command execution without user confirmation (planned for future versions)
- Tool results are sanitized before sending to LLM
MCP servers
- MCP servers run as separate processes
- Environment isolation per server
- Stdio communication (no network exposure by default)
Extension points
Adding a new provider
- Define models in internal/llm/models/{provider}.go
- Implement ProviderClient in internal/llm/provider/{provider}.go
- Add to the provider factory in provider.go
- Update the configuration schema in opencode-schema.json
Adding a new tool
- Define the tool schema in internal/llm/tools/
- Implement the tool execution logic
- Register with the tool registry
- Add to the appropriate agent's tool list
Adding a new theme
- Create a theme file in internal/tui/themes/
- Define the color scheme
- Add to the theme enum in opencode-schema.json
Dependencies
Key external dependencies:
- Bubble Tea - Terminal UI framework
- Viper - Configuration management
- sqlc - Type-safe SQL queries
- Cobra - CLI framework
- Provider SDKs (Anthropic, OpenAI, etc.)
Build and deployment
OpenCode is distributed as:
- Binary releases - GoReleaser builds for multiple platforms
- Install script - Automated installation via the install script
- Source - Build from source with go build
Related topics
- Configuration - Configuration system details
- AI Models - Supported models and providers