
Overview

OpenCode is a terminal-based AI coding assistant built in Go, designed with a modular architecture that separates concerns and enables extensibility through protocols like LSP and MCP.

High-level architecture

┌─────────────────────────────────────────────────────────────┐
│                         CLI Entry                           │
│                     (cmd/root.go)                           │
└────────────────────────┬────────────────────────────────────┘

          ┌──────────────┴──────────────┐
          ▼                             ▼
┌─────────────────┐           ┌─────────────────┐
│  Configuration  │           │   TUI Layer     │
│  (internal/     │◄──────────┤  (internal/tui) │
│   config)       │           │                 │
└────────┬────────┘           └────────┬────────┘
         │                             │
         │                    ┌────────┴────────┐
         │                    │                 │
         ▼                    ▼                 ▼
┌─────────────────┐  ┌─────────────┐  ┌─────────────┐
│   LLM Layer     │  │  App Logic  │  │  Database   │
│  (internal/llm) │  │(internal/app)│  │(internal/db)│
└────────┬────────┘  └─────────────┘  └─────────────┘

    ┌────┴────┬──────────┬──────────┐
    ▼         ▼          ▼          ▼
┌────────┐ ┌─────┐  ┌────────┐ ┌────────┐
│Provider│ │Tools│  │  LSP   │ │  MCP   │
│        │ │     │  │        │ │        │
└────────┘ └─────┘  └────────┘ └────────┘

Core components

1. Command layer (cmd/)

Entry point for the CLI application.
Responsibilities:
  • Parse command-line arguments and flags
  • Initialize configuration system
  • Bootstrap the application
  • Handle global flags (debug, version, etc.)
Key functions:
  • Execute() - Main CLI entry point
  • Flag handling for debug mode, config paths
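
The real entry point is wired through Cobra's Execute(), but the bootstrap flow can be sketched with the standard library alone. The flag names and the cliOptions type below are illustrative assumptions, not the project's actual flag set:

```go
package main

import (
	"flag"
	"fmt"
)

// cliOptions mirrors the global flags described above; the names are
// illustrative, not the actual flag set.
type cliOptions struct {
	Debug      bool
	ConfigPath string
}

// parseFlags parses command-line arguments into cliOptions, the same
// job Execute() delegates to Cobra in cmd/root.go.
func parseFlags(args []string) (cliOptions, error) {
	var opts cliOptions
	fs := flag.NewFlagSet("opencode", flag.ContinueOnError)
	fs.BoolVar(&opts.Debug, "debug", false, "enable debug logging")
	fs.StringVar(&opts.ConfigPath, "config", "", "path to a config file")
	if err := fs.Parse(args); err != nil {
		return cliOptions{}, err
	}
	return opts, nil
}

func main() {
	opts, err := parseFlags([]string{"--debug"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("debug=%v config=%q\n", opts.Debug, opts.ConfigPath)
}
```

After parsing, the parsed options drive configuration loading and application bootstrap.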

2. Configuration (internal/config/)

Manages application configuration from multiple sources.
Responsibilities:
  • Load configuration from files and environment
  • Validate model and provider configurations
  • Auto-configure defaults based on available credentials
  • Merge global and local configurations
Key types:
type Config struct {
    Data         Data                              // Storage location
    WorkingDir   string                            // Working directory
    MCPServers   map[string]MCPServer              // MCP servers
    Providers    map[models.ModelProvider]Provider // LLM providers
    LSP          map[string]LSPConfig              // LSP servers
    Agents       map[AgentName]Agent               // Agent configs
    Debug        bool                              // Debug mode
    ContextPaths []string                          // Context files
    TUI          TUIConfig                         // UI theme
    Shell        ShellConfig                       // Shell config
}

type Agent struct {
    Model           models.ModelID
    MaxTokens       int64
    ReasoningEffort string
}
Key functions:
  • Load() - Load and validate configuration
  • Validate() - Validate agents, providers, LSP
  • UpdateAgentModel() - Change model for an agent
  • LoadGitHubToken() - Auto-detect GitHub Copilot credentials

3. LLM layer (internal/llm/)

Handles all AI model interactions.

Models (internal/llm/models/)

Files:
  • models.go - Model type definitions and registry
  • anthropic.go - Claude models
  • openai.go - GPT and o-series models
  • gemini.go - Google Gemini models
  • azure.go - Azure OpenAI models
  • copilot.go - GitHub Copilot models
  • openrouter.go - OpenRouter models
  • groq.go - Groq models
  • vertexai.go - Vertex AI models
  • xai.go - xAI Grok models
Key types:
type ModelID string
type ModelProvider string

type Model struct {
    ID                  ModelID
    Name                string
    Provider            ModelProvider
    APIModel            string          // API model identifier
    CostPer1MIn         float64         // Input cost
    CostPer1MOut        float64         // Output cost
    CostPer1MInCached   float64         // Cached input cost
    CostPer1MOutCached  float64         // Cached output cost
    ContextWindow       int64           // Max context tokens
    DefaultMaxTokens    int64           // Default max output
    CanReason           bool            // Supports reasoning
    SupportsAttachments bool            // Supports files
}
Registry: the SupportedModels map holds every available model and is populated at init time.
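
A minimal sketch of that registry pattern, reproducing the types above with a single illustrative entry (real model IDs and pricing live in anthropic.go, openai.go, and the other provider files):

```go
package main

import "fmt"

type ModelID string
type ModelProvider string

// Model is trimmed to the fields used here; see the full struct above.
type Model struct {
	ID            ModelID
	Provider      ModelProvider
	APIModel      string
	ContextWindow int64
}

// SupportedModels is the global registry. Each provider file registers
// its models in an init() function so the map is complete at startup.
var SupportedModels = map[ModelID]Model{}

func init() {
	// Illustrative entry; real entries come from the provider files.
	SupportedModels["example-model"] = Model{
		ID:            "example-model",
		Provider:      "example",
		APIModel:      "example-model-v1",
		ContextWindow: 200_000,
	}
}

func main() {
	m := SupportedModels["example-model"]
	fmt.Println(m.Provider, m.ContextWindow)
}
```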

Providers (internal/llm/provider/)

Files:
  • provider.go - Provider interface and factory
  • anthropic.go - Anthropic API client
  • openai.go - OpenAI API client
  • gemini.go - Google Gemini client
  • azure.go - Azure OpenAI client
  • bedrock.go - AWS Bedrock client
  • vertexai.go - Vertex AI client
  • copilot.go - GitHub Copilot client
Key interface:
type Provider interface {
    SendMessages(ctx context.Context, 
                 messages []message.Message, 
                 tools []tools.BaseTool) (*ProviderResponse, error)
    
    StreamResponse(ctx context.Context, 
                   messages []message.Message, 
                   tools []tools.BaseTool) <-chan ProviderEvent
    
    Model() models.Model
}
Event streaming:
type EventType string

const (
    EventContentStart  EventType = "content_start"
    EventContentDelta  EventType = "content_delta"
    EventThinkingDelta EventType = "thinking_delta"
    EventToolUseStart  EventType = "tool_use_start"
    EventToolUseDelta  EventType = "tool_use_delta"
    EventToolUseStop   EventType = "tool_use_stop"
    EventContentStop   EventType = "content_stop"
    EventComplete      EventType = "complete"
    EventError         EventType = "error"
)
Factory pattern: NewProvider() creates the appropriate provider client based on ModelProvider.
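
A sketch of that factory, with stub clients standing in for the real API implementations. The Provider interface is trimmed to one method here, and the real factory also wires API keys, base URLs, and per-provider options:

```go
package main

import (
	"errors"
	"fmt"
)

type ModelProvider string

// Provider is trimmed to one method; the full interface also declares
// SendMessages and StreamResponse.
type Provider interface {
	Name() string
}

type anthropicClient struct{}

func (anthropicClient) Name() string { return "anthropic" }

type openaiClient struct{}

func (openaiClient) Name() string { return "openai" }

// NewProvider selects a concrete client based on the provider ID.
func NewProvider(p ModelProvider) (Provider, error) {
	switch p {
	case "anthropic":
		return anthropicClient{}, nil
	case "openai":
		return openaiClient{}, nil
	default:
		return nil, errors.New("unsupported provider: " + string(p))
	}
}

func main() {
	c, err := NewProvider("anthropic")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Name())
}
```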

4. Terminal UI (internal/tui/)

Bubble Tea-based terminal user interface.
Components:
  • tui.go - Main TUI model and update loop
  • components/ - Reusable UI components
    • dialog/ - Modal dialogs
    • input/ - Text input
    • chat/ - Chat message display
    • sidebar/ - File/conversation browser
  • themes/ - Color schemes and styling
Key patterns:
  • Built on the Bubble Tea framework
  • Model-View-Update architecture
  • Keyboard-driven navigation
  • Streaming LLM responses with live updates
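
The Model-View-Update pattern can be illustrated without the library dependency. This is a plain-Go sketch of the loop Bubble Tea implements, not OpenCode's actual tui.go model:

```go
package main

import "fmt"

// msg is any event delivered to the update loop (keystrokes, ticks,
// completed LLM chunks, ...).
type msg interface{}

type keyMsg struct{ r rune }

// model holds all UI state; Update returns a new state rather than
// mutating in place.
type model struct{ input string }

// Update consumes one message and returns the next state.
func (m model) Update(message msg) model {
	switch v := message.(type) {
	case keyMsg:
		m.input += string(v.r)
	}
	return m
}

// View renders the current state to the string the terminal shows.
func (m model) View() string {
	return "> " + m.input
}

func main() {
	m := model{}
	for _, r := range "hi" {
		m = m.Update(keyMsg{r})
	}
	fmt.Println(m.View()) // > hi
}
```

Streaming LLM responses fit this loop naturally: each content delta arrives as a message, and the view re-renders after every update.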

5. Database (internal/db/)

SQLite-based storage for conversations and history.
Tables:
  • conversations - Conversation metadata
  • messages - Individual messages
  • tool_calls - Tool usage history
  • files - File attachments
  • context - Context references
Key features:
  • SQLite with sqlc for type-safe queries
  • Automatic conversation compaction
  • Token usage tracking
  • Full-text search on conversations

6. LSP integration (internal/lsp/)

Language Server Protocol support for code intelligence.
Features:
  • Code completion
  • Go-to-definition
  • Find references
  • Diagnostics (errors/warnings)
  • Code actions
Supported languages:
  • TypeScript/JavaScript (via typescript-language-server)
  • Python (via pylsp)
  • Go (via gopls)
  • Rust (via rust-analyzer)
  • And any LSP-compatible server
Architecture:
  • Spawns LSP servers as subprocesses
  • JSON-RPC communication over stdio
  • Per-language server instances

7. MCP integration

Model Context Protocol for extensible tool support.
Server types:
  • stdio - Standard input/output communication
  • sse - Server-Sent Events over HTTP
Key features:
  • Dynamic tool discovery from MCP servers
  • Tool execution with streaming results
  • Multi-server support
  • Environment isolation
Example servers:
  • @modelcontextprotocol/server-filesystem - File operations
  • @modelcontextprotocol/server-github - GitHub integration
  • Custom MCP servers
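
Configuration for an MCP server lives under MCPServers in the config file (see the Config struct above). A hypothetical stdio entry for the filesystem server might look like the following; the field names here are illustrative assumptions, not the project's actual schema:

```json
{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
    }
  }
}
```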

Agent system

OpenCode uses specialized agents for different tasks:

Agent types

Coder

Main coding agent for writing, editing, and debugging code.
Tools: All coding tools (read, write, edit, bash, etc.)

Task

Code search and analysis agent.
Tools: Read-only tools (glob, grep, read)

Title

Conversation summarization (internal).
Tools: None (text generation only)

Agent configuration

Each agent can use a different model optimized for its task:
{
  "agents": {
    "coder": {
      "model": "claude-4-sonnet",
      "maxTokens": 50000,
      "reasoningEffort": "high"
    },
    "task": {
      "model": "gpt-4.1-mini",
      "maxTokens": 5000
    },
    "title": {
      "model": "gpt-4o-mini",
      "maxTokens": 80
    }
  }
}

Tool system

OpenCode provides AI models with tools for interacting with the codebase:

Built-in tools

  • read - Read file contents with line numbers
  • write - Create or overwrite files
  • edit - Make precise edits with find/replace
  • glob - Find files by pattern
  • grep - Search file contents with regex
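
The exact BaseTool interface in internal/llm/tools is richer (it carries a JSON schema describing each tool's parameters), but a tool boils down to a name plus an execution function. A hedged sketch implementing the glob tool with the standard library:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// BaseTool is sketched here; the real interface in internal/llm/tools
// also carries a JSON schema describing the tool's parameters.
type BaseTool interface {
	Name() string
	Run(args map[string]string) (string, error)
}

// globTool implements the "glob" tool from the list above with the
// standard library's pattern matching.
type globTool struct{}

func (globTool) Name() string { return "glob" }

func (globTool) Run(args map[string]string) (string, error) {
	matches, err := filepath.Glob(args["pattern"])
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d match(es): %v", len(matches), matches), nil
}

func main() {
	var t BaseTool = globTool{}
	out, err := t.Run(map[string]string{"pattern": "*.go"})
	if err != nil {
		panic(err)
	}
	fmt.Println(t.Name(), "->", out)
}
```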

Tool execution flow

┌──────────────┐
│  LLM Model   │ Generates tool call request
└──────┬───────┘
       ▼
┌──────────────┐
│  Tool Router │ Routes to appropriate handler
└──────┬───────┘
   ┌───┴────┬──────────┬────────────┐
   ▼        ▼          ▼            ▼
┌─────┐ ┌──────┐  ┌──────┐    ┌──────┐
│File │ │ Bash │  │ LSP  │    │ MCP  │
│Tools│ │ Tool │  │Tools │    │Tools │
└──┬──┘ └───┬──┘  └───┬──┘    └───┬──┘
   │        │         │            │
   └────────┴─────────┴────────────┘
                ▼
        ┌──────────┐
        │  Result  │ Return to LLM
        └──────────┘

Message flow

Typical conversation flow:
  1. User input → TUI captures message
  2. Context building → Load context files, LSP diagnostics
  3. LLM request → Send messages + tools to provider
  4. Streaming response → Receive events:
    • content_delta - Text chunks
    • thinking_delta - Reasoning (for o-series/Claude)
    • tool_use_start/delta/stop - Tool calls
  5. Tool execution → Execute requested tools
  6. Tool results → Add results to conversation
  7. Continue → LLM generates final response
  8. Display → TUI renders formatted output
  9. Storage → Save to database
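
Steps 4-7 above amount to draining the provider's event channel. A simplified consumer, reusing the event types from the streaming section (the ProviderEvent shape is trimmed for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

type EventType string

const (
	EventContentDelta EventType = "content_delta"
	EventToolUseStart EventType = "tool_use_start"
	EventComplete     EventType = "complete"
)

// ProviderEvent is trimmed to the fields needed for this sketch.
type ProviderEvent struct {
	Type    EventType
	Content string
}

// consume drains the event channel, accumulating text deltas and
// noting tool calls; the TUI does the same but re-renders per delta.
func consume(events <-chan ProviderEvent) string {
	var b strings.Builder
	for ev := range events {
		switch ev.Type {
		case EventContentDelta:
			b.WriteString(ev.Content)
		case EventToolUseStart:
			b.WriteString("[tool: " + ev.Content + "]")
		case EventComplete:
			return b.String()
		}
	}
	return b.String()
}

func main() {
	ch := make(chan ProviderEvent, 3)
	ch <- ProviderEvent{EventContentDelta, "Hello, "}
	ch <- ProviderEvent{EventContentDelta, "world"}
	ch <- ProviderEvent{EventComplete, ""}
	close(ch)
	fmt.Println(consume(ch)) // Hello, world
}
```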

Data flow

Configuration loading

┌─────────────────┐
│  Built-in       │
│  Defaults       │
└────────┬────────┘
         ▼
┌─────────────────┐
│  Global Config  │  ~/.opencode.json
│  ~/.opencode    │
└────────┬────────┘
         ▼
┌─────────────────┐
│  Environment    │  API keys, debug flags
│  Variables      │
└────────┬────────┘
         ▼
┌─────────────────┐
│  Local Config   │  ./.opencode.json
│  (Project)      │
└────────┬────────┘
         ▼
┌─────────────────┐
│  Command-line   │  --debug, etc.
│  Flags          │
└────────┬────────┘
         ▼
    ┌────────┐
    │ Merged │
    │ Config │
    └────────┘
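
That precedence can be sketched as a key-by-key merge where later layers win. The real merge in internal/config goes through Viper over typed structs; this flat-map version shows the ordering only:

```go
package main

import "fmt"

// mergeConfig applies the precedence shown above: later layers
// override earlier ones key by key.
func mergeConfig(layers ...map[string]string) map[string]string {
	out := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			out[k] = v
		}
	}
	return out
}

func main() {
	merged := mergeConfig(
		map[string]string{"model": "default", "debug": "false"}, // built-in defaults
		map[string]string{"model": "claude-4-sonnet"},           // global config
		map[string]string{"debug": "true"},                      // --debug flag
	)
	fmt.Println(merged["model"], merged["debug"]) // claude-4-sonnet true
}
```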

Conversation storage

┌─────────────────────────────────────┐
│           Conversation              │
├─────────────────────────────────────┤
│ id: uuid                            │
│ title: string                       │
│ created_at: timestamp               │
│ updated_at: timestamp               │
│ model: string                       │
│ total_tokens: int                   │
└──────────────┬──────────────────────┘

               │ 1:N
               ▼
┌─────────────────────────────────────┐
│            Messages                 │
├─────────────────────────────────────┤
│ id: uuid                            │
│ conversation_id: uuid (FK)          │
│ role: user|assistant|system         │
│ content: text                       │
│ tokens: int                         │
│ created_at: timestamp               │
└──────────────┬──────────────────────┘

               │ 1:N
               ▼
┌─────────────────────────────────────┐
│           Tool Calls                │
├─────────────────────────────────────┤
│ id: uuid                            │
│ message_id: uuid (FK)               │
│ tool_name: string                   │
│ arguments: json                     │
│ result: text                        │
│ status: success|error               │
└─────────────────────────────────────┘

Performance optimizations

Token management

  • Prompt caching - Reuse context across requests (Anthropic, OpenAI)
  • Auto-compaction - Intelligently summarize old messages
  • Context limiting - Respect model context windows
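
Context limiting can be sketched as dropping the oldest non-system messages until the total fits the model's window. This is an illustrative policy, not OpenCode's actual compaction logic, which summarizes dropped spans instead of discarding them:

```go
package main

import "fmt"

type message struct {
	Role   string
	Tokens int64
}

// trimToWindow drops the oldest non-system messages until the total
// token count fits the context window, keeping the system prompt at
// index 0.
func trimToWindow(msgs []message, window int64) []message {
	if len(msgs) == 0 {
		return msgs
	}
	var total int64
	for _, m := range msgs {
		total += m.Tokens
	}
	start := 1 // always keep the system prompt
	for total > window && start < len(msgs) {
		total -= msgs[start].Tokens
		start++
	}
	return append(msgs[:1:1], msgs[start:]...)
}

func main() {
	msgs := []message{
		{"system", 100}, {"user", 500}, {"assistant", 500}, {"user", 500},
	}
	fmt.Println(len(trimToWindow(msgs, 1200))) // 3
}
```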

Streaming

  • Server-Sent Events - Real-time response streaming
  • Incremental rendering - Update UI as tokens arrive
  • Tool execution - Parallel tool execution where possible

Database

  • SQLite WAL mode - Better concurrent access
  • Prepared statements - Type-safe queries via sqlc
  • Indexes - Optimized for conversation lookup

Security considerations

API key storage

  • Prefer environment variables over config files
  • Config files should have restricted permissions (0600)
  • Never commit .opencode.json with secrets to version control

Tool execution

  • Bash tool uses configured shell (default: user’s $SHELL)
  • Future versions will not execute commands automatically without user confirmation
  • Tool results are sanitized before sending to LLM

MCP servers

  • MCP servers run as separate processes
  • Environment isolation per server
  • Stdio communication (no network exposure by default)

Extension points

Adding a new provider

  1. Define models in internal/llm/models/{provider}.go
  2. Implement ProviderClient in internal/llm/provider/{provider}.go
  3. Add to provider factory in provider.go
  4. Update configuration schema in opencode-schema.json

Adding a new tool

  1. Define tool schema in internal/llm/tools/
  2. Implement tool execution logic
  3. Register with tool registry
  4. Add to appropriate agent’s tool list

Adding a new theme

  1. Create theme file in internal/tui/themes/
  2. Define color scheme
  3. Add to theme enum in opencode-schema.json

Dependencies

Key external dependencies:
  • Bubble Tea - Terminal UI framework
  • Viper - Configuration management
  • sqlc - Type-safe SQL queries
  • Cobra - CLI framework
  • Provider SDKs (Anthropic, OpenAI, etc.)

Build and deployment

OpenCode is distributed as:
  • Binary releases - GoReleaser builds for multiple platforms
  • Install script - Automated installation via a shell script
  • Source - Build from source with go build
