Goose is built with a layered, extensible architecture designed to support multiple interfaces, AI providers, and tool integrations. This design enables developers to customize every aspect of the system—from UI to model providers to custom tools.

System Overview

Goose follows a classic client-server architecture with pluggable components at each layer.

Core Components

1. User Interfaces

Goose provides multiple ways to interact with the agent:
  • CLI (goose-cli): Command-line interface for terminal users
  • Desktop App: Electron-based application with rich UI features
  • Custom Interfaces: Build your own via REST API or Agent Client Protocol

2. Server Layer

The server layer provides two integration protocols:

REST API (goose-server)

HTTP-based API suitable for web applications and simple integrations.
// Located in: crates/goose-server/src/
// Key endpoints:
// POST /sessions              - Create a new session
// POST /sessions/{id}/messages - Send messages with streaming
// GET  /extensions            - List available extensions

Agent Client Protocol (ACP)

JSON-RPC protocol for richer integrations with bidirectional communication.
// Located in: crates/goose-acp/
// Supports:
// - Bidirectional communication
// - Permission requests
// - Tool call status updates
// - Session management
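Since ACP is JSON-RPC over a bidirectional channel, each message is a standard JSON-RPC 2.0 envelope. As an illustration only (the method name and params below are placeholders, not confirmed ACP methods), a request can be framed like this:

```rust
// Illustrative sketch: hand-building a JSON-RPC 2.0 request envelope.
// "session/new" and the params shape are placeholders, not the real ACP schema.
fn jsonrpc_request(id: u64, method: &str, params: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
        id, method, params
    )
}

fn main() {
    let req = jsonrpc_request(1, "session/new", r#"{"cwd":"/tmp"}"#);
    // Every ACP message carries the JSON-RPC version marker.
    assert!(req.contains(r#""jsonrpc":"2.0""#));
    println!("{}", req);
}
```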

3. Core Layer

The core layer (crates/goose/) contains the main business logic:

Agent

The central orchestrator that manages conversation flow, tool execution, and provider interaction.
// crates/goose/src/agents/agent.rs
pub struct Agent {
    provider: SharedProvider,           // AI model provider
    extension_manager: Arc<ExtensionManager>,  // MCP tools
    prompt_manager: Mutex<PromptManager>,      // System prompts
    retry_manager: RetryManager,               // Error handling
    // ... additional fields
}
Key responsibilities:
  • Managing conversation context
  • Coordinating tool execution
  • Handling provider streaming
  • Managing permissions and security
  • Orchestrating subagents

Extension Manager

Manages MCP (Model Context Protocol) servers that provide tools and resources to the agent.
// crates/goose/src/agents/extension_manager.rs
pub struct ExtensionManager {
    extensions: Mutex<HashMap<String, Extension>>,
    tools_cache: Mutex<Option<Arc<Vec<Tool>>>>,
    // ...
}
Features:
  • Dynamic extension loading/unloading
  • Tool discovery and caching
  • Resource management (MCP resources)
  • OAuth flow handling for authenticated extensions

Provider Registry

Abstracts AI model providers behind a common interface.
// crates/goose/src/providers/base.rs
#[async_trait]
pub trait Provider: Send + Sync {
    async fn complete(
        &self,
        system: String,
        messages: Vec<Message>,
        tools: Vec<Tool>,
    ) -> Result<BoxStream<ProviderMessage>>;
    
    fn list_models(&self) -> Vec<ModelInfo>;
    // ...
}
Supports:
  • 25+ AI providers (Anthropic, OpenAI, Ollama, etc.)
  • Custom provider plugins
  • Model capability detection
  • Token counting and cost estimation
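Because everything behind the trait is interchangeable, a test double is easy to write. Here is a deliberately simplified, synchronous sketch of the same abstraction (the real trait is async and returns a stream; `Message`, `ModelInfo`, and `EchoProvider` below are stand-ins, not Goose's actual types):

```rust
// Simplified stand-ins for the real Message/ModelInfo types.
struct Message { role: String, content: String }
struct ModelInfo { name: String }

// A pared-down, synchronous version of the Provider trait for illustration;
// the real trait is async and streams ProviderMessage values.
trait Provider {
    fn complete(&self, system: &str, messages: &[Message]) -> String;
    fn list_models(&self) -> Vec<ModelInfo>;
}

// A mock provider: echoes the last user message, useful for testing
// agent logic without any network calls.
struct EchoProvider;

impl Provider for EchoProvider {
    fn complete(&self, _system: &str, messages: &[Message]) -> String {
        messages.last().map(|m| m.content.clone()).unwrap_or_default()
    }
    fn list_models(&self) -> Vec<ModelInfo> {
        vec![ModelInfo { name: "echo-1".into() }]
    }
}

fn main() {
    let p = EchoProvider;
    let msgs = vec![Message { role: "user".into(), content: "hi".into() }];
    assert_eq!(p.complete("", &msgs), "hi");
}
```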

Session Manager

Handles persistent state, conversation history, and session metadata.
// crates/goose/src/session/session_manager.rs
pub struct Session {
    pub id: String,
    pub working_dir: PathBuf,
    pub conversation: Option<Conversation>,
    pub extension_data: ExtensionData,
    pub recipe: Option<Recipe>,
    // ...
}

4. Integration Layer

MCP Servers

Goose uses the Model Context Protocol to integrate external tools:
// Built-in servers in crates/goose-mcp/
// - developer: File operations, shell commands
// - memory: Persistent knowledge storage
// - computercontroller: Desktop automation
// - autovisualiser: Chart generation
Extensions can be:
  • Built-in: Bundled with Goose
  • Stdio: External processes communicating via stdin/stdout
  • SSE: HTTP-based Server-Sent Events
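As an illustration, a stdio extension entry in config.yaml might look like the following sketch (the key names and command here are assumptions for illustration, not the authoritative schema):

```yaml
# Hypothetical config.yaml fragment - key names are illustrative.
extensions:
  my-tools:
    type: stdio            # external process speaking MCP over stdin/stdout
    cmd: npx
    args: ["-y", "@example/my-mcp-server"]
```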

AI Providers

Providers implement the Provider trait to connect to AI models:
// crates/goose/src/providers/
// Examples:
// - anthropic.rs    - Claude models
// - openai.rs       - GPT models
// - ollama.rs       - Local models
// - declarative/    - JSON-defined custom providers

Data Flow

A typical request flows through the system as follows: the user's message enters through an interface (CLI, desktop app, or custom client) and reaches the core via the REST API or ACP. The Agent assembles conversation context and system prompts, streams the request to the configured provider, dispatches any tool calls to MCP extensions, and streams the response and tool results back up to the interface.

Configuration and Customization

Configuration Files

Goose uses a layered configuration system:
~/.config/goose/
├── config.yaml              # User configuration
├── secrets.yaml             # API keys (if keyring disabled)
├── init-config.yaml         # Initial setup defaults
└── custom_providers/        # Custom provider definitions
    └── my-provider.json

Environment Variables

Configuration precedence: Environment > config.yaml > defaults
GOOSE_PROVIDER=anthropic
GOOSE_MODEL=claude-sonnet-4-20250514
ANTHROPIC_API_KEY=sk-...
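The precedence rule can be sketched as a simple fallback chain (the function and argument names below are illustrative, not Goose's actual config API):

```rust
// Illustrative resolver: an environment variable wins, then the config
// file value, then the built-in default.
fn resolve_setting(
    env_value: Option<&str>,
    file_value: Option<&str>,
    default: &str,
) -> String {
    env_value.or(file_value).unwrap_or(default).to_string()
}

fn main() {
    // Env set: it wins over the config file.
    assert_eq!(resolve_setting(Some("anthropic"), Some("openai"), "ollama"), "anthropic");
    // No env: the config.yaml value applies.
    assert_eq!(resolve_setting(None, Some("openai"), "ollama"), "openai");
    // Neither: fall back to the default.
    assert_eq!(resolve_setting(None, None, "ollama"), "ollama");
}
```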

Recipes

YAML-based configuration for complete agent experiences:
# example-recipe.yaml
title: Code Review Assistant
instructions: |
  You are a code review assistant...
extensions:
  - type: builtin
    name: developer
settings:
  goose_model: claude-sonnet-4-20250514
  max_turns: 50

Security Model

Goose implements multiple security layers:

Permission System

// crates/goose/src/permission/
pub enum PermissionLevel {
    Allow,      // Always allow
    Confirm,    // Ask user
    Deny,       // Always deny
}
Applied to:
  • Tool execution
  • File system access
  • Network requests
  • Environment variable access
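A sketch of how a permission level might gate an action (everything other than the `PermissionLevel` enum itself is illustrative; `ask_user` stands in for whatever confirmation UI the frontend provides):

```rust
#[derive(Clone, Copy)]
enum PermissionLevel {
    Allow,   // Always allow
    Confirm, // Ask user
    Deny,    // Always deny
}

// Illustrative resolver: decide whether a gated action may proceed.
// `ask_user` is a placeholder for the frontend's confirmation prompt.
fn may_execute(level: PermissionLevel, ask_user: impl Fn() -> bool) -> bool {
    match level {
        PermissionLevel::Allow => true,
        PermissionLevel::Deny => false,
        PermissionLevel::Confirm => ask_user(),
    }
}

fn main() {
    assert!(may_execute(PermissionLevel::Allow, || false));
    assert!(!may_execute(PermissionLevel::Deny, || true));
    // Confirm defers to the user's answer.
    assert!(may_execute(PermissionLevel::Confirm, || true));
    assert!(!may_execute(PermissionLevel::Confirm, || false));
}
```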

Extension Sandboxing

  • Extensions run as separate processes
  • Controlled environment variables (see Envs::DISALLOWED_KEYS)
  • Malware scanning for downloaded extensions
  • Resource limits and timeouts

Session Isolation

  • Each session has isolated state
  • Working directory restrictions
  • Extension data partitioning

Extension Points

The architecture supports customization at multiple levels:
| Layer      | Customization                                                          | Difficulty  |
| ---------- | ---------------------------------------------------------------------- | ----------- |
| UI         | Build a custom interface using the REST API or ACP                     | Medium-High |
| Provider   | Add a custom AI provider via declarative config or trait implementation | Low-Medium  |
| Extensions | Create an MCP server for custom tools                                  | Low-Medium  |
| Recipes    | Define workflows and behaviors                                         | Low         |
| Prompts    | Modify system prompts                                                  | Low         |
| Core       | Fork and modify core logic                                             | High        |

Build System

Goose uses a Rust workspace structure:
crates/
├── goose/              # Core library
├── goose-cli/          # CLI binary
├── goose-server/       # REST server binary (goosed)
├── goose-acp/          # ACP protocol implementation
├── goose-mcp/          # Built-in MCP servers
└── goose-test/         # Test utilities

ui/desktop/             # Electron application
Key build commands:
cargo build --release              # Build all binaries
just release-binary                # Build + generate OpenAPI spec
cargo test                         # Run all tests
just run-ui                        # Start desktop app

Performance Considerations

Context Management

Goose implements automatic context compaction when messages approach token limits:
// crates/goose/src/context_mgmt/
const DEFAULT_COMPACTION_THRESHOLD: f64 = 0.75;

// Compacts conversation when 75% of context window is used
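In sketch form, the trigger is a simple ratio check against the threshold (the function name below is illustrative, not the actual API in context_mgmt):

```rust
const DEFAULT_COMPACTION_THRESHOLD: f64 = 0.75;

// Returns true when the conversation has consumed enough of the model's
// context window that compaction should run.
fn should_compact(used_tokens: usize, context_window: usize) -> bool {
    used_tokens as f64 / context_window as f64 >= DEFAULT_COMPACTION_THRESHOLD
}

fn main() {
    assert!(!should_compact(70_000, 100_000)); // 70% used: keep going
    assert!(should_compact(80_000, 100_000));  // 80% used: compact
}
```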

Tool Call Parallelization

Multiple tool calls in a single model response execute in parallel:
// Agent can execute multiple independent tools simultaneously
// Controlled by tool_call_cut_off parameter
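The agent runs tool calls concurrently on async tasks; as a simplified analogue, the fan-out/fan-in pattern looks like this with plain threads (`run_tool` is a stand-in for real tool execution):

```rust
use std::thread;

// Simplified stand-in for a tool call: each "tool" just produces a string.
fn run_tool(name: &str) -> String {
    format!("{} done", name)
}

// Fan out independent tool calls to threads, then join and collect
// the results in the original order.
fn run_tools_in_parallel(names: Vec<String>) -> Vec<String> {
    let handles: Vec<_> = names
        .into_iter()
        .map(|n| thread::spawn(move || run_tool(&n)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let out = run_tools_in_parallel(vec!["read_file".into(), "shell".into()]);
    assert_eq!(out, vec!["read_file done", "shell done"]);
}
```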

Caching

  • Tool lists cached and versioned for quick access
  • Provider capabilities cached per model
  • Session data persisted to SQLite for fast retrieval
For custom distribution guidance, see the CUSTOM_DISTROS.md guide in the Goose repository.

Next Steps

  • Agents: Learn about agent orchestration and subagents
  • Providers: Explore AI provider integration
  • Extensions: Understand the MCP extension system
  • Sessions: Discover session management
