The Goose server (goosed) exposes a REST API for all Goose functionality, enabling web applications, remote clients, and multi-user deployments. This guide covers deployment strategies, configuration, and best practices.

Overview

The goose-server crate provides:
  • REST API: HTTP endpoints for session management, message streaming, and configuration
  • Multi-user support: Isolated sessions per user
  • Extension management: Dynamic MCP server configuration
  • Streaming responses: Server-Sent Events (SSE) for real-time AI responses

Architecture

The server wraps the core goose agent behind HTTP routes: clients create sessions, stream messages over Server-Sent Events, and manage extensions through the endpoints described in the API Reference below.

Quick Start

Running Locally

# Build the server
cargo build --release --package goose-server

# Run the server
./target/release/goosed
The server starts on http://127.0.0.1:3000 by default.

Using Docker

Goose provides a multi-stage Dockerfile for minimal production images:
# Dockerfile (from source repository)
FROM rust:1.82-bookworm AS builder

# Build dependencies and goose-cli (includes server)
RUN cargo build --release --package goose-cli

FROM debian:bookworm-slim

# Runtime dependencies only
RUN apt-get update && apt-get install -y \
    ca-certificates libssl3 libdbus-1-3 curl git

COPY --from=builder /build/target/release/goose /usr/local/bin/goose

ENTRYPOINT ["/usr/local/bin/goose"]
CMD ["--help"]
Build and run:
# Build image
docker build -t goose:latest .

# Run server
docker run -p 3000:3000 \
  -e GOOSE_PROVIDER=anthropic \
  -e ANTHROPIC_API_KEY=your-key \
  goose:latest server

Docker Compose

version: '3.8'

services:
  goose:
    image: goose:latest
    command: server
    ports:
      - "3000:3000"
    environment:
      - GOOSE_HOST=0.0.0.0
      - GOOSE_PORT=3000
      - GOOSE_PROVIDER=anthropic
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - goose-data:/home/goose/.local/share/goose
      - goose-config:/home/goose/.config/goose

volumes:
  goose-data:
  goose-config:

Configuration

Environment Variables

Variable                   Default            Description
GOOSE_HOST                 127.0.0.1          Bind address
GOOSE_PORT                 3000               Listen port
GOOSE_PROVIDER             (required)         AI provider (e.g., anthropic, openai)
GOOSE_MODEL                Provider default   Model to use
ANTHROPIC_API_KEY          -                  Anthropic API key
OPENAI_API_KEY             -                  OpenAI API key
GOOSE_DISABLE_TELEMETRY    false              Disable telemetry
The server uses the same configuration system as the CLI. See crates/goose-server/src/configuration.rs for implementation details.

Configuration File

Place config.yaml in ~/.config/goose/ (or use GOOSE_CONFIG_DIR):
GOOSE_PROVIDER: anthropic
GOOSE_MODEL: claude-sonnet-4-20250514

# Custom provider settings
api_timeout: 60
max_retries: 3

API Reference

OpenAPI Specification

The server provides an OpenAPI spec at ui/desktop/openapi.json. Generate it after server changes:
just generate-openapi

Key Endpoints

Session Management

Create Session
POST /sessions
Content-Type: application/json

{
  "cwd": "/path/to/project",
  "name": "My Session"
}
Response:
{
  "id": "session-uuid",
  "name": "My Session",
  "working_dir": "/path/to/project",
  "created_at": "2026-03-04T12:00:00Z"
}
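From a client, creating a session is a plain JSON POST. A minimal Python sketch using only the standard library (the URL path and field names come from the request above; the helper names are ours, not part of any official client):

```python
import json
import urllib.request

def build_create_session_request(base_url, cwd, name):
    """Build the POST /sessions request shown above (helper name is illustrative)."""
    body = json.dumps({"cwd": cwd, "name": name}).encode()
    return urllib.request.Request(
        f"{base_url}/sessions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def create_session(base_url, cwd, name):
    """Send the request and return the parsed session object."""
    with urllib.request.urlopen(build_create_session_request(base_url, cwd, name)) as resp:
        return json.load(resp)
```

The returned dict carries the `id` you pass to the message and session endpoints below.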
Get Session
GET /sessions/{id}
List Sessions
GET /sessions

Message Streaming

Send Message (Server-Sent Events)
POST /sessions/{id}/messages
Content-Type: application/json

{
  "content": "What files are in this directory?"
}
Response (SSE stream):
event: message_chunk
data: {"type":"text","text":"Let me check"}

event: tool_call
data: {"tool":"list_files","status":"running"}

event: tool_result
data: {"tool":"list_files","result":"..."}

event: message_complete
data: {"stop_reason":"end_turn"}
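Consuming this stream only requires splitting on blank lines and reading the `event:`/`data:` fields. A minimal parser sketch in Python (stdlib only; production clients may prefer a dedicated SSE library):

```python
def parse_sse(stream_text):
    """Split an SSE stream into (event, data) pairs.

    Events are separated by blank lines; each carries an `event:` name and
    one or more `data:` lines (joined with newlines, per the SSE format).
    """
    events, name, data = [], None, []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif not line and name is not None:
            events.append((name, "\n".join(data)))
            name, data = None, []
    if name is not None:  # stream may not end with a blank line
        events.append((name, "\n".join(data)))
    return events
```

Feeding it the sample stream above yields `message_chunk`, `tool_call`, `tool_result`, and `message_complete` events in order, with each `data` payload ready for `json.loads`.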

Extension Management

List Extensions
GET /extensions
Enable Extension
POST /extensions/{name}/enable
Disable Extension
POST /extensions/{name}/disable
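Toggling an extension is a bare POST with no request body. A stdlib Python sketch (the helper name is ours):

```python
import urllib.request

def build_toggle_request(base_url, name, enable=True):
    """Build POST /extensions/{name}/enable (or /disable) as listed above."""
    action = "enable" if enable else "disable"
    return urllib.request.Request(
        f"{base_url}/extensions/{name}/{action}", method="POST"
    )
```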

Production Deployment

Security Considerations

The server does not include built-in authentication. Deploy behind a reverse proxy with authentication for production use.
Recommended setup:
# nginx.conf
server {
    listen 443 ssl;
    server_name goose.example.com;
    
    ssl_certificate /etc/ssl/certs/goose.crt;
    ssl_certificate_key /etc/ssl/private/goose.key;
    
    # Authentication
    auth_basic "Goose Server";
    auth_basic_user_file /etc/nginx/.htpasswd;
    
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        
        # SSE support
        proxy_set_header Connection '';
        proxy_buffering off;
        proxy_cache off;
        
        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Extension Allowlist

Restrict which MCP servers can be loaded using an allowlist:
# Set allowlist URL
export GOOSE_ALLOWLIST=https://example.com/goose-allowlist.yaml
Allowlist format:
extensions:
  - id: slack
    command: uvx mcp_slack
  - id: github
    command: uvx mcp_github
The allowlist prevents command injection by rejecting additional arguments. See crates/goose-server/ALLOWLIST.md for details.
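The rejection rule amounts to an exact match against the configured command string. A small Python sketch of that logic (the function and entry shape are illustrative, not the server's actual code):

```python
def is_allowed(extension_id, command, allowlist):
    """Allow an extension only if its command matches an entry exactly.

    Exact string comparison means "uvx mcp_slack --debug" is rejected even
    though "uvx mcp_slack" is allowed: no extra arguments can be injected.
    """
    return any(
        entry["id"] == extension_id and entry["command"] == command
        for entry in allowlist
    )
```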

Resource Limits

  • Memory: Goose sessions store conversation history in memory; plan for ~10-50MB per active session.
  • CPU: primarily bound by AI provider latency; Goose itself is lightweight.
  • Storage: sessions are persisted to disk under ~/.local/share/goose/sessions/ at roughly 1-5KB per message.
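These numbers translate directly into capacity planning. A rough estimate, assuming the conservative 50MB-per-session figure above plus fixed headroom for the process itself (both parameters are assumptions to tune for your workload):

```python
def max_active_sessions(memory_limit_mb, per_session_mb=50, headroom_mb=128):
    """Sessions that fit within a memory limit, after fixed process headroom."""
    return max(0, (memory_limit_mb - headroom_mb) // per_session_mb)

# With the 1Gi limit from the Kubernetes example: (1024 - 128) // 50 = 17
```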

Monitoring

Goose uses structured logging via tracing. Configure log levels:
export RUST_LOG=info,goose=debug
Log to file:
goosed 2>&1 | tee -a goose-server.log
For production monitoring, integrate with OpenTelemetry (see Telemetry).

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: goose-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: goose
  template:
    metadata:
      labels:
        app: goose
    spec:
      containers:
      - name: goose
        image: goose:latest
        command: ["goose", "server"]
        env:
        - name: GOOSE_HOST
          value: "0.0.0.0"
        - name: GOOSE_PROVIDER
          value: "anthropic"
        - name: ANTHROPIC_API_KEY
          valueFrom:
            secretKeyRef:
              name: goose-secrets
              key: anthropic-api-key
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: sessions
          mountPath: /home/goose/.local/share/goose
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "1Gi"
            cpu: "500m"
      volumes:
      - name: sessions
        persistentVolumeClaim:
          claimName: goose-sessions
---
apiVersion: v1
kind: Service
metadata:
  name: goose-service
spec:
  selector:
    app: goose
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer

Scaling Considerations

Stateful Sessions

Sessions are currently stored per-instance. For multi-instance deployments:
  1. Sticky sessions: Route users to the same instance
  2. Shared storage: Mount a shared filesystem for session data
  3. Session migration: Export/import sessions between instances (future feature)

Database Integration

For persistent session storage, consider implementing a custom session manager:
// Custom session storage (sketch). Rust does not allow inherent impls on
// types from another crate, so add behavior via an extension trait.
use anyhow::Result;
use goose::session::{Session, SessionManager};

trait PostgresStore {
    async fn save_to_postgres(&self, session: &Session) -> Result<()>;
}

impl PostgresStore for SessionManager {
    async fn save_to_postgres(&self, session: &Session) -> Result<()> {
        // Your database logic (e.g. sqlx or tokio-postgres)
        Ok(())
    }
}
See crates/goose/src/session/session_manager.rs for the session storage interface.

Troubleshooting

Server won’t start

# Check configuration
goose configure list

# Verify provider credentials
echo $ANTHROPIC_API_KEY

# Test with minimal config
GOOSE_PROVIDER=anthropic ANTHROPIC_API_KEY=sk-... goosed

Connection timeouts

Increase timeout for slow AI providers:
# config.yaml
api_timeout: 120  # seconds

SSE not working

Ensure your reverse proxy doesn’t buffer responses:
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 300s;

Resources

  • Server implementation: crates/goose-server/src/
  • Route handlers: crates/goose-server/src/routes/
  • OpenAPI spec: ui/desktop/openapi.json
  • Configuration: crates/goose-server/src/configuration.rs
