docker-agent is configured with a YAML file. You can define one or more agents, the models they use, the tools available to them, and optional features like permissions, hooks, and structured output.

Config file structure

A docker-agent YAML config has these top-level sections:
agent.yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/docker/docker-agent/main/agent-schema.json

# 1. Version — config schema version (v0–v7, current is v7)
version: "7"

# 2. Metadata — for agent distribution via OCI registries
metadata:
  author: my-org
  description: My helpful agent
  version: "1.0.0"

# 3. Models — named model definitions (optional; models can also be referenced inline)
models:
  claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000

# 4. Agents — one or more agent definitions (required)
agents:
  root:
    model: claude
    description: A helpful assistant
    instruction: You are helpful.
    toolsets:
      - type: think

# 5. RAG — retrieval-augmented generation sources (optional)
rag:
  docs:
    docs: ["./docs"]
    strategies:
      - type: chunked-embeddings
        model: openai/text-embedding-3-small

# 6. Providers — custom provider definitions (optional)
providers:
  my_provider:
    api_type: openai_chatcompletions
    base_url: https://api.example.com/v1
    token_key: MY_API_KEY

# 7. Permissions — global tool permission rules (optional)
permissions:
  allow: ["read_*"]
  deny: ["shell:cmd=sudo*"]

Minimal config

The simplest possible configuration — a single agent with an inline model reference:
agent.yaml
agents:
  root:
    model: openai/gpt-4o
    description: A helpful assistant
    instruction: You are a helpful assistant.
No version, no models section — just an agent. docker-agent fills in sensible defaults.

Inline vs named models

Models can be referenced inline or defined in the models section.
Inline: use provider/model-name directly on the agent. Quick and concise:
agents:
  root:
    model: openai/gpt-4o
    description: Assistant
    instruction: You are helpful.
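For comparison, here is the same agent using a named model defined in the models section, as in the full config structure above. The model name claude is just a label you choose:

```yaml
models:
  claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000

agents:
  root:
    model: claude        # references the named model above
    description: Assistant
    instruction: You are helpful.
```

Named models are useful when several agents share one definition, or when you need to set parameters such as max_tokens that an inline reference cannot express.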

Config versioning

docker-agent configs are versioned. The current version is 7. Add it at the top of your config to ensure consistent behavior:
version: "7"
When you load an older config, docker-agent automatically migrates it to the latest schema. Supported versions: 0 through 7.
Model references are case-sensitive. openai/gpt-4o is not the same as openai/GPT-4o.

Validation

docker-agent validates your configuration at startup:
  • All sub_agents must reference agents defined in the config
  • Named model references must exist in the models section
  • Provider names must be valid (openai, anthropic, google, amazon-bedrock, dmr, etc.)
  • Required API key environment variables must be set
  • Tool-specific fields are validated (e.g., path is only valid for the memory toolset)
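As an illustration of the first rule, the following config would fail validation at startup because the agent helper is referenced but never defined. (The exact shape of the sub_agents field, a list of agent names, is covered in the Agents reference; this sketch assumes that form.)

```yaml
agents:
  root:
    model: openai/gpt-4o
    description: Coordinator
    instruction: Delegate work to sub-agents.
    sub_agents: ["helper"]   # validation error: no agent named "helper" is defined
```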

Environment variables

API keys are read from environment variables — never store secrets in config files.
Variable             Provider
OPENAI_API_KEY       OpenAI
ANTHROPIC_API_KEY    Anthropic
GOOGLE_API_KEY       Google Gemini
MISTRAL_API_KEY      Mistral
XAI_API_KEY          xAI
NEBIUS_API_KEY       Nebius

Variable                     Description
DOCKER_AGENT_AUTO_INSTALL    Set to false to disable automatic tool installation
DOCKER_AGENT_TOOLS_DIR       Override the base directory for installed tools (default: ~/.cagent/tools/)
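For example, you might set the two behavior variables in your shell before launching the tool (the directory path here is illustrative):

```shell
# Disable automatic tool installation
export DOCKER_AGENT_AUTO_INSTALL=false

# Install tools into a custom directory instead of the default ~/.cagent/tools/
export DOCKER_AGENT_TOOLS_DIR="$HOME/.docker-agent/tools"
```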

JSON schema

For editor autocompletion and inline validation, add this to the top of your YAML file:
# yaml-language-server: $schema=https://raw.githubusercontent.com/docker/docker-agent/main/agent-schema.json

Metadata section

Optional metadata for agent distribution via OCI registries:
metadata:
  author: my-org
  license: Apache-2.0
  description: A helpful coding assistant
  readme: |
    This agent helps with coding tasks.
  version: "1.0.0"
Field         Description
author        Author or organization name
license       License identifier (e.g., Apache-2.0, MIT)
description   Short description of the agent
readme        Longer Markdown description shown in registries
version       Semantic version string

Custom providers section

Define reusable provider configurations for custom or self-hosted endpoints:
providers:
  azure:
    api_type: openai_chatcompletions
    base_url: https://my-resource.openai.azure.com/openai/deployments/gpt-4o
    token_key: AZURE_OPENAI_API_KEY

models:
  azure_gpt:
    provider: azure   # references the custom provider above
    model: gpt-4o

agents:
  root:
    model: azure_gpt
Field       Description
api_type    API schema: openai_chatcompletions (default) or openai_responses
base_url    Base URL for the API endpoint (required)
token_key   Environment variable name containing the API token

Configuration reference

Agents

All agent fields: model, instruction, toolsets, sub-agents, hooks, and more.

Models

Provider setup, parameters, thinking budget, and routing.

Tools

Built-in tools, MCP tools, Docker MCP, and tool filtering.

Hooks

Run shell commands at lifecycle events.

Permissions

Control which tools auto-approve, require confirmation, or are blocked.

Routing

Route requests to different models based on message content.

Sandbox

Run agents in an isolated Docker container.

Structured output

Constrain agent responses to a JSON schema.
