The `providers` section lets you define reusable provider configurations for any OpenAI-compatible API without modifying docker-agent's source code. This is useful for self-hosted models, API proxies, enterprise gateways, and any service that implements the OpenAI chat completions API.
If a service supports the `/v1/chat/completions` endpoint, you can use it with docker-agent; no source code changes are needed.

## Provider configuration
Define a provider under the top-level `providers` key:
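For example, a minimal provider definition might look like this (a sketch; the provider name, URL, and environment variable are placeholders, not values required by docker-agent):

```yaml
# Illustrative only: "my-vllm" and its values are placeholders.
providers:
  my-vllm:
    api_type: openai_chatcompletions
    base_url: http://localhost:8000/v1
    token_key: MY_VLLM_API_KEY
```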
## Provider properties
| Property | Description | Default |
|---|---|---|
| `api_type` | API schema: `openai_chatcompletions` or `openai_responses` | `openai_chatcompletions` |
| `base_url` | Base URL for the API endpoint | — |
| `token_key` | Name of the environment variable containing the API token | — |
## API types
- `openai_chatcompletions` — Standard OpenAI Chat Completions API (`/v1/chat/completions`). Works with most OpenAI-compatible endpoints, including vLLM, Ollama, LiteLLM, and Mistral.
- `openai_responses` — OpenAI Responses API. Use this for newer OpenAI models that require the Responses API format.
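For instance, a provider that targets the Responses API only differs in its `api_type` (a sketch; the provider name and values are placeholders):

```yaml
# Illustrative only: selects the Responses API client instead of
# the default chat-completions client.
providers:
  openai-responses:
    api_type: openai_responses
    base_url: https://api.openai.com/v1
    token_key: OPENAI_API_KEY
```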
## Using a custom provider
Reference the provider by name from a model definition, then reference the model from an agent.

## Shorthand syntax
Once a provider is defined, you can reference it with the inline `provider/model` shorthand, without defining a named model:
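A sketch of the shorthand, assuming a provider named `my-vllm` and an `agents` section like the following (the `agents`/`model` key names and the model ID are assumptions, not verified against docker-agent's schema):

```yaml
# Illustrative only: "provider/model" shorthand in an agent definition.
agents:
  assistant:
    model: my-vllm/llama-3.1-8b-instruct
```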
The provider's `base_url`, `token_key`, and `api_type` are applied automatically.
## How it works
When you reference a custom provider, docker-agent:

- Applies the provider's `base_url` to the model (if not already set on the model).
- Applies the provider's `token_key` to the model (if not already set on the model).
- Stores the provider's `api_type` in `provider_opts.api_type`.
- Routes the request to the appropriate API client.
## Examples
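A hedged end-to-end sketch tying the pieces together — the provider, model, and agent names, plus the `models`/`agents` key layout, are assumptions rather than docker-agent's verified schema:

```yaml
# Illustrative only: provider -> named model -> agent.
providers:
  my-vllm:
    api_type: openai_chatcompletions
    base_url: http://localhost:8000/v1
    token_key: MY_VLLM_API_KEY

models:
  local-llama:
    provider: my-vllm
    model: llama-3.1-8b-instruct

agents:
  assistant:
    model: local-llama
```

With the shorthand syntax, the named model can be skipped and the agent can reference `my-vllm/llama-3.1-8b-instruct` directly.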
## Related
- Local models — Ollama, vLLM, and other local servers
- Model configuration — Full model config reference