Settings
The main settings class for Fast Agent configuration.

Core Settings
Execution engine for the agent application
Base directory for runtime data. Defaults to `.fast-agent`.
Default model for agents. Format: `provider.model_name.reasoning_effort` or `provider.model?reasoning=value`. Examples: `openai.o3-mini.low`, `anthropic.claude-sonnet-4-20250514?reasoning=high`. Falls back to the `FAST_AGENT_MODEL` env var, then `gpt-5-mini.low`.
Model aliases grouped by namespace. Example: `{"$system": {"default": "gpt-5-mini"}}`
Enable automatic sampling model selection if not explicitly configured
Persist session history in the environment sessions folder
Maximum number of sessions to keep in the rolling window
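To make the core fields concrete, here is a minimal `fastagent.config.yaml` sketch. The key names are assumptions inferred from the field descriptions above, not verified against the schema:

```yaml
# fastagent.config.yaml (hypothetical key names for illustration)
default_model: "openai.o3-mini.low"   # provider.model_name.reasoning_effort form
model_aliases:                        # aliases grouped by namespace
  "$system":
    default: "gpt-5-mini"
```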
MCP Configuration
MCP server configuration and settings
Provider Settings
Settings for Anthropic models
Settings for OpenAI models
Settings for OpenAI Responses models
Settings for Open Responses models
Settings for Codex Responses models
Settings for DeepSeek models
Settings for Google models
Settings for xAI Grok models
Settings for generic OpenAI-compatible models (e.g., Ollama)
Settings for OpenRouter models
Settings for Azure OpenAI Service
Settings for Groq models
Settings for TensorZero LLM gateway
Settings for AWS Bedrock models
Settings for HuggingFace inference providers
Logging and Telemetry
Logger configuration for the agent
OpenTelemetry tracing configuration
Skills directory and marketplace configuration
Card pack registry configuration
Shell execution behavior configuration
Provider Settings
AnthropicSettings
Configuration for Anthropic models.

Anthropic API key
Override API endpoint
Default model when provider is selected without explicit model
Custom headers for all requests
Caching mode: `off` (disabled), `prompt` (cache tools+system), `auto` (same as prompt)
Cache TTL: `5m` (standard) or `1h` (extended, additional cost)
Reasoning setting. Supports effort strings (adaptive models), budget tokens (int), or toggle (bool). Use `0` or `false` to disable.
Structured output mode
Built-in web search tool configuration
Built-in web fetch tool configuration
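An Anthropic provider block might look like the sketch below. The key names are assumptions mirroring the field descriptions; the option values (`off`/`prompt`/`auto`, `5m`/`1h`) come from the documentation above:

```yaml
anthropic:
  api_key: ${ANTHROPIC_API_KEY}
  cache_mode: "prompt"   # off | prompt | auto
  cache_ttl: "5m"        # "5m" (standard) or "1h" (extended, additional cost)
```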
OpenAISettings
Configuration for OpenAI models.

OpenAI API key
Override API endpoint
Default model when provider is selected
Custom headers for all requests
Text verbosity level for Responses models
Responses transport mode. Defaults to websocket with SSE fallback.
Responses service tier: `fast` (priority) or `flex`
Unified reasoning setting (effort level or budget)
Default reasoning effort
Web search tool configuration
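Similarly, a hedged OpenAI block sketch (key names are assumptions, not verified against the schema):

```yaml
openai:
  api_key: ${OPENAI_API_KEY}
  base_url: "https://api.openai.com/v1"   # override API endpoint
  reasoning_effort: "low"                 # default reasoning effort
```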
MCP Settings
MCPSettings
Configuration for MCP servers.

Dictionary mapping server names to their configurations
MCPServerSettings
Configuration for an individual MCP server.

Server name
Server description
Transport mechanism. Auto-inferred from `url` or `command` presence.
Command to execute the server (e.g., `npx`)
Arguments for the server command
URL for SSE/HTTP transport
HTTP headers for connections
Authentication configuration
Root directories the server has access to
Environment variables for the server process
Working directory for the server command
Whether to connect automatically when agent starts
Whether to include server instructions in system prompt
Whether to automatically reconnect on session termination
Timeout in seconds for the session
Interval for MCP ping requests. Set ≤0 to disable.
Consecutive missed pings before treating connection as failed
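Putting the server fields together, a two-server `mcp` block might look like this. The key names are assumptions inferred from the field descriptions; per the transport field above, the transport is auto-inferred from whether `command` or `url` is present:

```yaml
mcp:
  servers:
    filesystem:                 # stdio transport, inferred from `command`
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
    remote:                     # HTTP transport, inferred from `url`
      url: "https://example.com/mcp"
      headers:
        Authorization: "Bearer ${MCP_TOKEN}"
```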
Shell Settings
ShellSettings
Configuration for shell execution behavior.

Maximum seconds to wait for command output before terminating. Supports duration strings like "90s", "2m", "1h".
Show timeout warnings every N seconds
Use a PTY for interactive prompt shell commands
Maximum shell output lines to display. Set to
None for no limit.Show shell command output on the console
Override model-based output byte limit.
None = auto.Policy when agent shell cwd is missing or invalid
Expose local read_text_file tool (ACP-compatible) when shell runtime is enabled
Control which local file edit tool is exposed:

- auto: Uses `apply_patch` for GPT-5/Codex models, `write_text_file` otherwise
- on: Always expose `write_text_file`
- apply_patch: Always expose `apply_patch`
- off: Disable local file edit tools
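A shell configuration sketch under the same caveat that the key names are assumptions drawn from the field descriptions:

```yaml
shell:
  timeout: "2m"            # duration string: "90s", "2m", "1h"
  use_pty: true            # PTY for interactive prompt shell commands
  max_output_lines: 200    # null for no limit
```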
Logger Settings
LoggerSettings
Configuration for logging and console output.

Logger type
Minimum logging level
Enable or disable progress display
Path to log file when `type` is 'file'
Number of events to accumulate before processing
How often to flush events in seconds
Maximum queue size for event processing
Show User/Assistant chat on console
Show MCP server tool calls on console
Truncate display of long tool calls
Enable markup in console output
Emit OSC 133 prompt marks for terminal scrollbar markers
Streaming renderer for assistant responses
Chat message layout style for console output
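A logger block sketch (key names assumed, not verified against the schema):

```yaml
logger:
  type: "file"
  path: "fast-agent.log"   # used when type is 'file'
  level: "info"            # minimum logging level
  show_chat: true          # show User/Assistant chat on console
```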
Configuration Files
Fast Agent supports layered YAML configuration:

- Project config: `fastagent.config.yaml` in project root
- Environment config: `.fast-agent/fastagent.config.yaml` (overrides project)
- Secrets: `fastagent.secrets.yaml` (merged with config)
Environment Variable Substitution
Use `${VAR_NAME}` or `${VAR_NAME:default}` syntax:
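For example, substituting secrets and endpoints from the environment (the provider keys shown are illustrative; the substitution syntax is as documented above):

```yaml
anthropic:
  api_key: ${ANTHROPIC_API_KEY}                            # no default: must be set
openai:
  base_url: ${OPENAI_BASE_URL:https://api.openai.com/v1}   # falls back to the default
```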
