
Agent Configuration

The Agent class provides extensive configuration options to customize behavior, performance, and capabilities.

Core Parameters

Model Configuration

model_name (str, default: "gpt-4o-mini")
The name of the language model to use. Supports any model from OpenAI, Anthropic, Groq, Cohere, and more via LiteLLM.
agent = Agent(model_name="gpt-4o")
agent = Agent(model_name="claude-sonnet-4-5")
agent = Agent(model_name="groq/llama-3.1-70b")
llm (Any, default: None)
Pre-configured LLM instance. If not provided, one is created automatically from model_name.
from swarms.utils.litellm_wrapper import LiteLLM

llm = LiteLLM(model_name="gpt-4o", temperature=0.5)
agent = Agent(llm=llm)
temperature (float, default: 0.5)
Controls randomness in model outputs (0.0 = deterministic, 1.0 = creative).
# Deterministic for code generation
agent = Agent(model_name="gpt-4o", temperature=0.1)

# Creative for writing
agent = Agent(model_name="gpt-4o", temperature=0.9)
max_tokens (int, default: 4096)
Maximum number of tokens to generate in a single response.
agent = Agent(model_name="gpt-4o", max_tokens=8192)
context_length (int, default: model-dependent)
Maximum context window size. Automatically set based on the model.
agent = Agent(model_name="gpt-4o", context_length=128000)

Agent Identity

agent_name (str, default: "swarm-worker-01")
Unique name for the agent. Used in multi-agent systems and logging.
agent = Agent(
    agent_name="Financial-Analyst",
    model_name="gpt-4o"
)
agent_description (str, default: auto-generated)
Description of the agent’s purpose and capabilities.
agent = Agent(
    agent_name="Data-Analyst",
    agent_description="Expert in data analysis, visualization, and statistical modeling",
    model_name="gpt-4o"
)
system_prompt (str, default: built-in system prompt)
The system prompt that defines agent behavior and expertise.
SYSTEM_PROMPT = """
You are a senior software engineer specializing in:
- Clean code architecture
- Test-driven development
- Code review best practices

Provide detailed, well-documented code solutions.
"""

agent = Agent(
    system_prompt=SYSTEM_PROMPT,
    model_name="gpt-4o"
)

Execution Control

max_loops (Union[int, str], default: 1)
Number of execution loops. Set to “auto” for autonomous mode.
# Single execution
agent = Agent(max_loops=1)

# Multi-step reasoning
agent = Agent(max_loops=5)

# Autonomous mode
agent = Agent(max_loops="auto")
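The loop semantics above can be sketched in plain Python. This is an illustration of the pattern, not the library's implementation: `call_llm` and the `"DONE"` stop signal are stand-ins for the real model call and termination check.

```python
import time

def run_loops(call_llm, task, max_loops=1, loop_interval=0):
    """Illustrative loop driver: run up to max_loops passes, or keep
    looping in "auto" mode until the model signals completion."""
    history = [task]
    loop = 0
    while True:
        loop += 1
        response = call_llm(history)
        history.append(response)
        if max_loops == "auto":
            if "DONE" in response:   # stand-in stop signal
                break
        elif loop >= max_loops:
            break
        if loop_interval:
            time.sleep(loop_interval)
    return history

# Stub LLM that finishes on its third pass
calls = iter(["thinking...", "refining...", "DONE: final answer"])
result = run_loops(lambda h: next(calls), "Analyze data", max_loops="auto")
```

With a fixed integer, the loop simply runs that many passes; in "auto" mode it runs until the stop condition fires.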
loop_interval (int, default: 0)
Delay in seconds between loops.
agent = Agent(max_loops=5, loop_interval=1)  # 1 second delay
retry_attempts (int, default: 3)
Number of retry attempts for failed LLM calls.
agent = Agent(retry_attempts=5)
retry_interval (int, default: 1)
Delay in seconds between retry attempts.
agent = Agent(retry_attempts=3, retry_interval=2)
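Together, retry_attempts and retry_interval implement a simple retry-with-delay policy. A minimal sketch of that pattern (not the library's actual code; `flaky` is a stub for a failing LLM call):

```python
import time

def call_with_retries(fn, retry_attempts=3, retry_interval=1):
    """Illustrative retry wrapper: re-invoke fn after a failure,
    waiting retry_interval seconds between attempts."""
    last_error = None
    for attempt in range(retry_attempts):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            if attempt < retry_attempts - 1:
                time.sleep(retry_interval)
    raise last_error

# Stub call that fails twice, then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_retries(flaky, retry_attempts=3, retry_interval=0)
```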
timeout (int, default: None)
Timeout in seconds for agent execution.
agent = Agent(timeout=300)  # 5 minute timeout

Output Configuration

output_type (str, default: "str-all-except-first")
Format for agent output. Options include "str", "str-all-except-first", "list", "json", "dict", "yaml", and "xml".
# String output
agent = Agent(output_type="str")

# JSON output
agent = Agent(output_type="json")

# Dictionary output
agent = Agent(output_type="dict")
streaming_on (bool, default: False)
Enable basic streaming with formatted panels.
agent = Agent(streaming_on=True)
stream (bool, default: False)
Enable detailed token-by-token streaming with metadata.
agent = Agent(stream=True)
response = agent.run("Tell me a story")  # Streams each token
streaming_callback (Callable, default: None)
Callback function to receive streaming tokens in real-time.
def on_token(token: str):
    print(f"Token: {token}", end="", flush=True)

agent = Agent(streaming_callback=on_token)
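Because the callback receives one token at a time, a common pattern is a small callable object that both displays and buffers tokens so the full response can be reassembled afterwards. A sketch (the `TokenBuffer` class is illustrative, not part of swarms):

```python
class TokenBuffer:
    """Collects streamed tokens so the full response can be
    reassembled after streaming finishes."""
    def __init__(self):
        self.tokens = []

    def __call__(self, token: str) -> None:
        print(token, end="", flush=True)   # live display
        self.tokens.append(token)          # keep for later

    def text(self) -> str:
        return "".join(self.tokens)

buffer = TokenBuffer()
# agent = Agent(streaming_callback=buffer)  # pass the buffer as the callback
for token in ["Hello", ", ", "world"]:      # simulated token stream
    buffer(token)
```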
verbose (bool, default: False)
Enable detailed logging output.
agent = Agent(verbose=True)
print_on (bool, default: True)
Enable printing of agent responses.
agent = Agent(print_on=True)

Memory and History

return_history (bool, default: False)
Return full conversation history instead of just final response.
agent = Agent(return_history=True)
response = agent.run("Hello")  # Returns full conversation
user_name (str, default: "Human")
Name to use for user messages in conversation history.
agent = Agent(user_name="John")
dynamic_context_window (bool, default: True)
Automatically manage context window to prevent overflow.
agent = Agent(dynamic_context_window=True)
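Conceptually, dynamic context management means trimming old history when it would exceed the window. A minimal sliding-window sketch, assuming character counts as a stand-in for tokens (the library's actual strategy may differ, e.g. token-based counting or compression):

```python
def trim_to_window(messages, context_length, count_tokens=len):
    """Illustrative sliding-window trim: drop the oldest messages
    until the history fits within the context window."""
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m) for m in trimmed) > context_length:
        trimmed.pop(0)   # drop oldest first
    return trimmed

history = ["a" * 50, "b" * 30, "c" * 20]   # 100 "tokens" total
kept = trim_to_window(history, context_length=60)
```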

Advanced Features

dynamic_temperature_enabled (bool, default: False)
Randomly adjust temperature between loops for varied outputs.
agent = Agent(dynamic_temperature_enabled=True)
reasoning_prompt_on (bool, default: True)
Add reasoning prompts to guide multi-step thinking.
agent = Agent(max_loops=5, reasoning_prompt_on=True)
interactive (bool, default: False)
Enable interactive mode for conversational agents.
agent = Agent(interactive=True)
dashboard (bool, default: False)
Display agent dashboard on initialization.
agent = Agent(dashboard=True)

State Management

autosave (bool, default: False)
Automatically save agent state after each execution.
agent = Agent(autosave=True)
saved_state_path (str, default: auto-generated)
Path to save agent state.
agent = Agent(
    autosave=True,
    saved_state_path="./states/my_agent.json"
)
load_state_path (str, default: None)
Path to load previous agent state from.
agent = Agent(load_state_path="./states/my_agent.json")
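The save/load pair forms a simple persistence roundtrip. A sketch of the idea using plain JSON — the actual on-disk state format is internal to swarms, so `save_state`/`load_state` and the state keys here are illustrative only:

```python
import json
import tempfile
from pathlib import Path

def save_state(path, state):
    """Persist agent state as JSON (illustrative format)."""
    Path(path).write_text(json.dumps(state))

def load_state(path):
    return json.loads(Path(path).read_text())

state = {"agent_name": "Data-Analyst", "history": ["Hello"]}
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "my_agent.json"
    save_state(path, state)
    restored = load_state(path)
```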

Reliability

fallback_models (List[str], default: None)
List of fallback models to try, in order, if the primary model fails.
agent = Agent(
    fallback_models=[
        "gpt-4o",
        "gpt-4o-mini",
        "gpt-3.5-turbo"
    ]
)
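The fallback behavior amounts to trying each model in order until one succeeds. A minimal sketch of that chain (not the library's implementation; `fake_call` stubs a provider where the primary model is unavailable):

```python
def run_with_fallbacks(models, call_model, task):
    """Illustrative fallback chain: try each model in order and
    return the first successful response."""
    errors = {}
    for model in models:
        try:
            return model, call_model(model, task)
        except Exception as exc:
            errors[model] = exc
    raise RuntimeError(f"All models failed: {errors}")

def fake_call(model, task):        # stub: primary model is down
    if model == "gpt-4o":
        raise TimeoutError("rate limited")
    return f"{model} answered"

model, answer = run_with_fallbacks(["gpt-4o", "gpt-4o-mini"], fake_call, "hi")
```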

Performance

mode (str, default: "standard")
Agent execution mode. Options: “interactive”, “fast”, “standard”.
# Fast mode (no printing, minimal overhead)
agent = Agent(mode="fast")

# Interactive mode
agent = Agent(mode="interactive")
top_p (float, default: 0.9)
Nucleus sampling parameter for model generation.
agent = Agent(top_p=0.95)

Example Configurations

Production Agent

production_agent = Agent(
    agent_name="Production-Agent",
    agent_description="Production-ready agent with reliability features",
    model_name="gpt-4o",
    fallback_models=["gpt-4o", "gpt-4o-mini"],
    max_loops=1,
    temperature=0.3,
    max_tokens=4096,
    retry_attempts=5,
    retry_interval=2,
    timeout=300,
    autosave=True,
    verbose=True,
    dynamic_context_window=True,
)

Research Agent

research_agent = Agent(
    agent_name="Research-Agent",
    system_prompt="Expert researcher providing detailed analysis",
    model_name="gpt-4o",
    max_loops=3,
    temperature=0.5,
    reasoning_prompt_on=True,
    verbose=True,
    return_history=True,
)

Fast Batch Processing Agent

batch_agent = Agent(
    agent_name="Batch-Processor",
    model_name="gpt-4o-mini",
    max_loops=1,
    mode="fast",  # Disable printing for performance
    temperature=0.2,
    print_on=False,
    verbose=False,
)

Next Steps

Agent Memory

Configure conversation history and memory

Agent Tools

Add tools to extend capabilities

Reference

Location in source: swarms/structs/agent.py:352-454
