
Overview

The Conversation class manages conversation history for agents, allowing for addition, deletion, and retrieval of messages. It supports saving and loading in JSON/YAML formats, automatic token counting, and dynamic context window management.

Installation

pip install -U swarms

Attributes

  • id (str, default: auto-generated): Unique identifier for the conversation
  • name (str, default: "conversation-test"): Name of the conversation
  • system_prompt (Optional[str], default: None): The system prompt for the conversation
  • time_enabled (bool, default: False): Flag to enable time tracking for messages
  • autosave (bool, default: False): Flag to enable automatic saving of conversation history
  • save_filepath (str, default: None): File path for saving the conversation history
  • context_length (int, default: 8192): Maximum number of tokens allowed in the conversation history
  • user (str, default: "User"): The user identifier for messages
  • token_count (bool, default: False): Flag to enable token counting for messages
  • export_method (str, default: "json"): Export format: "json" or "yaml"
  • dynamic_context_window (bool, default: True): Enable dynamic context window management
  • caching (bool, default: True): Enable conversation caching

Methods

add()

Add a message to the conversation history.
def add(
    self,
    role: str,
    content: Union[str, dict, list, Any],
    metadata: Optional[dict] = None,
    category: Optional[str] = None,
)
Parameters:
  • role (str): The role of the speaker (e.g., "User", "System", "Agent")
  • content (Union[str, dict, list, Any]): The content of the message
  • metadata (Optional[dict]): Optional metadata for the message
  • category (Optional[str]): Optional category for the message (e.g., "input", "output")
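
Conceptually, each call to add() appends a message record to the history. The sketch below illustrates that pattern in plain Python; the field names and timestamp format are illustrative, not the library's exact schema:

```python
from datetime import datetime, timezone
from typing import Any, Optional

def add_message(
    history: list,
    role: str,
    content: Any,
    metadata: Optional[dict] = None,
    category: Optional[str] = None,
) -> dict:
    """Append a message record to a history list and return it."""
    message = {
        "role": role,
        "content": content,
        "metadata": metadata,
        "category": category,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    history.append(message)
    return message

history = []
add_message(history, "user", "Hello", category="input")
add_message(history, "assistant", "Hi there", category="output")
```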

return_history_as_string()

Return the conversation history as a formatted string.
def return_history_as_string(self) -> str:
Returns: String representation of the conversation history

export()

Export the conversation to a file based on the export method.
def export(self, force: bool = True)
Parameters:
  • force (bool): If True, saves regardless of autosave setting

load()

Load conversation history from a file (auto-detects format).
def load(self, filename: str)
Parameters:
  • filename (str): Path to the file to load from
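
Export and load amount to serializing the message list to disk and reading it back. A self-contained sketch of a JSON round-trip using the standard json module (the on-disk layout here is illustrative, not the library's exact file format):

```python
import json
from pathlib import Path

def export_messages(messages: list, filepath: str) -> None:
    """Write the message list to disk as pretty-printed JSON."""
    Path(filepath).write_text(json.dumps(messages, indent=2))

def load_messages(filepath: str) -> list:
    """Read a message list back from a JSON file."""
    return json.loads(Path(filepath).read_text())

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
]
export_messages(messages, "conversation.json")
restored = load_messages("conversation.json")
```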

search()

Search for messages containing a keyword.
def search(self, keyword: str) -> list
Parameters:
  • keyword (str): The keyword to search for
Returns: List of messages containing the keyword
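
The search is essentially a filter over message contents. A minimal sketch (case-insensitive matching is an assumption here; check the library's behavior before relying on it):

```python
def search_messages(messages: list, keyword: str) -> list:
    """Return all messages whose content contains the keyword (case-insensitive)."""
    return [
        m for m in messages
        if keyword.lower() in str(m.get("content", "")).lower()
    ]

messages = [
    {"role": "user", "content": "What is the weather in Tokyo?"},
    {"role": "assistant", "content": "It is sunny."},
]
hits = search_messages(messages, "weather")
```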

truncate_memory_with_tokenizer()

Truncate conversation history based on token count using tokenizer.
def truncate_memory_with_tokenizer(self)
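
The truncation logic can be sketched as dropping the oldest messages until the history fits the token budget. Here tokens are approximated by whitespace-separated words; the real method uses a proper tokenizer:

```python
def truncate_to_budget(messages: list, max_tokens: int) -> list:
    """Drop oldest messages until the rough token total fits max_tokens."""
    def rough_tokens(m: dict) -> int:
        return len(str(m["content"]).split())

    kept = list(messages)
    total = sum(rough_tokens(m) for m in kept)
    while kept and total > max_tokens:
        total -= rough_tokens(kept.pop(0))  # drop the oldest message first
    return kept

messages = [
    {"role": "user", "content": "one two three four"},
    {"role": "assistant", "content": "five six"},
    {"role": "user", "content": "seven eight nine"},
]
trimmed = truncate_to_budget(messages, max_tokens=6)
```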

export_and_count_categories()

Export all messages with category "input" or "output" and count their tokens.
def export_and_count_categories(self) -> Dict[str, int]
Returns: Dictionary with input_tokens, output_tokens, and total_tokens
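
The per-category tally can be sketched as a single pass over the history, again approximating tokens with word counts (the library counts real tokens):

```python
def count_category_tokens(messages: list) -> dict:
    """Tally rough token counts for 'input' and 'output' messages."""
    counts = {"input_tokens": 0, "output_tokens": 0}
    for m in messages:
        tokens = len(str(m.get("content", "")).split())
        if m.get("category") == "input":
            counts["input_tokens"] += tokens
        elif m.get("category") == "output":
            counts["output_tokens"] += tokens
    counts["total_tokens"] = counts["input_tokens"] + counts["output_tokens"]
    return counts

messages = [
    {"role": "user", "content": "hello there", "category": "input"},
    {"role": "assistant", "content": "hi how are you", "category": "output"},
]
stats = count_category_tokens(messages)
```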

Usage Examples

Basic Usage

from swarms.structs import Conversation

# Create a conversation
conversation = Conversation(
    name="my-conversation",
    system_prompt="You are a helpful assistant.",
    time_enabled=True,
    autosave=True,
    token_count=True,
    context_length=8192
)

# Add messages
conversation.add("user", "Hello, how are you?")
conversation.add("assistant", "I am doing well, thanks.")
conversation.add("user", "What is the weather in Tokyo?")

# Get conversation as string
print(conversation.return_history_as_string())

Export and Load

# Export to JSON
conversation.export_method = "json"
conversation.export()

# Load from file
conversation = Conversation.load_conversation(
    name="my-conversation",
    load_filepath="conversation_my-conversation.json"
)

Token Management

# Enable token counting and context management
conversation = Conversation(
    token_count=True,
    context_length=4096,
    dynamic_context_window=True
)

# Add messages with categories for tracking
conversation.add("user", "My input", category="input")
conversation.add("assistant", "My response", category="output")

# Count tokens by category
tokens = conversation.export_and_count_categories()
print(f"Input tokens: {tokens['input_tokens']}")
print(f"Output tokens: {tokens['output_tokens']}")
print(f"Total tokens: {tokens['total_tokens']}")

Search and Query

# Search for messages
results = conversation.search("weather")

# Get last message
last_message = conversation.get_last_message_as_string()

# Get specific message by index
message = conversation.query(0)

Features

  • Automatic Saving: Enable autosave to automatically persist conversation history
  • Token Management: Track token counts and automatically truncate based on context length
  • Multiple Export Formats: Save as JSON or YAML
  • Dynamic Context Window: Automatically manage conversation length to fit context limits
  • Message Search: Search through conversation history by keyword
  • Categorization: Tag messages with categories for organized tracking
  • Time Tracking: Optionally track timestamps for all messages
