
Overview

AWS Strands is a comprehensive Python framework for building production-grade AI agents with built-in session management, multi-agent orchestration, and enterprise features like observability and guardrails.

When to Use AWS Strands

  • Production Deployments: Enterprise-ready with observability and monitoring
  • Session Management: Built-in conversation persistence and state management
  • Human-in-the-Loop: Agents that request human input and approval
  • MCP Integration: Native Model Context Protocol support
  • Multi-Agent Systems: Complex workflows with agent handoffs and graphs
  • Guardrails: Safety and content filtering requirements

Installation

# Installs the `strands` package
pip install strands-agents

# Installs the `strands_tools` package
pip install strands-agents-tools

Additional Dependencies

# For LiteLLM (most common)
pip install litellm

# For observability
pip install opentelemetry-api opentelemetry-sdk

# For specific providers
pip install anthropic  # For Claude
pip install openai     # For OpenAI

Core Concepts

Agent

The fundamental building block in Strands:
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands_tools import http_request
import os

# Configure model
model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
    params={"max_tokens": 1500, "temperature": 0.7},
)

# Create agent
agent = Agent(
    system_prompt="You are a helpful weather assistant",
    tools=[http_request],
    model=model,
)

# Use agent
response = agent("What's the weather in NYC?")
print(response)
Source: course/aws_strands/01_basic_agent/main.py:40-67

Session Management

Built-in memory for persistent conversations:
from strands import Agent
from strands.session.file_session_manager import FileSessionManager
from pathlib import Path

# Setup session storage
storage_dir = Path("./sessions")
session_manager = FileSessionManager(
    session_id="user_123",
    storage_dir=str(storage_dir),
)

# Agent with memory
agent = Agent(
    model=model,
    session_manager=session_manager,
    system_prompt="You are a friendly assistant",
)

# Conversation is automatically persisted
response1 = agent("My name is Alice")
response2 = agent("What's my name?")  # Remembers "Alice"
Source: course/aws_strands/02_session_management/main.py:22-56

Models

Strands supports multiple model providers via LiteLLM:
from strands.models.litellm import LiteLLMModel
import os

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
    params={
        "max_tokens": 1500,
        "temperature": 0.7,
    },
)

Tools

Strands provides built-in tools and supports custom tools:
from strands_tools import http_request, python_repl

agent = Agent(
    system_prompt="You can make HTTP requests and run Python code",
    tools=[http_request, python_repl],
    model=model,
)

Common Patterns

Pattern 1: Basic Agent with Tools

Simplest pattern for tool-using agents:
import os
from dotenv import load_dotenv
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands_tools import http_request

load_dotenv()

WEATHER_PROMPT = """
You are a weather assistant with HTTP capabilities.
Use the National Weather Service API:
1. Get coordinates: https://api.weather.gov/points/{latitude},{longitude}
2. Use returned forecast URL to get weather data
3. Present information in a clear, user-friendly format
"""

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
    params={"max_tokens": 1500, "temperature": 0.7},
)

agent = Agent(
    system_prompt=WEATHER_PROMPT,
    tools=[http_request],
    model=model,
)

response = agent("Compare temperature in New York and Chicago")
print(response)
Based on: starter_ai_agents/aws_strands_starter/main.py

Pattern 2: Persistent Conversations

Add memory to agents for context retention:
Step 1: Import the Session Manager

from strands.session.file_session_manager import FileSessionManager
from pathlib import Path

Step 2: Configure Storage

base_dir = Path(__file__).parent.resolve()
storage_dir = base_dir / "tmp" / "sessions"

session_manager = FileSessionManager(
    session_id="user_alice_123",
    storage_dir=str(storage_dir),
)

Step 3: Create the Agent with Memory

agent = Agent(
    model=model,
    session_manager=session_manager,
    system_prompt="You are a friendly assistant",
)

Step 4: Use Across Sessions

# First session
print("User: My name is Alice")
response1 = agent("My name is Alice")
print(f"Agent: {response1}")

# Later session (same session_id)
print("\nUser: What's my name?")
response2 = agent("What's my name?")
print(f"Agent: {response2}")  # Remembers "Alice"
Source: course/aws_strands/02_session_management/main.py

Pattern 3: Human-in-the-Loop

Agents that request human input or approval:
import os
from strands import Agent, tool
from strands.models.litellm import LiteLLMModel

@tool
def request_human_input(question: str) -> str:
    """
    Ask the human user for input.
    
    Args:
        question: What to ask the user
    
    Returns:
        User's response
    """
    print(f"\n🤖 Agent asks: {question}")
    return input("👤 Your answer: ")

@tool
def request_approval(action: str, details: str) -> bool:
    """
    Request human approval before taking an action.
    
    Args:
        action: Action to perform
        details: Details about the action
    
    Returns:
        True if approved, False otherwise
    """
    print("\n🤖 Agent requests approval:")
    print(f"   Action: {action}")
    print(f"   Details: {details}")
    response = input("👤 Approve? (yes/no): ")
    return response.lower() in ['yes', 'y']

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
)

agent = Agent(
    system_prompt="""
    You help users complete tasks, but must:
    1. Ask for clarification when needed using request_human_input
    2. Get approval before sensitive actions using request_approval
    """,
    tools=[request_human_input, request_approval],
    model=model,
)

response = agent("Send an email to the team about the project update")
Inspired by: Course Lesson 5 - Human-in-the-Loop Agent

Pattern 4: Multi-Agent with Handoffs

Agents that can hand off work to specialized agents:
from strands import Agent, tool
from strands.models.litellm import LiteLLMModel
import os

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
)

# Specialized agents
tech_agent = Agent(
    system_prompt="You are a technical support specialist.",
    model=model,
)

billing_agent = Agent(
    system_prompt="You are a billing specialist.",
    model=model,
)

# Tools for handoff
@tool
def handoff_to_tech(issue: str) -> str:
    """Transfer to technical support."""
    return tech_agent(f"Handle this tech issue: {issue}")

@tool
def handoff_to_billing(query: str) -> str:
    """Transfer to billing department."""
    return billing_agent(f"Handle this billing query: {query}")

# Orchestrator agent
triage_agent = Agent(
    system_prompt="""
    You are a customer service triage agent.
    Route technical issues to tech support.
    Route billing questions to billing.
    """,
    tools=[handoff_to_tech, handoff_to_billing],
    model=model,
)

response = triage_agent("My account was charged twice")
Inspired by: Course Lesson 6.2 - Swarm Agent Pattern

Pattern 5: MCP Integration

Connect to external tools via Model Context Protocol:
import os
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands.tools.mcp import MCPClient
from mcp import StdioServerParameters, stdio_client

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
)

# Connect to an MCP server (e.g., GitHub) over stdio
github_mcp = MCPClient(lambda: stdio_client(
    StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
        env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.getenv("GITHUB_TOKEN")},
    )
))

# MCP tools must be used within the client's context manager
with github_mcp:
    agent = Agent(
        system_prompt="You can interact with GitHub repositories",
        tools=github_mcp.list_tools_sync(),
        model=model,
    )
    response = agent("List recent issues in owner/repo")
Inspired by: Course Lesson 4 - MCP Agent

AWS Strands Course

The repository includes a comprehensive 8-lesson course:
Lesson 1: Basic Agent

Create your first agent with tools.
Location: course/aws_strands/01_basic_agent/

Lesson 2: Session Management

Add memory and conversation persistence.
Location: course/aws_strands/02_session_management/

Lesson 3: Structured Output

Extract structured data with Pydantic.
Location: course/aws_strands/03_structured_output/

Lesson 4: MCP Integration

Connect to external tools via MCP.
Location: course/aws_strands/04_mcp_agent/

Lesson 5: Human-in-the-Loop

Request human input and approval.
Location: course/aws_strands/05_human_in_the_loop_agent/

Lesson 6: Multi-Agent Patterns

  • Agent as Tools
  • Swarm Pattern
  • Graph-based Workflows
  • Sequential Pipelines
Location: course/aws_strands/06_multi_agent_pattern/

Lesson 7: Observability

OpenTelemetry and Langfuse monitoring.
Location: course/aws_strands/07_observability/

Lesson 8: Guardrails

Safety measures and content filtering.
Location: course/aws_strands/08_guardrails/
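
Lesson 3's structured output pairs an agent with a Pydantic schema. A minimal sketch, assuming the `Agent.structured_output` method and a configured `model` as in Core Concepts (the `WeatherReport` schema is illustrative):

```python
from pydantic import BaseModel

class WeatherReport(BaseModel):
    """Schema the agent is asked to fill in."""
    city: str
    temperature_f: float
    conditions: str

# With a configured agent (see Core Concepts), the call would be roughly:
#   report = agent.structured_output(WeatherReport, "What's the weather in NYC?")
#   print(report.city, report.temperature_f)

# The schema itself validates like any Pydantic model:
report = WeatherReport(city="New York", temperature_f=71.0, conditions="Partly cloudy")
```

The return value is a validated model instance rather than free text, so downstream code can rely on field names and types.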

Real Examples from Repository

Weather Agent

Basic agent with HTTP tools for weather queries

Complete Course

8-lesson progressive course from basics to production

Configuration

Environment Variables

# .env file
NEBIUS_API_KEY=your_nebius_api_key
OPENAI_API_KEY=your_openai_api_key  # If using OpenAI
ANTHROPIC_API_KEY=your_anthropic_api_key  # If using Claude

# For MCP servers
GITHUB_PERSONAL_ACCESS_TOKEN=your_github_token
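
A fail-fast check at startup avoids confusing downstream authentication errors. A stdlib-only sketch (the `check_env` helper is illustrative, not part of Strands):

```python
import os

def check_env(required: list[str]) -> None:
    """Raise early if any required environment variable is unset."""
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

# Example: require the Nebius key before constructing the model
# check_env(["NEBIUS_API_KEY"])
```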

Agent Parameters

  • system_prompt (string, required): Instructions defining the agent’s behavior and capabilities
  • model (Model, required): LiteLLMModel or other model instance
  • tools (list[Tool], default []): Functions and tools available to the agent
  • session_manager (SessionManager, optional): For conversation persistence (FileSessionManager, etc.)

LiteLLMModel Parameters

  • client_args (dict, required): Configuration including api_key
  • model_id (string, required): Model identifier (e.g., "nebius/deepseek-ai/DeepSeek-V3-0324")
  • params (dict, default {}): Model parameters like max_tokens, temperature

Best Practices

Always use session managers for multi-turn conversations:
# ✓ Good: Persistent memory
session_manager = FileSessionManager(
    session_id=f"user_{user_id}",
    storage_dir="./sessions"
)
agent = Agent(model=model, session_manager=session_manager)

# ✗ Bad: No memory between calls
agent = Agent(model=model)  # Forgets after each call

Write detailed system prompts that cover:
  • The agent’s role and capabilities
  • How to use tools
  • Output format expectations
  • Error handling guidance
system_prompt = """
You are a weather assistant with HTTP capabilities.

When retrieving weather:
1. First get coordinates: https://api.weather.gov/points/{lat},{lon}
2. Use the forecast URL from the response
3. Format weather data in a clear, readable way
4. Handle errors gracefully

Always explain weather conditions in user-friendly terms.
"""

Request approval before:
  • Sending emails or messages
  • Making purchases or payments
  • Deleting or modifying data
  • External API calls with side effects
@tool
def request_approval(action: str) -> bool:
    print(f"Agent wants to: {action}")
    return input("Approve? (yes/no): ").lower() == 'yes'

Leverage MCP servers for:
  • GitHub operations
  • Database queries
  • File system access
  • Custom enterprise tools
This provides standardized, secure tool access.

Troubleshooting

Sessions not persisting:
  • Verify storage_dir exists and is writable
  • Use the same session_id across calls
  • Check that session files are being created
  • Ensure session_manager is passed to Agent
# Debug: Check session file
import os
session_file = f"{storage_dir}/{session_id}.json"
print(f"Session file exists: {os.path.exists(session_file)}")
Agent not using a tool:
  • Verify the tool docstring is clear and descriptive
  • Check that the system_prompt mentions when to use the tool
  • Test the tool function independently
  • Enable verbose logging to see tool selection
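
The last point uses standard Python logging. A minimal sketch, assuming the SDK logs under the "strands" namespace:

```python
import logging

# Send debug output to the console so tool-selection decisions are visible
logging.basicConfig(
    level=logging.DEBUG,
    format="%(levelname)s | %(name)s | %(message)s",
)
# The "strands" logger name is an assumption; adjust if the SDK logs elsewhere
logging.getLogger("strands").setLevel(logging.DEBUG)
```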
MCP connection failing:
  • Verify the npm package is installed: npx -y @modelcontextprotocol/server-github
  • Check that environment variables are set correctly
  • Test the MCP server independently
  • Review the server command and args

Model or API key errors:
Verify the LiteLLM model ID format:
# Correct formats:
"nebius/deepseek-ai/DeepSeek-V3-0324"  # Nebius
"gpt-4-turbo-preview"  # OpenAI
"anthropic/claude-3-5-sonnet-20241022"  # Anthropic

# Check API key is set
import os
print(f"API key set: {bool(os.getenv('NEBIUS_API_KEY'))}")
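
LiteLLM generally routes on the provider prefix before the first slash, and bare IDs like "gpt-4-turbo-preview" are resolved without a prefix. A hypothetical helper (not part of Strands or LiteLLM) to sanity-check IDs before constructing the model:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a LiteLLM-style model ID into (provider, model).

    Bare IDs carry no provider prefix, so the provider is
    reported as "" and left for LiteLLM to infer.
    """
    if "/" in model_id:
        provider, rest = model_id.split("/", 1)
        return provider, rest
    return "", model_id

# e.g. split_model_id("nebius/deepseek-ai/DeepSeek-V3-0324")
#   -> ("nebius", "deepseek-ai/DeepSeek-V3-0324")
```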

Next Steps

Take the Course

Work through all 8 lessons for comprehensive understanding

Session Management

Implement persistent memory for your agents

Multi-Agent Patterns

Explore agent handoffs, swarms, and graph workflows

Observability

Integrate OpenTelemetry and Langfuse for production monitoring
