Your First AI Agent
Welcome to the first lesson of the AWS Strands course! Here, you’ll learn the fundamentals of creating a simple but powerful AI agent. We’ll build a weather assistant that can understand a question, fetch live data from an external API, and provide a helpful answer. This will introduce you to the core concepts of the Strands SDK:
- Agent Creation: how to instantiate an agent
- Model Configuration: how to connect to an LLM
- Tool Usage: how to give your agent abilities
Key Concepts
1. System Prompt
The system prompt is the agent’s constitution. It’s a detailed set of instructions that defines:
- The agent’s personality and behavior
- Its capabilities and limitations
- The exact steps it should follow
2. Model Configuration
LiteLLMModel is a bridge to the language model that acts as the agent’s “brain”. Strands uses litellm under the hood, which means you can easily switch between dozens of LLM providers just by changing the model_id.
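For example, switching providers is just a matter of using a different model_id string in litellm’s provider/model-name format. The specific IDs below are illustrative assumptions; check your provider’s model list and credentials:

```python
# Provider switching via model_id (litellm format: "provider/model-name").
# These IDs are illustrative examples, not a guarantee of availability.
MODEL_IDS = {
    "openai": "openai/gpt-4o",
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",
    "bedrock": "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
}
```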
3. Tool Usage
Tools are the agent’s hands and eyes. By giving the agent the http_request tool, we grant it the ability to access the internet. The agent’s LLM brain decides when to call this tool by following the instructions in the system prompt.
4. Agent Instantiation
The Agent class brings everything together. We provide it with:
- System prompt (instructions)
- Model (the LLM brain)
- Tools (capabilities)
5. Invocation
Calling the agent is as simple as calling a function: weather_agent(user_query). The agent:
- Takes the query
- Thinks step-by-step using the LLM
- Uses its tools as needed
- Returns a final, synthesized answer
Implementation
Step 1: Import Dependencies
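A sketch of the imports this lesson uses, assuming the strands-agents and strands-tools packages are installed (module paths may differ between SDK versions):

```python
# Core agent class, the litellm-backed model wrapper, and the HTTP tool.
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands_tools import http_request
```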
Step 2: Define the System Prompt
The system prompt provides step-by-step instructions for using the National Weather Service API. This guides the agent through the two-step process of getting grid coordinates and then fetching the forecast.
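A sketch of such a prompt; the exact wording is illustrative, but the two-step flow (points endpoint first, then the forecast URL it returns) matches how the National Weather Service API works:

```python
# System prompt walking the agent through the two-step NWS API flow.
WEATHER_SYSTEM_PROMPT = """You are a helpful weather assistant.

To answer weather questions, follow these steps:
1. Convert the location to latitude and longitude.
2. Call https://api.weather.gov/points/{latitude},{longitude} and read the
   "forecast" URL from the response's "properties" object.
3. Call that forecast URL to get the forecast periods.
4. Summarize the relevant period in plain, friendly language.

The National Weather Service API only covers US locations, so politely
decline questions about other countries."""
```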
Step 3: Create the Agent
Model Configuration Options
- client_args: Dictionary containing API keys and other client-specific settings
- model_id: The model identifier (format: provider/model-name)
- params: Model parameters such as:
  - max_tokens: Maximum response length
  - temperature: Creativity (0.0 = deterministic, 1.0 = creative)
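Putting those options together, a minimal sketch: the API key and model_id are placeholders, and WEATHER_SYSTEM_PROMPT stands in for the prompt defined in Step 2.

```python
# Configure the LLM "brain"; swap providers by changing model_id.
model = LiteLLMModel(
    client_args={"api_key": "YOUR_API_KEY"},  # placeholder credential
    model_id="openai/gpt-4o",                 # illustrative model choice
    params={"max_tokens": 2000, "temperature": 0.3},
)

# Combine prompt, model, and tools into the agent.
weather_agent = Agent(
    model=model,
    system_prompt=WEATHER_SYSTEM_PROMPT,
    tools=[http_request],
)
```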
Step 4: Use the Agent
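Invocation is a plain function call; this sketch assumes the weather_agent built in Step 3:

```python
# The agent plans, calls http_request as needed, and returns a final answer.
response = weather_agent("Compare the weather in New York and Chicago.")
print(response)
```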
Running the Example
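Illustrative commands for running the lesson script; the package names and the filename weather_agent.py are assumptions, and the environment variable depends on which provider your model_id uses:

```shell
# Install the SDK, tools, and litellm (package names may vary by version).
pip install strands-agents strands-tools litellm
export OPENAI_API_KEY="sk-..."   # key for whichever provider you configured
python weather_agent.py          # assumed filename for the lesson script
```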
Expected Output
The agent will:
- Make HTTP requests to the National Weather Service API
- Fetch weather data for both New York and Chicago
- Compare the temperatures
- Return a human-readable summary
Try It Yourself
Experiment 1: Change the Location
Try asking about different US cities:
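Some example queries (city choices are illustrative; the NWS API covers US locations only, and weather_agent is the agent from Step 3):

```python
# Queries to try against the weather agent.
CITY_QUERIES = [
    "What's the weather in Seattle?",
    "Is it raining in Miami right now?",
    "Compare the weather in Denver and Phoenix.",
]
# for query in CITY_QUERIES:
#     print(weather_agent(query))
```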
Experiment 2: Use Coordinates
Try using latitude and longitude:
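Because the NWS points endpoint accepts latitude and longitude directly, the agent can skip geocoding entirely. A sketch (weather_agent is the agent from Step 3):

```python
# Coordinates feed straight into https://api.weather.gov/points/{lat},{lon}.
coordinate_query = "What's the forecast at latitude 40.7128, longitude -74.0060?"
# response = weather_agent(coordinate_query)
```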
Experiment 3: Customize the Prompt
Modify the system prompt to make the agent more concise or more detailed:
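One low-risk way to change the agent’s style is to append instructions rather than rewrite the tool-usage steps. A sketch, reusing model, http_request, and WEATHER_SYSTEM_PROMPT from the earlier steps (the appended wording is illustrative):

```python
# A terser variant: same tools and API steps, different response style.
concise_agent = Agent(
    model=model,
    system_prompt=WEATHER_SYSTEM_PROMPT
    + "\nKeep every answer to one or two sentences.",
    tools=[http_request],
)
```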
Experiment 4: Add More Tools
Import additional tools from strands_tools:
What You Learned
- How to create and configure an AI agent with AWS Strands
- How to connect an agent to an LLM using LiteLLMModel
- How to give an agent tools like http_request
- How to write effective system prompts
- How to invoke an agent and get responses
Next Steps
You’ve created your first AI agent! But it has no memory—it forgets everything after each response. In the next lesson, you’ll learn how to give your agent persistent memory using session management.
Lesson 02: Session Management
Learn how to give your agent memory so it can maintain context across conversations
Resources
Video Tutorial
Watch Lesson 01 on YouTube
Strands Documentation
Read the official docs