Your First AI Agent

Welcome to the first lesson of the AWS Strands course! Here, you’ll learn the fundamentals of creating a simple but powerful AI agent. We’ll build a weather assistant that can understand a question, fetch live data from an external API, and provide a helpful answer. This will introduce you to the core concepts of the Strands SDK:

  • Agent Creation: how to instantiate an agent
  • Model Configuration: how to connect to an LLM
  • Tool Usage: how to give your agent abilities

Key Concepts

1. System Prompt

The system prompt is the agent’s constitution. It’s a detailed set of instructions that defines:
  • The agent’s personality and behavior
  • Its capabilities and limitations
  • The exact steps it should follow
A well-crafted system prompt is crucial for reliable agent behavior. Be specific and detailed!

2. Model Configuration

LiteLLMModel is a bridge to the language model that acts as the agent’s “brain”. Strands uses litellm under the hood, which means you can easily switch between dozens of LLM providers just by changing the model_id.
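For example, pointing the same configuration at a different provider is just a matter of changing the model_id and the matching API key. This is a hypothetical sketch: the OpenAI model name and the OPENAI_API_KEY environment variable are assumptions, not part of this lesson's setup.

```python
import os
from strands.models.litellm import LiteLLMModel

# Hypothetical sketch: same shape as the lesson's config, different provider.
# Only model_id and the API key change; litellm routes by the
# "provider/model-name" prefix.
openai_model = LiteLLMModel(
    client_args={"api_key": os.getenv("OPENAI_API_KEY")},  # assumed env var
    model_id="openai/gpt-4o-mini",  # assumed litellm-supported identifier
    params={"max_tokens": 1500, "temperature": 0.7},
)
```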

3. Tool Usage

Tools are the agent’s hands and eyes. By giving the agent the http_request tool, we grant it the ability to access the internet. The LLM decides when and how to call this tool so it can follow the instructions in the system prompt.

4. Agent Instantiation

The Agent class brings everything together. We provide it with:
  • System prompt (instructions)
  • Model (the LLM brain)
  • Tools (capabilities)

5. Invocation

Calling the agent is as simple as calling a function: weather_agent(user_query). The agent:
  1. Takes the query
  2. Thinks step-by-step using the LLM
  3. Uses its tools as needed
  4. Returns a final, synthesized answer
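The loop above can be sketched in plain Python. This is a toy illustration, not the Strands internals: llm_step stands in for a real model call, and the tool names are made up.

```python
# Toy agent loop: repeatedly ask the "LLM" for the next action until it
# produces a final answer. llm_step is a stand-in for a real model call.
def toy_agent(query, tools, llm_step):
    scratchpad = [("user", query)]
    while True:
        action = llm_step(scratchpad)          # "think step-by-step"
        if action["type"] == "final":          # done: return the answer
            return action["text"]
        result = tools[action["tool"]](**action["args"])  # use a tool
        scratchpad.append(("tool", result))    # feed the result back in

# Stubbed example: one tool call, then a final answer.
def fake_llm(scratchpad):
    if len(scratchpad) == 1:
        return {"type": "tool", "tool": "get_temp", "args": {"city": "NYC"}}
    return {"type": "final", "text": f"It is {scratchpad[-1][1]}F in NYC."}

answer = toy_agent("Weather in NYC?", {"get_temp": lambda city: 61}, fake_llm)
print(answer)  # It is 61F in NYC.
```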

Implementation

Step 1: Import Dependencies

import os
from dotenv import load_dotenv
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands_tools import http_request

# Load environment variables from a .env file
load_dotenv()

Step 2: Define the System Prompt

WEATHER_SYSTEM_PROMPT = """You are a friendly and helpful weather assistant with HTTP capabilities.

Your primary function is to provide accurate weather forecasts for locations in the United States by using the National Weather Service API.

Follow these steps to fulfill a user's request:
1. First, if you don't have grid coordinates, use the points API endpoint to get them.
   - For latitude and longitude: https://api.weather.gov/points/{latitude},{longitude}
   - Note: the points endpoint accepts only latitude/longitude pairs, so resolve a city name or zipcode to coordinates first.
2. The points API will return a `forecast` URL. Use this URL to make a second HTTP request to get the actual weather forecast.
3. Process the forecast data and present it to the user in a clear, easy-to-understand format.

When displaying your response:
- Highlight key information like temperature, precipitation, and any weather alerts.
- Explain technical terms in simple language.
- If you encounter an error, apologize and explain that you couldn't retrieve the weather information.
"""
The system prompt provides step-by-step instructions for using the National Weather Service API. This guides the agent through the two-step process of getting grid coordinates and then fetching the forecast.
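The two-step flow can be illustrated in plain Python. The helper names below are made up for this sketch, and the sample points response is abbreviated (the gridpoint values are illustrative, not live data).

```python
# Step 1: build the points URL that resolves coordinates to grid metadata.
def points_url(latitude: float, longitude: float) -> str:
    """Build the first-step URL for the National Weather Service points API."""
    return f"https://api.weather.gov/points/{latitude},{longitude}"

# Step 2: the points response contains the forecast URL for the second request.
def extract_forecast_url(points_response: dict) -> str:
    """Pull the second-step forecast URL out of a points API response."""
    return points_response["properties"]["forecast"]

# Abbreviated sample points response (gridpoint values are illustrative):
sample = {
    "properties": {
        "forecast": "https://api.weather.gov/gridpoints/OKX/33,35/forecast"
    }
}

print(points_url(40.7128, -74.0060))
print(extract_forecast_url(sample))
```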

Step 3: Create the Agent

def create_weather_agent() -> Agent:
    """
    Creates and configures a weather-focused agent.
    
    Returns:
        An Agent instance configured with a model, system prompt, and tools.
    """
    # Configure the language model (LLM) that will power the agent
    model = LiteLLMModel(
        client_args={
            "api_key": os.getenv("NEBIUS_API_KEY"),
        },
        model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
        params={
            "max_tokens": 1500,
            "temperature": 0.7,
        },
    )
    
    # Create the agent instance
    weather_agent = Agent(
        system_prompt=WEATHER_SYSTEM_PROMPT,
        tools=[http_request],  # Grant the agent HTTP capabilities
        model=model,
    )
    return weather_agent
The key configuration options:
  • client_args: Dictionary of API keys and other client-specific settings
  • model_id: The model identifier (format: provider/model-name)
  • params: Model parameters such as:
    • max_tokens: Maximum response length
    • temperature: Sampling randomness (0.0 = deterministic, 1.0 = creative)

Step 4: Use the Agent

def main():
    """
    Main function to run the weather agent.
    """
    # Create the weather agent
    weather_agent = create_weather_agent()
    
    # Define a user query
    user_query = "Compare the temperature in New York, NY and Chicago, IL this weekend."
    
    # Invoke the agent with the query and get the response
    print(f"User Query: {user_query}\n")
    response = weather_agent(user_query)
    
    # Print the agent's final response
    print("Weather Agent Response:")
    print(response)

if __name__ == "__main__":
    main()

Running the Example

1. Set up the environment. Create a .env file with your API key:

NEBIUS_API_KEY=your_api_key_here

2. Install dependencies:

pip install strands-agents strands-agents-tools python-dotenv

3. Run the script:

python main.py

Expected Output

The agent will:
  1. Make HTTP requests to the National Weather Service API
  2. Fetch weather data for both New York and Chicago
  3. Compare the temperatures
  4. Return a human-readable summary
User Query: Compare the temperature in New York, NY and Chicago, IL this weekend.

Weather Agent Response:
This weekend, New York City is expected to have temperatures ranging from 
52°F to 61°F, while Chicago will see cooler weather with temperatures between 
45°F and 54°F. New York will be about 7-9 degrees warmer than Chicago.

Try It Yourself

Try asking about different US cities:
user_query = "What's the weather like in San Francisco, CA?"
Try using latitude and longitude:
user_query = "What's the weather at coordinates 40.7128, -74.0060?"
Modify the system prompt to make the agent more concise or more detailed:
WEATHER_SYSTEM_PROMPT = """You are a weather assistant. 
Always respond in exactly 2 sentences: one for temperature, one for conditions.
...
"""
Import additional tools from strands_tools:
from strands_tools import http_request, retrieve

weather_agent = Agent(
    system_prompt=WEATHER_SYSTEM_PROMPT,
    tools=[http_request, retrieve],  # Multiple tools!
    model=model,
)
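A tool is ultimately just a well-documented function: the name, typed parameters, and docstring are what the LLM reads to decide when to call it. Here is a plain-Python sketch of a candidate tool; the Strands-specific registration (e.g. a tool decorator) is assumed rather than demonstrated.

```python
# A candidate tool: a plain function with a clear name, typed parameters,
# and a docstring the LLM can read to decide when to call it.
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

print(round(fahrenheit_to_celsius(61), 1))  # 16.1
```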

What You Learned

  • How to create and configure an AI agent with AWS Strands
  • How to connect an agent to an LLM using LiteLLMModel
  • How to give an agent tools like http_request
  • How to write effective system prompts
  • How to invoke an agent and get responses

Next Steps

You’ve created your first AI agent! But it has no memory—it forgets everything after each response. In the next lesson, you’ll learn how to give your agent persistent memory using session management.

Lesson 02: Session Management

Learn how to give your agent memory so it can maintain context across conversations

Resources

Video Tutorial

Watch Lesson 01 on YouTube

Strands Documentation

Read the official docs
