
Extending Agents with External Tools (MCP)

In this lesson, you’ll learn how to make your agent dramatically more powerful by connecting it to external tool servers using the Model Context Protocol (MCP). MCP lets an agent discover and use new tools at runtime without any changes to its own code. We’ll connect our agent to public servers that provide AWS documentation search, web search, and accommodation booking, turning a general-purpose agent into a specialized expert.

What You’ll Learn

  • Dynamic Tool Discovery: How agents discover and use capabilities at runtime
  • Multi-Server Integration: Connecting to multiple MCP servers simultaneously
  • Real-time Data Access: Using external tools to access current information
  • Error Handling: Robust error handling for external connections

Key Concepts

1. MCP Client (MCPClient)

The bridge between your agent and external tool servers. It is configured to start servers with uvx or npx, which download and run packages without manual installation.
mcp_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="uvx",
            args=["awslabs.aws-documentation-mcp-server@latest"]
        )
    )
)

2. Dynamic Tool Discovery

The magic happens inside the with mcp_client: block. The client:
  1. Starts the server process
  2. Connects to it via stdio
  3. Discovers available tools using mcp_client.list_tools_sync()
  4. Manages the server lifecycle
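The same lifecycle can be sketched with a plain Python context manager standing in for a real server. This is a conceptual illustration only: FakeToolServer and its tool list are invented for this sketch and are not part of the MCP API.

```python
# Toy illustration of the MCP client lifecycle: start, discover, clean up.
# FakeToolServer is an invented stand-in for a real MCP server.

class FakeToolServer:
    def __enter__(self):
        # 1./2. "Start" the server process and connect to it
        self.running = True
        return self

    def __exit__(self, *exc):
        # 4. Manage the lifecycle: shut the server down on exit
        self.running = False

    def list_tools_sync(self):
        # 3. Discovery: tool names and schemas are only known at runtime
        return [
            {"name": "search_docs", "description": "Search documentation"},
            {"name": "read_page", "description": "Fetch a documentation page"},
        ]

with FakeToolServer() as server:
    tools = server.list_tools_sync()
    print([t["name"] for t in tools])  # the agent only learns these names now
```

The key point the sketch captures is that nothing about the tools is hard-coded: everything the agent knows about them comes from the discovery call inside the with block.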

3. Agent with Dynamic Tools

Tools fetched from servers are passed directly to the Agent constructor. The agent learns about tools and their functions at runtime, without needing prior knowledge.

4. Tool-Powered Expertise

When asked questions, the agent’s LLM brain:
  • Sees available tools from connected servers
  • Intelligently decides which tools to use
  • Finds accurate, up-to-date information
  • Avoids relying on potentially outdated training data

Prerequisites

  • Python 3.10+
  • Node.js (for npx-based MCP servers)
  • Internet connection (for downloading MCP servers)

# For the language model
NEBIUS_API_KEY="your_nebius_api_key"

# For web search capabilities (multiple_mcp.py)
EXA_API_KEY="your_exa_api_key"

# Install dependencies
pip install "strands-agents[litellm]" mcp python-dotenv

# Or with uv (faster)
uv pip install "strands-agents[litellm]" mcp python-dotenv

Implementation: Single MCP Server

Step 1: Import Dependencies

import os
from dotenv import load_dotenv
from mcp import StdioServerParameters, stdio_client
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands.tools.mcp import MCPClient

load_dotenv()

Step 2: Validate API Key

nebius_api_key = os.getenv("NEBIUS_API_KEY")
if not nebius_api_key:
    raise ValueError("NEBIUS_API_KEY environment variable is required")

Step 3: Configure the Model

model = LiteLLMModel(
    client_args={"api_key": nebius_api_key},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
)

Step 4: Set Up MCP Client

# Set up MCP client to connect to AWS documentation server
mcp_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="uvx",
            args=["awslabs.aws-documentation-mcp-server@latest"]
        )
    )
)
The uvx command automatically downloads and runs the MCP server package. No manual installation needed!
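Because the server is launched on demand, a missing launcher is a common first failure. A quick sanity check that uvx and npx are on your PATH can save a confusing startup error (missing_commands is just an illustrative helper, not part of any SDK):

```python
import shutil

def missing_commands(cmds):
    """Return the subset of commands not found on PATH."""
    return [c for c in cmds if shutil.which(c) is None]

missing = missing_commands(["uvx", "npx"])
if missing:
    print(f"Install these launchers before running: {', '.join(missing)}")
```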

Step 5: Create Agent with MCP Tools

# Create agent with AWS documentation tools
with mcp_client:
    aws_tools = mcp_client.list_tools_sync()
    print(f"Successfully loaded {len(aws_tools)} tools from the MCP server.")
    
    agent = Agent(
        model=model,
        tools=aws_tools,
        system_prompt=(
            "You are an expert on Amazon Web Services. "
            "Use the provided tools to answer questions about AWS services "
            "based on the official documentation. Always provide accurate, "
            "up-to-date information from the AWS docs."
        ),
    )
    
    # Query the agent
    user_query = "What is the maximum invocation payload size for AWS Lambda?"
    print("\n--- Querying AWS Documentation ---")
    print(f"User Query: {user_query}\n")
    
    response = agent(user_query)
    
    print("--- Agent Response ---")
    print(response)

Running the Example

  1. Set up environment: create a .env file:

     NEBIUS_API_KEY=your_api_key_here

  2. Install dependencies:

     pip install "strands-agents[litellm]" mcp python-dotenv

  3. Run the script:

     python main.py

Expected Output

Successfully loaded 3 tools from the MCP server.

--- Querying AWS Documentation ---
User Query: What is the maximum invocation payload size for AWS Lambda?

--- Agent Response ---
According to the AWS Lambda documentation, the maximum invocation payload 
size (request and response) is 6 MB for synchronous invocations and 256 KB 
for asynchronous invocations.
The agent used the MCP tools to search the AWS documentation and provided an accurate, up-to-date answer!

Multiple MCP Servers

You can connect to multiple MCP servers simultaneously to give your agent access to diverse tools.

Implementation

import os
from dotenv import load_dotenv
from mcp import StdioServerParameters, stdio_client
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands.tools.mcp import MCPClient

load_dotenv()

# Validate required environment variables
required_vars = ["NEBIUS_API_KEY", "EXA_API_KEY"]
missing_vars = [var for var in required_vars if not os.getenv(var)]
if missing_vars:
    raise ValueError(f"Missing required variables: {', '.join(missing_vars)}")

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
)

# MCP Client 1: Web search
web_search_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-exa"],
            env={"EXA_API_KEY": os.getenv("EXA_API_KEY")}
        )
    )
)

# MCP Client 2: Accommodation booking
accommodation_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="uvx",
            args=["mcp-server-accommodation"]
        )
    )
)

# Combine tools from multiple servers
with web_search_client, accommodation_client:
    web_tools = web_search_client.list_tools_sync()
    accommodation_tools = accommodation_client.list_tools_sync()
    
    all_tools = web_tools + accommodation_tools
    print(f"Loaded {len(all_tools)} tools from 2 MCP servers")
    
    agent = Agent(
        model=model,
        tools=all_tools,
        system_prompt="You are a helpful travel assistant with access to web search and accommodation booking tools.",
    )
    
    # Example queries
    queries = [
        "What's the fastest way to get to Barcelona from London?",
        "What listings are available in Cape Town for 2 people for 3 nights?"
    ]
    
    for query in queries:
        print(f"\n--- Query: {query} ---")
        response = agent(query)
        print(f"Response: {response}\n")
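One caveat when concatenating tool lists as above: two servers can expose tools with the same name, and a clash can silently shadow one of them. A standalone sketch of a pre-flight check, using plain name lists in place of real tool objects (duplicate_names is an illustrative helper):

```python
from collections import Counter

def duplicate_names(tool_lists):
    """Return tool names exposed by more than one server, sorted."""
    # De-duplicate within each server first, then count across servers
    counts = Counter(name for tools in tool_lists for name in set(tools))
    return sorted(name for name, n in counts.items() if n > 1)

web_tools = ["web_search", "fetch_page"]
booking_tools = ["search_listings", "fetch_page"]  # clashes with the web server

clashes = duplicate_names([web_tools, booking_tools])
if clashes:
    print(f"Warning: duplicate tool names across servers: {clashes}")
```

In a real script you would extract each tool's name from the objects returned by list_tools_sync() before passing the lists to a check like this.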

Benefits of MCP

  • Extensibility: Add new capabilities without code changes
  • Maintainability: Update tools on servers; agents gain the new abilities instantly
  • Modularity: Mix and match tools from different servers
  • Real-time Data: Access current information from external sources
  • Specialization: Transform general agents into domain experts
  • Reusability: Share MCP servers across multiple agents

Available MCP Servers

Here are some popular MCP servers you can use:
# AWS Documentation: search official AWS documentation
command="uvx"
args=["awslabs.aws-documentation-mcp-server@latest"]

# Exa: semantic web search
command="npx"
args=["-y", "@modelcontextprotocol/server-exa"]
env={"EXA_API_KEY": "your_key"}

# GitHub: access GitHub repositories and issues
command="npx"
args=["-y", "@modelcontextprotocol/server-github"]
env={"GITHUB_PERSONAL_ACCESS_TOKEN": "your_token"}

# Filesystem: read and write files
command="npx"
args=["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"]

# PostgreSQL: query databases
command="npx"
args=["-y", "@modelcontextprotocol/server-postgres"]
env={"DATABASE_URL": "postgresql://..."}
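If you use several of these, it can help to keep the launch details as data and validate required environment variables before starting anything. This is a sketch under the assumption that each entry mirrors the StdioServerParameters fields shown above; SERVERS and unmet_env_keys are illustrative names, and only the validation step actually runs here:

```python
import os

# Hypothetical config format mirroring the StdioServerParameters fields above.
SERVERS = {
    "aws-docs": {"command": "uvx",
                 "args": ["awslabs.aws-documentation-mcp-server@latest"],
                 "env_keys": []},
    "github":   {"command": "npx",
                 "args": ["-y", "@modelcontextprotocol/server-github"],
                 "env_keys": ["GITHUB_PERSONAL_ACCESS_TOKEN"]},
}

def unmet_env_keys(servers):
    """Map server name -> required env vars that are not set."""
    return {name: [k for k in cfg["env_keys"] if not os.getenv(k)]
            for name, cfg in servers.items()
            if any(not os.getenv(k) for k in cfg["env_keys"])}

print(unmet_env_keys(SERVERS))
```

Failing fast on missing credentials gives a clearer error than letting a server process start and then die mid-conversation.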

Error Handling

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

try:
    # Validate configuration
    if not os.getenv("NEBIUS_API_KEY"):
        raise ValueError("NEBIUS_API_KEY is required")
    
    # mcp_client, model, and query are defined as in the earlier examples
    with mcp_client:
        tools = mcp_client.list_tools_sync()
        logger.info(f"Loaded {len(tools)} tools")
        
        # Create and use agent
        agent = Agent(model=model, tools=tools)
        response = agent(query)
        
except ConnectionError as e:
    logger.error(f"Failed to connect to MCP server: {e}")
except ValueError as e:
    logger.error(f"Configuration error: {e}")
except Exception as e:
    logger.error(f"Unexpected error: {e}")
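Startup failures are often transient (for example, a package download timing out), so a small retry helper can make the connection step more robust. A generic sketch, demonstrated with a deliberately flaky stand-in function; with_retries and flaky_connect are illustrative names, not SDK APIs:

```python
import time

def with_retries(fn, attempts=3, delay=0.1):
    """Call fn(), retrying on ConnectionError with a growing delay."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of attempts: surface the error
            time.sleep(delay * attempt)

# Deliberately flaky stand-in for "connect and list tools"
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server not ready")
    return ["tool_a", "tool_b"]

print(with_retries(flaky_connect))  # succeeds on the third attempt
```

In a real script, fn would be the code that enters the client context and calls list_tools_sync().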

Try It Yourself

Connect to the GitHub MCP server:
github_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-github"],
            env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.getenv("GITHUB_TOKEN")}
        )
    )
)
Tailor the system prompt for specific use cases:
system_prompt = """
You are a DevOps expert with access to AWS documentation and GitHub.
When answering questions:
1. First check AWS docs for service information
2. Then search GitHub for implementation examples
3. Provide complete, actionable answers
"""
See what tools are available:
with mcp_client:
    tools = mcp_client.list_tools_sync()
    for tool in tools:
        print(f"Tool: {tool.tool_name}")
        print(f"Spec: {tool.tool_spec}\n")

What You Learned

  • How to connect agents to MCP servers for dynamic tool discovery
  • How to use MCPClient to manage external tool servers
  • How to combine tools from multiple MCP servers
  • How to handle errors in MCP connections
  • How to transform general agents into specialized experts

Next Steps

Your agent can now access external tools! But what if you need human oversight for critical decisions? In the next lesson, you’ll learn how to implement human-in-the-loop patterns where agents can pause and request human approval.

Lesson 05: Human-in-the-Loop

Learn how to add human approval and oversight to your agent workflows

Resources

Video Tutorial

Watch Lesson 04 on YouTube

MCP Documentation

Learn more about Model Context Protocol
