MCP Prompts allow you to use pre-defined prompt templates provided by MCP servers. These prompts can include structured messages, resources, and arguments, making it easy to leverage domain-specific expertise packaged by MCP server developers.

Overview

Prompts are templates that servers expose through the Model Context Protocol. They provide:
  • Structured conversations: Pre-formatted message sequences
  • Resource inclusion: Embedded files, images, or data
  • Parameterization: Dynamic content through arguments
  • Domain expertise: Best practices encoded by server authors
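On the wire, each prompt a server exposes is described by a small definition: a name, a description, and a list of arguments. The sketch below models that shape as plain data (the `simple` prompt and its argument are illustrative, matching the Quick Start example) and shows how required arguments could be checked before requesting the prompt:

```python
# Illustrative prompt definition, shaped like an MCP prompts/list entry.
simple_prompt = {
    "name": "simple",
    "description": "A simple greeting prompt",
    "arguments": [
        {"name": "name", "description": "Name to greet", "required": True},
    ],
}

def missing_arguments(definition: dict, supplied: dict) -> list[str]:
    """Return names of required arguments that were not supplied."""
    return [
        arg["name"]
        for arg in definition.get("arguments", [])
        if arg.get("required") and arg["name"] not in supplied
    ]

missing_arguments(simple_prompt, {})                  # ["name"]
missing_arguments(simple_prompt, {"name": "Alice"})   # []
```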

Quick Start

Applying a Prompt

Use apply_prompt() to load and execute a prompt from an MCP server:
import asyncio
from fast_agent import FastAgent

fast = FastAgent("Prompt Example")

@fast.agent(
    instruction="You are a helpful assistant",
    servers=["prompts"]  # MCP server providing prompts
)
async def main():
    async with fast.run() as agent:
        # Apply a prompt with arguments
        response = await agent.apply_prompt(
            "simple",
            {"name": "Alice"}
        )
        print(response)

if __name__ == "__main__":
    asyncio.run(main())

Interactive Prompt Selection

In interactive mode, use the /prompt command:
> /prompt
# Lists available prompts from all connected servers

> /prompt simple name=Alice
# Applies prompt with arguments

Prompt Modes

Prompts can be applied in two modes:

Template Mode (Persistent)

# Prompt is added to agent's context permanently
response = await agent.apply_prompt(
    "simple",
    {"name": "Alice"},
    as_template=True  # Default
)
Template mode is useful for:
  • Configuring agent behavior
  • Adding domain knowledge
  • Setting persistent context
The prompt remains in history even with use_history=False.

One-Shot Mode (Transient)

# Prompt is used once and not retained
response = await agent.apply_prompt(
    "simple",
    {"name": "Alice"},
    as_template=False
)
One-shot mode executes the prompt immediately and returns the response, but doesn’t add it to the agent’s persistent context.
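The difference between the two modes comes down to what ends up in the agent's message history. A toy model of that bookkeeping (not fast-agent's internals, just an illustration of the contract):

```python
class ToyAgent:
    """Illustrative only: how template vs. one-shot prompts affect history."""

    def __init__(self):
        self.history: list[str] = []

    def apply_prompt(self, messages: list[str], as_template: bool = True) -> str:
        response = f"response to {messages[-1]}"
        if as_template:
            # Template mode: the prompt messages persist in context.
            self.history.extend(messages)
        # One-shot mode: nothing is retained after the response.
        return response

agent = ToyAgent()
agent.apply_prompt(["hello Alice"], as_template=False)
print(agent.history)  # [] — one-shot leaves history untouched
agent.apply_prompt(["hello Alice"], as_template=True)
print(agent.history)  # ["hello Alice"] — template persists
```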

Working with Arguments

Simple Arguments

# Single argument
await agent.apply_prompt("greeting", {"name": "Bob"})

# Multiple arguments
await agent.apply_prompt(
    "personalized_report",
    {
        "user_name": "Alice",
        "report_type": "monthly",
        "date": "2024-03"
    }
)

Prompts with Resources

Prompts can include embedded resources:
@fast.agent(servers=["prompts"])
async def main():
    async with fast.run() as agent:
        # Prompt includes attached files/resources
        response = await agent.apply_prompt("with_attachment")
The prompt server handles resource loading automatically.
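Under the hood, a prompt message carrying a resource uses MCP's embedded-resource content shape rather than plain text. A sketch of such a message as plain data, with placeholder URI and content:

```python
# Sketch of an MCP prompt message with an embedded text resource.
# The URI and text are placeholders; a real server fills these in.
message_with_resource = {
    "role": "user",
    "content": {
        "type": "resource",
        "resource": {
            "uri": "file:///example/report.txt",
            "mimeType": "text/plain",
            "text": "Example report contents",
        },
    },
}
```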

Namespaced Prompts

When multiple servers provide prompts, use namespacing:
# Specify server namespace explicitly
response = await agent.apply_prompt(
    "simple",
    namespace="prompts"
)

# Or use namespaced format
response = await agent.apply_prompt("prompts:simple")

Creating Custom Prompts

Prompt Server Configuration

Define prompts in your MCP server:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Custom Prompts")

@mcp.prompt()
async def code_review():
    """Prompt for code review assistance"""
    return [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "Review this code for best practices and potential issues."
            }
        }
    ]

@mcp.prompt()
async def bug_analysis(severity: str = "medium"):
    """Analyze a bug report"""
    return [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": f"Analyze this {severity} severity bug and suggest fixes."
            }
        }
    ]

File-Based Prompts

Store prompts as files for easier management:
# prompts/greeting.yaml
name: greeting
description: A friendly greeting prompt
arguments:
  - name: user_name
    type: string
    required: true
messages:
  - role: user
    content: "Hello {{user_name}}! How can I help you today?"
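Rendering a file-based prompt substitutes the `{{...}}` placeholders with the supplied argument values. fast-agent handles this for you; a minimal sketch of the substitution itself:

```python
import re

def render(template: str, args: dict[str, str]) -> str:
    """Replace {{name}} placeholders with values from args.

    Unknown placeholders are left intact rather than raising.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: args.get(m.group(1), m.group(0)),
        template,
    )

render("Hello {{user_name}}! How can I help you today?", {"user_name": "Alice"})
# → "Hello Alice! How can I help you today?"
```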

Multi-Turn Prompts

Prompts can define complete conversation sequences:
@mcp.prompt()
async def multiturn():
    """A multi-turn conversation template"""
    return [
        {
            "role": "user",
            "content": {"type": "text", "text": "What is Python?"}
        },
        {
            "role": "assistant",
            "content": {"type": "text", "text": "Python is a programming language..."}
        },
        {
            "role": "user",
            "content": {"type": "text", "text": "Show me an example."}
        }
    ]
Usage:
response = await agent.apply_prompt("multiturn")
# Agent receives the entire conversation context

Prompt Discovery

List Available Prompts

# In code
from fast_agent.mcp import list_prompts

prompts = await list_prompts(agent)
for prompt in prompts:
    print(f"{prompt.name}: {prompt.description}")

Interactive Discovery

# In interactive mode
> /prompt

Available prompts:
1. simple - A simple greeting prompt
2. code_review - Review code for quality
3. bug_analysis - Analyze bug reports

Select prompt number or type /prompt <name>

Integration Patterns

Fine-Tuned Responders

Combine prompts with use_history=False for specialized agents:
@fast.agent(
    name="specialist",
    instruction="Base instruction",
    use_history=False,  # No conversation history
    servers=["prompts"]
)
async def main():
    async with fast.run() as agent:
        # Apply domain-specific prompt as template
        await agent.specialist.apply_prompt(
            "domain_expertise",
            as_template=True
        )
        
        # Agent now acts as fine-tuned specialist
        result = await agent.specialist("Analyze this case")

Workflow Initialization

@fast.agent(name="initialized", servers=["prompts"])
async def workflow():
    async with fast.run() as agent:
        # Initialize agent with prompt
        await agent.initialized.apply_prompt("setup_context")
        
        # Continue with initialized context
        await agent.initialized.interactive()

Chain with Prompts

@fast.agent(name="agent1", servers=["prompts"])
@fast.agent(name="agent2", servers=["prompts"])
@fast.chain(
    name="chain",
    sequence=["agent1", "agent2"]
)
async def main():
    async with fast.run() as agent:
        # Initialize each agent with specific prompts
        await agent.agent1.apply_prompt("step1_setup")
        await agent.agent2.apply_prompt("step2_setup")
        
        # Run chain with initialized context
        await agent.chain("Process this data")

Best Practices

Choose clear, descriptive names for prompts:
# Good
code_review_python
bug_analysis_security
test_case_generation

# Avoid
prompt1
test
temp
Provide clear descriptions for all arguments:
@mcp.prompt()
async def generate_report(
    report_type: str,  # "daily", "weekly", or "monthly"
    include_charts: bool = True,  # Include visualizations
    format: str = "pdf"  # Output format: "pdf", "html", or "md"
):
    ...
Consider versioning for breaking changes:
@mcp.prompt(name="analysis_v2")
async def analysis_v2(...):
    """Updated analysis with improved structure"""
    ...
Test prompts with various argument combinations:
# Test with minimal args
await agent.apply_prompt("report", {"type": "daily"})

# Test with all args
await agent.apply_prompt("report", {
    "type": "daily",
    "include_charts": False,
    "format": "html"
})

Advanced Usage

Prompt Result Objects

Work with prompt results directly:
from fast_agent.mcp import get_prompt

# Get prompt without applying
prompt_result = await get_prompt(agent, "simple", {"name": "Alice"})

# Inspect prompt structure
print(prompt_result.description)
print(prompt_result.messages)

# Apply later
response = await agent.apply_prompt(prompt_result, as_template=True)

Dynamic Prompt Selection

def select_prompt(task_type: str) -> str:
    """Select appropriate prompt based on task"""
    prompts = {
        "code": "code_review",
        "bug": "bug_analysis",
        "test": "test_generation"
    }
    return prompts.get(task_type, "general")

@fast.agent(servers=["prompts"])
async def main():
    async with fast.run() as agent:
        task = "code"
        prompt_name = select_prompt(task)
        response = await agent.apply_prompt(prompt_name)

Prompt Composition

# Apply multiple prompts to build complex context
await agent.apply_prompt("domain_knowledge", as_template=True)
await agent.apply_prompt("coding_standards", as_template=True)
await agent.apply_prompt("project_context", as_template=True)

# Agent now has rich, composed context
result = await agent("Review this pull request")

Troubleshooting

If prompts aren't found, first ensure the MCP server is properly configured:
# fastagent.config.yaml
mcp:
  servers:
    prompts:
      command: "uvx"
      args: ["mcp-server-prompts"]
Then verify the server connection:
fast-agent go --server prompts
> /tools list
If a prompt fails with a missing-argument error, check its definition for required arguments:
# List prompt details
prompt = await get_prompt(agent, "prompt_name")
print(prompt.arguments)
Use explicit namespacing when multiple servers provide the same prompt:
# Explicit namespace
await agent.apply_prompt("report", namespace="server1")
await agent.apply_prompt("report", namespace="server2")

Related Topics

  • Multimodal Support: Include images, PDFs, and videos in prompts
  • MCP Servers: Configure and manage MCP servers
  • Sampling: Use prompts with sampling requests
  • Skills: Package prompts as reusable skills