Overview

The Prompt APIs provide tools for constructing conversation messages with rich MCP content, including text, images, and embedded resources. They simplify working with multi-modal content in agent conversations.

Prompt Class

Creating Messages

The Prompt class provides static methods to create messages with various content types.
from fast_agent import Prompt

# Create user message
user_msg = Prompt.user("What files are in this directory?")

# Create assistant message
assistant_msg = Prompt.assistant("Here are the files...")

# Create message with explicit role
msg = Prompt.message("System instructions", role="user")

Prompt.user()

Create a user message with various content types.
from pathlib import Path

# Simple text
user_msg = Prompt.user("Hello, can you help me?")

# With image
user_msg = Prompt.user(
    "What's in this image?",
    Path("photo.jpg")
)

# With resource
user_msg = Prompt.user(
    "Review this code",
    Path("script.py")
)

# Multiple content items
user_msg = Prompt.user(
    "Analyze these files:",
    Path("data1.csv"),
    Path("data2.csv")
)
Parameters:
  • content_items (str | Path | bytes | dict | ContentBlock | ...): Content items to include in the message. Accepts:
    • Strings (converted to TextContent)
    • Image file paths (converted to ImageContent)
    • Other file paths (converted to EmbeddedResource)
    • ContentBlock objects
    • ResourceContents objects
    • ReadResourceResult objects
    • PromptMessage or PromptMessageExtended objects

Returns:
  • message (PromptMessageExtended): User message with provided content
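The conversion rules above can be sketched with plain dictionaries. This is an illustrative approximation of the dispatch logic, not fast_agent's actual implementation:

```python
from pathlib import Path

# Illustrative sketch: suffixes treated as images here are an assumption.
IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def to_content_block(item):
    """Map a content item to a text / image / resource block, mirroring the rules above."""
    if isinstance(item, str):
        return {"type": "text", "text": item}
    if isinstance(item, Path):
        if item.suffix.lower() in IMAGE_SUFFIXES:
            return {"type": "image", "path": str(item)}
        return {"type": "resource", "path": str(item)}
    raise TypeError(f"Unsupported content item: {type(item)!r}")

blocks = [to_content_block(x)
          for x in ("Analyze these files:", Path("data1.csv"), Path("photo.jpg"))]
```

The real helper returns typed ContentBlock objects rather than dicts; the point is the per-type dispatch.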

Prompt.assistant()

Create an assistant message with optional tool calls and stop reason.
# Simple response
assistant_msg = Prompt.assistant("The answer is 42")

# With stop reason
assistant_msg = Prompt.assistant(
    "I need more information",
    stop_reason="end_turn"
)

# With tool calls
assistant_msg = Prompt.assistant(
    "Calling read_file tool",
    tool_calls=tool_calls_dict
)

# With phase information
assistant_msg = Prompt.assistant(
    "Processing...",
    phase="thinking"
)
Parameters:
  • content_items (str | Path | bytes | dict | ContentBlock | ...): Content items to include in the message
  • stop_reason (LlmStopReason | None): Reason the assistant stopped generating (e.g., "end_turn", "max_tokens")
  • tool_calls (dict[str, CallToolRequest] | None): Tool calls made by the assistant
  • phase (AssistantMessagePhase | None): Message phase (e.g., "thinking", "responding")

Returns:
  • message (PromptMessageExtended): Assistant message with provided content and metadata

Prompt.message()

Create a message with explicit role specification.
from pathlib import Path

# User message
msg = Prompt.message("Hello", role="user")

# Assistant message
msg = Prompt.message("Hi there!", role="assistant")

# With multiple content items
msg = Prompt.message(
    "System configuration:",
    Path("config.json"),
    role="user"
)
Parameters:
  • content_items (str | Path | bytes | dict | ContentBlock | ...): Content items to include in the message
  • role ('user' | 'assistant', default "user"): Role for the message

Returns:
  • message (PromptMessageExtended): Message with specified role and content

Prompt.conversation()

Create a conversation from multiple messages.
conversation = Prompt.conversation(
    Prompt.user("What is the capital of France?"),
    Prompt.assistant("The capital of France is Paris."),
    Prompt.user("What about Germany?")
)
Parameters:
  • messages (PromptMessageExtended | dict | list): Messages to include in the conversation. Accepts:
    • PromptMessageExtended objects
    • PromptMessage objects
    • Message dictionaries with 'role' and 'content' keys
    • Lists of messages

Returns:
  • conversation (list[PromptMessage]): List of prompt messages forming a conversation
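Because Prompt.conversation() accepts both individual messages and lists of messages, a flattening step is implied. It can be sketched like this (illustrative, over plain values; not fast_agent's code):

```python
def flatten_messages(items):
    """Recursively flatten nested lists into one flat conversation list (illustrative)."""
    out = []
    for item in items:
        if isinstance(item, list):
            out.extend(flatten_messages(item))
        else:
            out.append(item)
    return out

flat = flatten_messages([{"role": "user"}, [{"role": "assistant"}, {"role": "user"}]])
```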

Loading Prompts

load_prompt()

Load a prompt from a file with full conversation state.
from fast_agent import load_prompt
from pathlib import Path

# Load from JSON file (preserves tool_calls, channels, etc.)
messages = load_prompt(Path("conversation.json"))

# Load from template file (supports resources)
messages = load_prompt(Path("prompt_template.txt"))
Parameters:
  • file (Path | str, required): Path to the prompt file. Supports:
    • .json files: Full serialization format
    • Other files: Template-based format with resource loading

Returns:
  • messages (list[PromptMessageExtended]): List of messages with full conversation state

JSON Format

JSON files preserve complete conversation state including tool calls, stop reasons, and custom channels:
[
  {
    "role": "user",
    "content": [
      {"type": "text", "text": "Hello"}
    ]
  },
  {
    "role": "assistant",
    "content": [
      {"type": "text", "text": "Hi there!"}
    ],
    "stop_reason": "end_turn",
    "tool_calls": null
  }
]
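The format above round-trips with the standard json module (field names as shown; a sketch, not tied to fast_agent's serializer):

```python
import json

# Build a conversation in the JSON format shown above and round-trip it.
conversation = [
    {"role": "user", "content": [{"type": "text", "text": "Hello"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Hi there!"}],
     "stop_reason": "end_turn", "tool_calls": None},
]

payload = json.dumps(conversation, indent=2)
restored = json.loads(payload)
```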

Template Format

Template files use delimiters for role sections and support resource loading:
---user---
Please review this code:

@include code.py

---assistant---
Here's my review...
Resources referenced with @include are automatically loaded and embedded.
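A minimal parser for this delimiter format can be sketched with the standard re module (illustrative only; not fast_agent's loader, which also resolves @include resources):

```python
import re

TEMPLATE = """---user---
Please review this code:

@include code.py

---assistant---
Here's my review..."""

def parse_template(text):
    """Split a template into role/content sections on ---role--- delimiter lines."""
    parts = re.split(r"^---(user|assistant)---\s*$", text, flags=re.MULTILINE)
    # re.split with a capturing group yields [prefix, role, body, role, body, ...]
    it = iter(parts[1:])
    return [{"role": role, "content": body.strip()} for role, body in zip(it, it)]

messages = parse_template(TEMPLATE)
```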

History Management

load_history_into_agent()

Load conversation history directly into an agent without triggering LLM calls.
from fast_agent.mcp.prompts.prompt_load import load_history_into_agent
from pathlib import Path

# Restore saved conversation
notice = load_history_into_agent(
    agent=my_agent,
    file_path=Path("saved_conversation.json")
)

if notice:
    print(f"Notice: {notice}")

# Continue conversation
response = await my_agent.run("Continue from where we left off")
Parameters:
  • agent (AgentProtocol, required): Agent instance to restore history into
  • file_path (Path, required): Path to saved history file (JSON or template format)

Returns:
  • notice (str | None): Optional notice string if usage state cannot be fully restored

Note: The agent's history is cleared before loading. Provider diagnostic history will be updated on the next API call.

Content Helpers

Utility functions for working with MCP content.

get_text()

Extract text from a content block.
from fast_agent.mcp import get_text

text = get_text(content_block)
if text:
    print(text)
Parameters:
  • content (ContentBlock, required): Content block to extract text from

Returns:
  • text (str | None): Extracted text, or None if content has no text

get_image_data()

Extract image data (base64) from a content block.
from fast_agent.mcp import get_image_data

image_data = get_image_data(content_block)
if image_data:
    # image_data is base64 encoded
    save_image(image_data)
Parameters:
  • content (ContentBlock, required): Content block to extract image from

Returns:
  • data (str | None): Base64-encoded image data, or None if not an image

get_resource_uri()

Extract resource URI from an embedded resource.
from fast_agent.mcp import get_resource_uri

uri = get_resource_uri(content_block)
if uri:
    print(f"Resource URI: {uri}")
Parameters:
  • content (ContentBlock, required): Content block to extract URI from

Returns:
  • uri (str | None): Resource URI, or None if not an embedded resource

Type Guards

Type checking functions for content blocks:
from fast_agent.mcp import (
    is_text_content,
    is_image_content,
    is_resource_content,
    is_resource_link
)

if is_text_content(block):
    text = block.text
elif is_image_content(block):
    data = block.data
elif is_resource_content(block):
    uri = block.resource.uri
elif is_resource_link(block):
    link_uri = block.uri

Resource Links

Create resource links for external content.

resource_link()

Create a generic resource link with automatic MIME type inference.
from fast_agent.types import resource_link

link = resource_link(
    url="https://example.com/document.pdf",
    name="Product Manual",
    description="User guide for Product X"
)

msg = Prompt.user("Review this document:", link)
Parameters:
  • url (str, required): URL to the resource
  • name (str | None): Resource name (defaults to filename from URL)
  • mime_type (str | None): MIME type (inferred from extension if not provided)
  • description (str | None): Description of the resource

Returns:
  • Resource link object
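MIME inference from a URL's extension can be done with the standard mimetypes module. This is a sketch of how such inference might work, not necessarily what resource_link does internally:

```python
import mimetypes
from urllib.parse import urlparse

def infer_mime(url, default="application/octet-stream"):
    """Guess a MIME type from the URL path's extension; fall back to a default."""
    path = urlparse(url).path
    guessed, _ = mimetypes.guess_type(path)
    return guessed or default

pdf_type = infer_mime("https://example.com/document.pdf")  # application/pdf
```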

image_link()

Create a resource link for an image URL.
from fast_agent.types import image_link

link = image_link(
    url="https://example.com/photo.jpg",
    description="Product photo"
)

msg = Prompt.user("Describe this image:", link)
Parameters:
  • url (str, required): URL to the image
  • name (str | None): Image name (defaults to filename)
  • mime_type (str | None): MIME type (defaults to image/jpeg)
  • description (str | None): Description of the image

Returns:
  • Resource link with image MIME type

video_link()

Create a resource link for a video URL.
from fast_agent.types import video_link

link = video_link(
    url="https://youtube.com/watch?v=abc123",
    description="Tutorial video"
)
Parameters:
  • url (str, required): URL to the video (supports YouTube URLs)
  • name (str | None): Video name (defaults to filename)
  • mime_type (str | None): MIME type (defaults to video/mp4)
  • description (str | None): Description of the video

Returns:
  • Resource link with video MIME type

audio_link()

Create a resource link for an audio URL.
from fast_agent.types import audio_link

link = audio_link(
    url="https://example.com/podcast.mp3",
    description="Episode 42"
)
Parameters:
  • url (str, required): URL to the audio file
  • name (str | None): Audio name (defaults to filename)
  • mime_type (str | None): MIME type (defaults to audio/mpeg)
  • description (str | None): Description of the audio

Returns:
  • Resource link with audio MIME type

Message Utilities

Search and filter messages in conversations.

search_messages()

Search messages by role or content.
from fast_agent.types import search_messages

# Find all user messages
user_messages = search_messages(
    messages=conversation,
    role="user"
)

# Find messages containing specific text
matches = search_messages(
    messages=conversation,
    content_pattern="error"
)
Parameters:
  • messages (list[PromptMessageExtended], required): Messages to search through
  • role ('user' | 'assistant' | None): Filter by message role
  • content_pattern (str | None): Search pattern for message content

Returns:
  • matches (list[PromptMessageExtended]): Messages matching the search criteria
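The role/pattern filtering can be sketched over plain dicts (illustrative; fast_agent's search_messages operates on PromptMessageExtended objects):

```python
def search_dicts(messages, role=None, content_pattern=None):
    """Return messages matching an optional role and an optional substring pattern."""
    def matches(msg):
        if role is not None and msg["role"] != role:
            return False
        if content_pattern is not None:
            text = " ".join(block.get("text", "") for block in msg["content"])
            if content_pattern not in text:
                return False
        return True
    return [m for m in messages if matches(m)]

conversation = [
    {"role": "user", "content": [{"type": "text", "text": "An error occurred"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Let me check"}]},
]
```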

extract_first() / extract_last()

Extract first or last message from a list.
from fast_agent.types import extract_first, extract_last

# Get first user message
first_user = extract_first(messages, role="user")

# Get last assistant message
last_assistant = extract_last(messages, role="assistant")
Parameters:
  • messages (list[PromptMessageExtended], required): Messages to search
  • role ('user' | 'assistant' | None): Filter by message role

Returns:
  • message (PromptMessageExtended | None): First or last matching message, or None if not found
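Illustrative equivalents over plain dicts (not fast_agent's implementation) show the first/last symmetry:

```python
def extract_first_dict(messages, role=None):
    """Return the first message matching the role filter, or None."""
    return next((m for m in messages if role is None or m["role"] == role), None)

def extract_last_dict(messages, role=None):
    """Return the last matching message by scanning the reversed list."""
    return extract_first_dict(list(reversed(messages)), role)

msgs = [
    {"role": "user", "text": "q1"},
    {"role": "assistant", "text": "a1"},
    {"role": "user", "text": "q2"},
]
```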

Example: Complete Prompt Workflow

from fast_agent import Prompt, load_prompt
from fast_agent.types import image_link, resource_link
from pathlib import Path

# Create a multi-modal conversation
conversation = [
    Prompt.user(
        "Please analyze this code and image:",
        Path("script.py"),
        image_link("https://example.com/diagram.png")
    ),
    Prompt.assistant(
        "I can see the code implements...",
        stop_reason="end_turn"
    ),
    Prompt.user("What about this document?", resource_link(
        "https://example.com/spec.pdf",
        description="Technical specification"
    ))
]

# Use in agent
response = await agent.run(conversation)

# Save conversation
agent.save_history(Path("conversation.json"))

# Later: restore conversation
messages = load_prompt(Path("conversation.json"))
agent.message_history.extend(messages)

Best Practices

• Use Prompt.user() and Prompt.assistant() for type-safe message creation. The helpers automatically handle various content types, including files and resources.
• When loading prompts from JSON, ensure the file was created by Fast Agent's serialization system so that all metadata, including tool calls and channels, is preserved.
• Template files with @include directives resolve resources relative to the template file's location, making it easy to bundle prompts with their referenced files.
