The AnthropicMessagesClient class provides a wrapper around the Anthropic Messages API using the AsyncAnthropic client.

Overview

This client implements the Client interface for Anthropic’s Messages API, handling:
  • Message format conversion between Verifiers and Anthropic formats
  • System message extraction and formatting
  • Tool calling support
  • Thinking blocks (extended thinking / chain-of-thought reasoning)
  • Image content handling (data URLs to base64)
  • Context length error handling

Class Definition

class AnthropicMessagesClient(
    Client[
        AsyncAnthropic,
        list[AnthropicMessageParam],
        AnthropicMessage,
        AnthropicToolParam,
    ]
)
Generic type parameters:
  • ClientT: AsyncAnthropic - The Anthropic async client
  • MessagesT: list[AnthropicMessageParam] - List of Anthropic message parameters
  • ResponseT: AnthropicMessage - Anthropic Message object
  • ToolT: AnthropicToolParam - Anthropic tool parameter type

Constructor

AnthropicMessagesClient(client_or_config: AsyncAnthropic | ClientConfig)
client_or_config
AsyncAnthropic | ClientConfig
required
Either a pre-configured AsyncAnthropic client or a ClientConfig to create one.

Example

from verifiers.clients.anthropic_messages_client import AnthropicMessagesClient
from verifiers.types import ClientConfig

# Using ClientConfig
client = AnthropicMessagesClient(
    ClientConfig(
        api_key="sk-ant-...",
        base_url="https://api.anthropic.com"
    )
)

# Using pre-configured AsyncAnthropic client
from anthropic import AsyncAnthropic
client = AnthropicMessagesClient(
    AsyncAnthropic(api_key="sk-ant-...")
)

Methods

setup_client

def setup_client(self, config: ClientConfig) -> AsyncAnthropic
Creates an AsyncAnthropic client from a ClientConfig.
config
ClientConfig
required
Configuration with API key, base URL, and other settings.
Returns: Configured AsyncAnthropic instance.

close

async def close(self) -> None
Closes the underlying AsyncAnthropic client connection.

to_native_prompt

async def to_native_prompt(
    self, messages: Messages
) -> tuple[list[AnthropicMessageParam], dict]
Converts Verifiers messages to Anthropic’s message format.
messages
Messages
required
List of Verifiers message objects (SystemMessage, UserMessage, AssistantMessage, ToolMessage, or TextMessage).
Returns: Tuple of (anthropic_messages, extra_kwargs), where extra_kwargs contains {"system": "<system_content>"} if system messages are present.
Special handling:
  • System messages: Extracted and concatenated, passed as system parameter in extra_kwargs
  • Tool messages: Grouped into batches and converted to user messages with tool_result content blocks
  • Thinking blocks: Preserved in assistant messages (for extended thinking models)
  • Images: Data URLs converted to base64 format with proper media type detection
  • Audio: Replaced with [audio] placeholder text
Supported message types:
  • SystemMessage → Extracted as system parameter
  • UserMessage → AnthropicMessageParam with role="user"
  • AssistantMessage → AnthropicMessageParam with role="assistant" (supports tool calls and thinking blocks)
  • ToolMessage → Converted to user message with tool_result blocks
  • TextMessage → AnthropicMessageParam with role="user"
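The system-message extraction described above can be pictured with a plain-dict sketch. This is illustrative only: the real method operates on Verifiers message objects and also handles tool batching, images, and thinking blocks, and the separator used when concatenating multiple system messages is an assumption here.

```python
def split_system(messages: list[dict]) -> tuple[list[dict], dict]:
    """Illustrative sketch of system-message extraction (not the library's code)."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # System content is passed separately via extra_kwargs, not in the message list.
    # Joined with blank lines here for illustration; the actual separator is unspecified.
    extra = {"system": "\n\n".join(system_parts)} if system_parts else {}
    return rest, extra
```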

to_native_tool

async def to_native_tool(self, tool: Tool) -> AnthropicToolParam
Converts a Verifiers Tool to Anthropic’s tool parameter format.
tool
Tool
required
Verifiers tool definition with name, description, and parameters.
Returns: AnthropicToolParam object. Note: Anthropic tools use input_schema instead of parameters.
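The parameters-to-input_schema renaming can be sketched with a hypothetical helper (the actual method works on Verifiers Tool objects, not dicts):

```python
def tool_to_anthropic(tool: dict) -> dict:
    """Hypothetical sketch of the conversion, using plain dicts for illustration."""
    # Anthropic expects "input_schema" where OpenAI-style tools use "parameters".
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["parameters"],
    }
```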

get_native_response

@_handle_anthropic_overlong_prompt
async def get_native_response(
    self,
    prompt: list[AnthropicMessageParam],
    model: str,
    sampling_args: SamplingArgs,
    tools: list[AnthropicToolParam] | None = None,
    **kwargs
) -> AnthropicMessage
Calls the Anthropic Messages API and returns the native response.
prompt
list[AnthropicMessageParam]
required
List of Anthropic message parameters.
model
str
required
Anthropic model identifier (e.g., "claude-3-5-sonnet-20241022", "claude-3-5-haiku-20241022").
sampling_args
SamplingArgs
required
Sampling parameters. max_tokens is required by Anthropic; defaults to 4096 if not provided.
tools
list[AnthropicToolParam] | None
default:"None"
Optional list of tools in Anthropic format.
Returns: Anthropic Message object.
Raises: OverlongPromptError if the prompt exceeds the model’s context length.
Special handling:
  • max_tokens: Required by Anthropic API, defaults to 4096 with a warning if not provided
  • Filtered parameters: Removes n and stop (not supported by Anthropic)
  • Internal state: Removes state key from kwargs (internal framework field)
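The parameter filtering above can be sketched as follows. This is an illustrative standalone function, not the library's implementation; the helper name and the exact warning text are assumptions.

```python
import warnings

def prepare_anthropic_kwargs(sampling_args: dict) -> dict:
    """Illustrative sketch of the parameter filtering described above."""
    # Drop parameters the Anthropic Messages API does not accept,
    # plus the internal framework "state" key.
    args = {k: v for k, v in sampling_args.items() if k not in ("n", "stop", "state")}
    if args.get("max_tokens") is None:
        warnings.warn("max_tokens not set; defaulting to 4096 (required by Anthropic)")
        args["max_tokens"] = 4096
    return args
```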

raise_from_native_response

async def raise_from_native_response(self, response: AnthropicMessage) -> None
Validates the Anthropic response. Currently a no-op (passes all responses).
response
AnthropicMessage
required
The Anthropic Message response.

from_native_response

async def from_native_response(self, response: AnthropicMessage) -> Response
Converts an Anthropic Message to a Verifiers Response.
response
AnthropicMessage
required
The Anthropic Message response.
Returns: Verifiers Response object with:
  • id: Response ID from Anthropic
  • created: Current timestamp (Anthropic doesn’t provide this)
  • model: Model name from response
  • usage: Token counts (input_tokens, output_tokens, total)
  • message: Response message with content, reasoning content, tool calls, thinking blocks, and finish reason
Parsed content blocks:
  • text: Concatenated into message.content
  • thinking: Concatenated into message.reasoning_content, stored in thinking_blocks
  • redacted_thinking: Stored in thinking_blocks
  • tool_use: Converted to Verifiers ToolCall objects
Finish reasons:
  • "end_turn""stop"
  • "max_tokens""length"
  • "tool_use""tool_calls"
  • Other → None

Usage Example

import asyncio
from verifiers.clients.anthropic_messages_client import AnthropicMessagesClient
from verifiers.types import (
    ClientConfig,
    SystemMessage,
    UserMessage,
    SamplingArgs,
    Tool,
)

async def main():
    # Initialize client
    client = AnthropicMessagesClient(
        ClientConfig(api_key="sk-ant-...")
    )
    
    # Simple conversation with system message
    messages = [
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the capital of France?")
    ]
    sampling_args = SamplingArgs(
        temperature=0.7,
        max_tokens=100
    )
    
    response = await client.get_response(
        prompt=messages,
        model="claude-3-5-sonnet-20241022",
        sampling_args=sampling_args
    )
    
    print(response.message.content)
    # "The capital of France is Paris."
    
    # With tools
    calculator_tool = Tool(
        name="calculate",
        description="Perform a calculation",
        parameters={
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Mathematical expression to evaluate"
                }
            },
            "required": ["expression"]
        }
    )
    
    messages = [UserMessage(content="What is 15 * 23?")]
    response = await client.get_response(
        prompt=messages,
        model="claude-3-5-sonnet-20241022",
        sampling_args=sampling_args,
        tools=[calculator_tool]
    )
    
    if response.message.tool_calls:
        for tool_call in response.message.tool_calls:
            print(f"Tool: {tool_call.name}")
            print(f"Arguments: {tool_call.arguments}")
            # Tool: calculate
            # Arguments: {"expression": "15 * 23"}
    
    await client.close()

asyncio.run(main())

Thinking Blocks Support

Anthropic models with extended thinking (like Claude 3.7 Sonnet with thinking) return thinking and redacted_thinking content blocks. These are:
  • Extracted as reasoning content: Available in response.message.reasoning_content
  • Preserved as thinking blocks: Available in response.message.thinking_blocks
  • Maintained across turns: Thinking blocks from assistant messages are preserved when converting back to native format

Example

response = await client.get_response(
    prompt=[UserMessage(content="Solve this logic puzzle: ...")],
    model="claude-3-7-sonnet-20250219",
    sampling_args=SamplingArgs(max_tokens=2000, thinking={"type": "enabled", "budget_tokens": 1000})
)

if response.message.reasoning_content:
    print("Reasoning:", response.message.reasoning_content)

if response.message.thinking_blocks:
    for block in response.message.thinking_blocks:
        print(f"Block type: {block.type}")
        if hasattr(block, "thinking"):
            print(f"Thinking: {block.thinking}")

Error Handling

Overlong Prompt Errors

The @_handle_anthropic_overlong_prompt decorator catches BadRequestError and converts context length errors to OverlongPromptError. It detects phrases like:
  • “prompt is too long”
  • “exceed context limit”
  • “too many total text bytes”
  • “context length”
  • “input is too long”
Authentication and permission errors are re-raised without wrapping.
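The phrase detection can be sketched as a case-insensitive substring check over the phrases listed above (illustrative; the decorator's actual matching logic may differ):

```python
CONTEXT_LENGTH_PHRASES = (
    "prompt is too long",
    "exceed context limit",
    "too many total text bytes",
    "context length",
    "input is too long",
)

def looks_like_context_length_error(message: str) -> bool:
    """Illustrative check mirroring the phrases listed above."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in CONTEXT_LENGTH_PHRASES)
```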

Image Handling

The client converts OpenAI-style image URLs to Anthropic’s base64 format:
# Input (OpenAI format)
UserMessage(content=[
    {"type": "image_url", "image_url": {"url": "data:image/png;base64,iVBORw0KG..."}}
])

# Converted to Anthropic format
{
    "role": "user",
    "content": [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": "iVBORw0KG..."
            }
        }
    ]
}
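The data-URL conversion shown above can be sketched as (a hypothetical helper, not the library's code):

```python
def data_url_to_image_block(url: str) -> dict:
    """Sketch of converting a data URL to an Anthropic base64 image block."""
    # Split "data:image/png;base64,<payload>" into header and payload.
    header, data = url.split(",", 1)
    media_type = header.removeprefix("data:").split(";", 1)[0]
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }
```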
