The Client class is the abstract base class for all LLM provider implementations in Verifiers. It defines the interface for converting between Verifiers’ unified types and provider-specific formats.

Overview

The Client class is a generic abstract base class that handles:
  • Converting Verifiers messages to provider-native formats
  • Converting Verifiers tools to provider-native formats
  • Getting responses from LLM providers
  • Converting provider responses back to Verifiers format
  • Error handling and authentication

Type Parameters

The Client class is generic over four types:
  • ClientT - The native client type (e.g., AsyncOpenAI, AsyncAnthropic)
  • MessagesT - The native messages format
  • ResponseT - The native response type
  • ToolT - The native tool format
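
To illustrate how a concrete subclass binds these four parameters, here is a minimal, self-contained sketch using stand-in types (the real Client lives in verifiers; an OpenAI chat client would bind something like Client[AsyncOpenAI, list[dict], ChatCompletion, dict]):

```python
from typing import Generic, TypeVar

# Stand-in declaration of the generic base; the real one is in verifiers.
ClientT = TypeVar("ClientT")
MessagesT = TypeVar("MessagesT")
ResponseT = TypeVar("ResponseT")
ToolT = TypeVar("ToolT")

class Client(Generic[ClientT, MessagesT, ResponseT, ToolT]):
    """Simplified stand-in for the verifiers base class."""

class FakeNativeClient: ...   # stand-in for e.g. AsyncOpenAI
class FakeResponse: ...       # stand-in for e.g. ChatCompletion

# A subclass binds all four parameters at once:
class FakeChatClient(
    Client[FakeNativeClient, list[dict], FakeResponse, dict]
):
    pass
```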

Constructor

Client(client_or_config: ClientT | ClientConfig)
client_or_config (ClientT | ClientConfig, required): Either a pre-configured native client instance or a ClientConfig object to set up a new client.

Example

from verifiers.clients.openai_chat_completions_client import OpenAIChatCompletionsClient
from verifiers.types import ClientConfig

# Using ClientConfig
config = ClientConfig(
    api_key="sk-...",
    base_url="https://api.openai.com/v1"
)
client = OpenAIChatCompletionsClient(config)

# Using pre-configured client
from openai import AsyncOpenAI
native_client = AsyncOpenAI(api_key="sk-...")
client = OpenAIChatCompletionsClient(native_client)

Properties

client

@property
def client(self) -> ClientT
Returns the underlying native client instance.

Abstract Methods

Subclasses must implement the following methods:

setup_client

@abstractmethod
def setup_client(self, config: ClientConfig) -> ClientT
Creates and configures the native client from a ClientConfig.
config (ClientConfig, required): Configuration object containing API keys, base URL, and other settings.
Returns: The configured native client instance.

to_native_tool

@abstractmethod
async def to_native_tool(self, tool: Tool) -> ToolT
Converts a Verifiers Tool to the provider’s native tool format.
tool (Tool, required): A Verifiers tool definition.
Returns: The tool in the provider’s native format.

to_native_prompt

@abstractmethod
async def to_native_prompt(self, messages: Messages) -> tuple[MessagesT, dict]
Converts Verifiers Messages to the provider’s native prompt format.
messages (Messages, required): List of Verifiers message objects.
Returns: A tuple of (native_messages, extra_kwargs) where extra_kwargs are additional parameters to pass to get_native_response.
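
As an illustration, here is a minimal sketch of such a conversion targeting an OpenAI-style chat format. The message classes are simplified stand-ins for verifiers' Messages types, and the empty extra_kwargs is where a converter would put provider-specific parameters (e.g. a separately extracted system prompt for APIs that take one):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class SystemMessage:       # stand-in for a verifiers message type
    content: str
    role: str = "system"

@dataclass
class UserMessage:         # stand-in for a verifiers message type
    content: str
    role: str = "user"

async def to_native_prompt(messages) -> tuple[list[dict], dict]:
    # Map each message to the provider's chat-dict shape.
    native = [{"role": m.role, "content": m.content} for m in messages]
    # Extra parameters to forward to get_native_response (none here).
    extra_kwargs: dict = {}
    return native, extra_kwargs

native, extra = asyncio.run(
    to_native_prompt([SystemMessage("Be terse."), UserMessage("Hi")])
)
```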

get_native_response

@abstractmethod
async def get_native_response(
    self,
    prompt: MessagesT,
    model: str,
    sampling_args: SamplingArgs,
    tools: list[ToolT] | None = None,
    **kwargs
) -> ResponseT
Gets a response from the provider using their native API.
prompt (MessagesT, required): The prompt in the provider’s native format.
model (str, required): Model identifier (e.g., "gpt-4", "claude-3-5-sonnet-20241022").
sampling_args (SamplingArgs, required): Sampling parameters such as temperature and max_tokens.
tools (list[ToolT] | None, default: None): Optional list of tools in the provider’s native format.
Returns: The provider’s native response object.

raise_from_native_response

@abstractmethod
async def raise_from_native_response(self, response: ResponseT) -> None
Validates the native response and raises ModelError if invalid.
response (ResponseT, required): The provider’s native response object.
Raises: ModelError subclasses like EmptyModelResponseError or InvalidModelResponseError.
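
A minimal sketch of one such validator. The response shape and the exception classes are stand-ins redefined here for illustration (the real error types live in verifiers):

```python
import asyncio
from dataclasses import dataclass

class ModelError(Exception): ...                 # stand-in
class EmptyModelResponseError(ModelError): ...   # stand-in

@dataclass
class FakeResponse:    # stand-in for a provider's native response
    choices: list

async def raise_from_native_response(response: FakeResponse) -> None:
    # Minimal check: an empty choice list means the provider
    # returned nothing usable.
    if not response.choices:
        raise EmptyModelResponseError("provider returned no choices")

# A non-empty response passes validation silently.
asyncio.run(raise_from_native_response(FakeResponse(choices=["ok"])))
```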

from_native_response

@abstractmethod
async def from_native_response(self, response: ResponseT) -> Response
Converts the provider’s native response to a Verifiers Response.
response (ResponseT, required): The provider’s native response object.
Returns: A Verifiers Response object.

close

@abstractmethod
async def close(self) -> None
Closes the underlying client connection, if applicable.

Public Methods

to_native_tools

async def to_native_tools(self, tools: list[Tool] | None) -> list[ToolT] | None
Converts a list of Verifiers tools to the provider’s native tool format.
tools (list[Tool] | None, required): List of Verifiers tool definitions, or None.
Returns: List of tools in the provider’s native format, or None if input was None.

get_response

async def get_response(
    self,
    prompt: Messages,
    model: str,
    sampling_args: SamplingArgs,
    tools: list[Tool] | None = None,
    **kwargs
) -> Response
Main method to get a response from the LLM. Handles the full conversion pipeline:
  1. Converts Verifiers messages to native format
  2. Converts Verifiers tools to native format
  3. Gets the native response
  4. Validates the response
  5. Converts back to Verifiers format
prompt (Messages, required): List of Verifiers message objects.
model (str, required): Model identifier.
sampling_args (SamplingArgs, required): Sampling parameters.
tools (list[Tool] | None, default: None): Optional list of Verifiers tool definitions.
Returns: A Verifiers Response object.
Raises:
  • Authentication errors from the provider (re-raised)
  • ModelError for other provider errors
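
The five-step pipeline can be sketched as follows, with trivial stand-in converters so the flow is runnable end to end. Note how the extra_kwargs returned by to_native_prompt are threaded into the native call; the real signatures are richer than these:

```python
import asyncio

async def to_native_prompt(messages):
    return [{"role": "user", "content": m} for m in messages], {}

async def to_native_tools(tools):
    return tools

async def get_native_response(prompt, model, sampling_args, tools=None, **kw):
    # Canned stand-in for a real provider API call.
    return {"choices": [{"message": {"content": "4"}}]}

async def raise_from_native_response(response):
    if not response["choices"]:
        raise RuntimeError("empty response")

async def from_native_response(response):
    return response["choices"][0]["message"]["content"]

async def get_response(prompt, model, sampling_args, tools=None, **kwargs):
    native_prompt, extra = await to_native_prompt(prompt)    # 1. messages
    native_tools = await to_native_tools(tools)              # 2. tools
    native = await get_native_response(
        native_prompt, model, sampling_args,
        tools=native_tools, **extra, **kwargs)               # 3. call provider
    await raise_from_native_response(native)                 # 4. validate
    return await from_native_response(native)                # 5. convert back

result = asyncio.run(get_response(["What is 2+2?"], "fake-model", {}))
```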

Example

from verifiers.clients.openai_chat_completions_client import OpenAIChatCompletionsClient
from verifiers.types import ClientConfig, UserMessage, SamplingArgs

client = OpenAIChatCompletionsClient(
    ClientConfig(api_key="sk-...")
)

messages = [UserMessage(content="What is 2+2?")]
sampling_args = SamplingArgs(
    temperature=0.7,
    max_tokens=100
)

response = await client.get_response(
    prompt=messages,
    model="gpt-4",
    sampling_args=sampling_args
)

print(response.message.content)  # "2+2 equals 4."

Error Handling

The Client class catches and handles errors:
  • Verifiers errors (Error subclasses): Re-raised as-is
  • Authentication errors: Re-raised from the provider
  • All other exceptions: Wrapped in ModelError
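
This classification policy can be sketched with stand-in exception types (AuthError here is a hypothetical placeholder for a provider's authentication error; the real base Error and ModelError live in verifiers):

```python
import asyncio

class Error(Exception): ...       # stand-in for verifiers' base Error
class ModelError(Error): ...      # stand-in
class AuthError(Exception): ...   # stand-in for a provider auth error

async def call_with_error_policy(coro_fn):
    try:
        return await coro_fn()
    except (Error, AuthError):
        raise    # Verifiers errors and auth errors pass through as-is
    except Exception as exc:
        # Everything else is wrapped so callers see a uniform error type.
        raise ModelError(str(exc)) from exc

async def flaky():
    raise ValueError("provider hiccup")

try:
    asyncio.run(call_with_error_policy(flaky))
except ModelError:
    wrapped = True    # the ValueError arrived wrapped as ModelError
```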
