The `OpenAICompletionsClient` class provides a wrapper around the OpenAI Completions API (the legacy text completion endpoint) using the `AsyncOpenAI` client.
## Overview
This client implements the Client interface for OpenAI’s legacy Completions API, which is used for text completion models. It handles:
- Message conversion to plain text format (Completions API only accepts text prompts)
- Text-only content validation (rejects images and other multimodal content)
- Token usage and logprobs parsing
- Context length error handling
The Completions API does not support tools, function calling, or multimodal content. Use `OpenAIChatCompletionsClient` for these features.
## Type Aliases

```python
OpenAITextMessages = str
OpenAITextResponse = Completion
```
## Class Definition

```python
class OpenAICompletionsClient(
    Client[
        AsyncOpenAI,
        OpenAITextMessages,
        OpenAITextResponse,
        None,
    ]
)
```
Generic type parameters:
- `ClientT` = `AsyncOpenAI`: the OpenAI async client
- `MessagesT` = `OpenAITextMessages`: plain text string (concatenated messages)
- `ResponseT` = `OpenAITextResponse`: OpenAI `Completion` object
- `ToolT` = `None`: tools are not supported
## Constructor

```python
OpenAICompletionsClient(client_or_config: AsyncOpenAI | ClientConfig)
```

- `client_or_config` (`AsyncOpenAI | ClientConfig`, required): Either a pre-configured `AsyncOpenAI` client or a `ClientConfig` to create one.
### Example

```python
from verifiers.clients.openai_completions_client import OpenAICompletionsClient
from verifiers.types import ClientConfig

# Using ClientConfig
client = OpenAICompletionsClient(
    ClientConfig(
        api_key="sk-...",
        base_url="https://api.openai.com/v1",
    )
)

# Using a pre-configured AsyncOpenAI client
from openai import AsyncOpenAI

client = OpenAICompletionsClient(
    AsyncOpenAI(api_key="sk-...", base_url="https://api.openai.com/v1")
)
```
## Methods
### setup_client

```python
def setup_client(self, config: ClientConfig) -> AsyncOpenAI
```

Creates an `AsyncOpenAI` client from a `ClientConfig`.

- `config` (`ClientConfig`, required): Configuration with API key, base URL, and other settings.

Returns: Configured `AsyncOpenAI` instance.
### close

```python
async def close(self) -> None
```

Closes the underlying `AsyncOpenAI` client connection.
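A common cleanup pattern is to pair requests with `try`/`finally` so the connection pool is always released. This is a generic sketch, not library code; `run_with_cleanup` and `make_request` are hypothetical names, and any object exposing an async `close()` works the same way:

```python
import asyncio

# Generic cleanup sketch: always close the client, even if a request fails.
# `make_request` is a hypothetical stand-in for real API calls.
async def run_with_cleanup(client, make_request):
    try:
        return await make_request(client)
    finally:
        await client.close()  # release the underlying HTTP connections
```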
### to_native_prompt

```python
async def to_native_prompt(
    self, messages: Messages
) -> tuple[OpenAITextMessages, dict]
```

Converts Verifiers messages to plain text format for the Completions API.

- `messages` (`Messages`, required): List of Verifiers message objects. All message contents are concatenated with double newlines.

Returns: Tuple of `(text_prompt, extra_kwargs)`. The text prompt is a string with all message contents joined by `"\n\n"`. The `extra_kwargs` dict is currently empty.

Raises:
- `ValueError` if any message contains non-text content (e.g., images)
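The joining rule described above can be illustrated in isolation. This is a sketch of the documented behavior, not the library's actual implementation, and `join_message_texts` is a hypothetical helper:

```python
# Sketch of the documented concatenation rule: message texts are joined
# with double newlines to form the single Completions prompt string.
def join_message_texts(texts: list[str]) -> str:
    return "\n\n".join(texts)

prompt = join_message_texts([
    "User: What is 2+2?",
    "Assistant: 2+2 equals 4.",
])
# prompt == "User: What is 2+2?\n\nAssistant: 2+2 equals 4."
```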
### to_native_tool

```python
async def to_native_tool(self, tool: Tool) -> None
```

Not supported for the Completions API.

- `tool` (`Tool`, required): A Verifiers tool definition.

Raises: `ValueError` with message "Tools are not supported for Completions API"
### get_native_response

```python
@handle_openai_overlong_prompt
async def get_native_response(
    self,
    prompt: OpenAITextMessages,
    model: str,
    sampling_args: SamplingArgs,
    tools: list[None] | None = None,
    **kwargs,
) -> OpenAITextResponse
```
Calls the OpenAI Completions API and returns the native response.
- `prompt` (`OpenAITextMessages`, required): Plain text prompt string.
- `model` (`str`, required): OpenAI model identifier (e.g., `"gpt-3.5-turbo-instruct"`).
- `sampling_args` (`SamplingArgs`, required): Sampling parameters. `None` values are filtered out before sending to the API.
- `tools` (`list[None] | None`, default `None`): Must be `None` or empty. Tools are not supported.
Returns: OpenAI `Completion` object.

Raises:
- `ValueError` if tools are provided
- `OverlongPromptError` if the prompt exceeds the model's context length
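The `None`-filtering of sampling parameters mentioned above can be sketched as follows. `filter_sampling_args` is a hypothetical helper illustrating the documented behavior, not the library's code:

```python
def filter_sampling_args(sampling_args: dict) -> dict:
    # Drop None-valued entries so unset parameters are not sent to the API,
    # letting the server apply its own defaults.
    return {k: v for k, v in sampling_args.items() if v is not None}

kwargs = filter_sampling_args({"temperature": 0.7, "max_tokens": 50, "top_p": None})
# kwargs == {"temperature": 0.7, "max_tokens": 50}
```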
### raise_from_native_response

```python
async def raise_from_native_response(self, response: OpenAITextResponse) -> None
```

Validates the OpenAI response and raises errors if invalid.

- `response` (`OpenAITextResponse`, required): The OpenAI `Completion` response.
Raises:
- `EmptyModelResponseError` if the response is `None`, has no choices, or the text is empty
- `InvalidModelResponseError` if the response has more than one choice
### from_native_response

```python
async def from_native_response(self, response: OpenAITextResponse) -> Response
```

Converts an OpenAI `Completion` to a Verifiers `Response`.

- `response` (`OpenAITextResponse`, required): The OpenAI `Completion` response.
Returns: Verifiers `Response` object with:
- `id`: response ID from OpenAI
- `created`: timestamp from OpenAI
- `model`: model name from the response
- `usage`: token counts (`prompt_tokens`, `completion_tokens`, `total_tokens`, `reasoning_tokens=0`)
- `message`: response message with text content and metadata
Parsed fields:
- Content: text from `response.choices[0].text`
- Finish reason: mapped from OpenAI values (`"stop"` → `"stop"`, `"length"` → `"length"`, others → `None`)
- Is truncated: `True` if the finish reason is `"length"`
- Tokens: if available (vLLM with `return_tokens=true`), includes `prompt_token_ids`, `token_ids`, and `logprobs`
- Reasoning content: always `None` (not supported by the Completions API)
- Tool calls: always `None` (not supported by the Completions API)
## Usage Example

```python
import asyncio

from verifiers.clients.openai_completions_client import OpenAICompletionsClient
from verifiers.types import (
    ClientConfig,
    UserMessage,
    SamplingArgs,
)

async def main():
    # Initialize client
    client = OpenAICompletionsClient(
        ClientConfig(api_key="sk-...")
    )

    # Simple completion
    messages = [UserMessage(content="The capital of France is")]
    sampling_args = SamplingArgs(
        temperature=0.7,
        max_tokens=50,
    )
    response = await client.get_response(
        prompt=messages,
        model="gpt-3.5-turbo-instruct",
        sampling_args=sampling_args,
    )
    print(response.message.content)
    # " Paris."

    # Multi-message conversation (concatenated)
    messages = [
        UserMessage(content="User: What is 2+2?"),
        UserMessage(content="Assistant: 2+2 equals 4."),
        UserMessage(content="User: What about 3+3?"),
    ]
    response = await client.get_response(
        prompt=messages,
        model="gpt-3.5-turbo-instruct",
        sampling_args=sampling_args,
    )
    print(response.message.content)
    # "Assistant: 3+3 equals 6."

    await client.close()

asyncio.run(main())
```
## Limitations

### No Tools

```python
# This will raise ValueError
try:
    await client.get_response(
        prompt=messages,
        model="gpt-3.5-turbo-instruct",
        sampling_args=sampling_args,
        tools=[some_tool],  # NOT SUPPORTED
    )
except ValueError as e:
    print(e)  # "Completions API does not support tools..."
```
### No Multimodal Content

```python
# This will raise ValueError
from verifiers.types import ImageContent, TextContent, UserMessage

try:
    messages = [
        UserMessage(content=[
            TextContent(text="What's in this image?"),
            ImageContent(url="https://example.com/image.jpg"),
        ])
    ]
    await client.to_native_prompt(messages)
except ValueError as e:
    print(e)  # "Completions API does not support non-text content..."
```
## Token Details

The client attempts to parse token-level information from the response:

```python
response = await client.get_response(...)
if response.message.tokens:
    # Available when using vLLM with return_tokens=true
    print("Prompt token IDs:", response.message.tokens.prompt_ids)
    print("Completion token IDs:", response.message.tokens.completion_ids)
    print("Log probabilities:", response.message.tokens.completion_logprobs)
```
## Error Handling

The `@handle_openai_overlong_prompt` decorator catches `BadRequestError` and converts context length errors to `OverlongPromptError`. It detects phrases like:
- "this model's maximum context length is"
- "is longer than the model's context length"
- "prompt_too_long"
- "context length"
Authentication and permission errors are re-raised without wrapping.
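The decorator's behavior can be approximated with a standalone sketch. This is assumed logic based on the description above, with hypothetical names; the real decorator matches against `openai.BadRequestError` specifically rather than the broad `Exception` used here:

```python
import asyncio
import functools

class OverlongPromptError(Exception):
    pass

# Phrases listed above that indicate a context-length failure.
CONTEXT_LENGTH_MARKERS = (
    "this model's maximum context length is",
    "is longer than the model's context length",
    "prompt_too_long",
    "context length",
)

def handle_overlong_prompt(fn):
    # Sketch: wrap an async API call and convert context-length errors.
    # A real implementation would catch openai.BadRequestError instead
    # of the broad Exception used here for illustration.
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        try:
            return await fn(*args, **kwargs)
        except Exception as exc:
            message = str(exc).lower()
            if any(marker in message for marker in CONTEXT_LENGTH_MARKERS):
                raise OverlongPromptError(message) from exc
            raise  # auth/permission errors and the rest pass through unchanged
    return wrapper
```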
## See Also