The Verifiers library provides a comprehensive CLI for building, evaluating, and managing RL environments for LLMs. Commands are organized under the prime namespace.

Command Structure

The CLI is organized into logical command groups:

  • Workspace Management
  • Environment Management
  • Evaluation

Legacy Commands

For backward compatibility, legacy vf-* commands are still available. See Legacy vf-* Commands for details.

Installation

Install the prime CLI using uv:
uv tool install prime
For development, add the verifiers library, which includes the CLI tooling, to your project:
uv add verifiers

Getting Help

All commands support the --help flag for detailed usage information:
prime --help
prime lab setup --help
prime eval run --help

Quick Start

Here’s a typical workflow:
# Set up a new workspace
prime lab setup

# Create a new environment
prime env init my-env

# Install the environment
prime env install my-env

# Run an evaluation
prime eval run my-env -m gpt-4.1-mini -n 10

# View results
prime eval tui

# Publish to Hub
prime env push my-env

Configuration

The CLI uses configuration files in the configs/ directory:
  • configs/endpoints.toml - API endpoint configuration for model providers
  • configs/eval/*.toml - Multi-environment evaluation configurations
  • configs/rl/*.toml - Training configurations for Hosted Training
  • configs/gepa/*.toml - Prompt optimization configurations
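As a rough illustration of what an endpoint entry in configs/endpoints.toml might look like, here is a minimal sketch. The table name and field names (model, url, key) are assumptions for illustration only — the actual schema is defined by the CLI, so check the evaluation command pages before copying this:

```toml
# Hypothetical endpoint entry — field names are illustrative, not authoritative.
[gpt-4.1-mini]
model = "gpt-4.1-mini"                # model identifier sent to the provider
url = "https://api.openai.com/v1"     # OpenAI-compatible base URL
key = "OPENAI_API_KEY"                # name of the env var holding the API key
```

The pattern of referencing an environment-variable name (rather than embedding the key itself) keeps secrets out of version-controlled config files.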

Environment Variables

API keys are configured via environment variables:
  • PRIME_API_KEY - Prime Inference API key (default provider)
  • OPENAI_API_KEY - OpenAI API key
  • ANTHROPIC_API_KEY - Anthropic API key
  • OPENROUTER_API_KEY - OpenRouter API key
See individual command pages for provider-specific configuration.
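Keys are typically exported in your shell session (or a .env file) before running evaluations. A minimal sketch, with placeholder values:

```shell
# Export keys for the providers you plan to use (values are placeholders).
export PRIME_API_KEY="your-prime-key"
export OPENAI_API_KEY="your-openai-key"

# Confirm a key is visible to child processes before running an evaluation:
echo "${PRIME_API_KEY:+PRIME_API_KEY is set}"
# → PRIME_API_KEY is set
```

Only the variables for providers you actually call need to be set; a missing key for an unused provider is harmless.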
