
Requirements

Verifiers requires Python 3.10 or later (up to Python 3.13).

Installation Methods

Install the base package into a uv-managed project:
uv add verifiers

Optional Features

Verifiers includes several optional feature sets that can be installed as extras:

rl

Reinforcement Learning: includes PyTorch, transformers, vLLM, and training utilities.
uv add 'verifiers[rl]'

rg

Reasoning Gym: integration with reasoning-gym environments.
uv add 'verifiers[rg]'

ta

TextArena: text-based game environments.
uv add 'verifiers[ta]'

browser

Browser Automation: web browsing environments with Browserbase.
uv add 'verifiers[browser]'

openenv

OpenEnv: OpenEnv environment integration.
uv add 'verifiers[openenv]'
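One way to check whether an extra's dependencies actually landed in the current environment is to probe for their import names. This is an illustrative sketch, not part of verifiers; `extra_available` and the module list are assumptions (import names for the `rl` packages listed below under Dependencies):

```python
import importlib.util

def extra_available(modules: list[str]) -> bool:
    """Return True if every module in the list can be imported."""
    return all(importlib.util.find_spec(name) is not None for name in modules)

# Import names for the 'rl' extra (import names, not PyPI distribution names)
RL_MODULES = ["torch", "transformers", "vllm"]

if extra_available(RL_MODULES):
    print("verifiers[rl] dependencies look importable")
else:
    print("verifiers[rl] dependencies missing; run: uv add 'verifiers[rl]'")
```

The same pattern works for any of the extras above by swapping in that extra's module names.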

Prime CLI Installation

The Prime CLI provides tools for environment management, evaluation, and training:
# Install Prime CLI
uv tool install prime

# Verify installation
prime --version

# Log in to Prime Intellect
prime login
The Prime CLI is required for:
  • Environment initialization and publishing
  • Running evaluations
  • Managing training jobs
  • Accessing the Environments Hub

Verifying Installation

Verify your Verifiers installation:
import verifiers as vf

print(f"Verifiers version: {vf.__version__}")

# Test with a simple environment
from datasets import Dataset

def load_environment():
    dataset = Dataset.from_list([
        {"prompt": [{"role": "user", "content": "Hello!"}]}
    ])
    
    async def dummy_reward(completion) -> float:
        return 1.0
    
    rubric = vf.Rubric(funcs=[dummy_reward])
    return vf.SingleTurnEnv(dataset=dataset, rubric=rubric)

env = load_environment()
print(f"Environment loaded: {type(env).__name__}")

Setting Up a Workspace

After installing Verifiers, set up a development workspace:
1. Navigate to Your Project Directory

cd ~/dev/my-project
2. Run Workspace Setup

prime lab setup
This creates:
  • Python project structure (if needed)
  • Configuration directory (configs/)
  • Environment directory (environments/)
  • Example configuration files
3. Verify Setup

ls -la
# Should show: configs/, environments/, pyproject.toml, etc.

Configuration

API Endpoints

Configure API endpoints for model inference in configs/endpoints.toml:
# Prime Inference (default)
[prime]
base_url = "https://api.primeintellect.ai/v1"
api_key_env = "PRIME_API_KEY"

# Custom OpenAI-compatible endpoint
[custom]
base_url = "https://your-api.com/v1"
api_key_env = "YOUR_API_KEY"

# Local vLLM server
[local]
base_url = "http://localhost:8000/v1"

Environment Variables

Set up required environment variables:
# Prime Intellect API key
export PRIME_API_KEY="your-prime-api-key"

# OpenAI API key (if using OpenAI models)
export OPENAI_API_KEY="your-openai-api-key"

# Anthropic API key (if using Claude)
export ANTHROPIC_API_KEY="your-anthropic-api-key"
Store environment variables in a .env file and load them with:
export $(cat .env | xargs)
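The xargs approach above breaks on values containing spaces or quotes. A more robust alternative is the shell's allexport mode; this sketch assumes a POSIX shell and a .env of KEY=value lines (the example file it writes is for illustration only):

```shell
# Create an example .env (illustration only)
cat > .env <<'EOF'
PRIME_API_KEY="your-prime-api-key"
EOF

# allexport: every variable assigned while sourcing is exported
set -a
. ./.env
set +a

echo "$PRIME_API_KEY"
```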

Updating Verifiers

# Update to latest version
uv add --upgrade verifiers

# Update to specific version
uv add verifiers==0.1.9

Dependencies

Core Dependencies

Verifiers includes these core dependencies:
  • anthropic - Anthropic API client
  • datasets - Hugging Face datasets
  • openai - OpenAI API client
  • pydantic - Data validation
  • rich - Terminal formatting
  • textual - Terminal UI
  • prime-sandboxes - Sandboxed code execution
  • mcp - Model Context Protocol support
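To see which of these dependencies are present (and at what version) in the active environment, the standard-library importlib.metadata can report installed distributions. A small sketch; the package list mirrors the core dependencies above:

```python
from importlib.metadata import version, PackageNotFoundError

CORE_PACKAGES = ["anthropic", "datasets", "openai", "pydantic", "rich"]

for pkg in CORE_PACKAGES:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```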

RL Dependencies (Optional)

With verifiers[rl]:
  • torch - PyTorch deep learning framework
  • transformers - Hugging Face transformers
  • vllm - Fast LLM inference
  • accelerate - Distributed training
  • peft - Parameter-efficient fine-tuning
  • wandb - Experiment tracking
  • deepspeed - Training optimization
  • flash-attn - Optimized attention

GPU Support

For GPU-accelerated training:
1. Install CUDA Toolkit

Install CUDA 12.1 or later from NVIDIA.
2. Install Verifiers with RL

uv add 'verifiers[rl]'
3. Install Flash Attention (Optional)

For optimized attention:
uv pip install flash-attn --no-build-isolation
4. Verify GPU Access

import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPU count: {torch.cuda.device_count()}")

Troubleshooting

Import errors

Ensure you’re in the correct virtual environment:
# Check Python location
which python

# Reinstall if needed
uv add --reinstall verifiers

CUDA errors

Check your CUDA installation:
# Check CUDA version
nvcc --version

# Check PyTorch CUDA
python -c "import torch; print(torch.version.cuda)"
Reinstall PyTorch with CUDA support if needed.

Broken virtual environment

Create a fresh virtual environment:
uv venv --python 3.12 .venv
source .venv/bin/activate
uv add verifiers

prime command not found

Ensure uv tools are in your PATH:
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"

# Reload shell
source ~/.bashrc

Next Steps

Quick Start Guide

Create your first environment in minutes

Environment Guide

Learn about datasets, rubrics, and tools

Examples

Browse example environments

API Reference

Explore the complete API
