Overview

The prime lab setup command initializes a new workspace for developing environments with Verifiers. It creates the recommended directory structure, downloads starter configurations, and optionally installs prime-rl for local training.

Usage

prime lab setup [OPTIONS]

What It Does

  1. Initializes a Python project (if not already present) using uv init
  2. Installs verifiers via uv add verifiers
  3. Creates directory structure:
    configs/
    ├── endpoints.toml      # API endpoint configuration
    ├── rl/                 # Training configs for Hosted Training
    ├── eval/               # Multi-environment eval configs
    └── gepa/               # Prompt optimization configs
    .prime/
    └── skills/             # Bundled workflow skills
    environments/
    └── AGENTS.md           # Documentation for AI coding agents
    AGENTS.md               # Top-level agent documentation
    CLAUDE.md               # Claude-specific pointer
    
  4. Downloads starter files from the verifiers repository
  5. Optionally installs prime-rl for local training
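The directory-creation step above can be sketched in a few lines of Python. This is a minimal illustration of the scaffolding, not the command's actual implementation; the directory list simply mirrors the tree shown above:

```python
from pathlib import Path

# Directories the setup step creates (mirroring the tree above).
SCAFFOLD_DIRS = [
    "configs/rl",
    "configs/eval",
    "configs/gepa",
    ".prime/skills",
    "environments",
]

def scaffold(root: str) -> list[str]:
    """Create the workspace directory structure under `root`.

    Returns the relative paths that were newly created; existing
    directories are left untouched, so the operation is idempotent.
    """
    created = []
    for rel in SCAFFOLD_DIRS:
        path = Path(root) / rel
        if not path.exists():
            path.mkdir(parents=True)
            created.append(rel)
    return created
```

Because existing directories are skipped, re-running setup in a partially scaffolded workspace fills in only what is missing.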

Options

--prime-rl (flag)
Install prime-rl and download prime-rl training configurations. Use this if you want to train models locally.

--skip-install (flag)
Skip the uv init and uv add verifiers steps. Use when adding to an existing project.

--skip-agents-md (flag)
Skip downloading the AGENTS.md, CLAUDE.md, and environments/AGENTS.md files.

--agents (string)
Comma-separated list of coding agents to scaffold (codex, claude, cursor, opencode, amp). Example: --agents codex,claude

--no-interactive (flag)
Disable interactive prompts for coding agent selection.

Examples

Basic Setup

Initialize a new workspace in an empty directory:
prime lab setup
Output:
No pyproject.toml found, initializing uv project...
Running: uv init
Running: uv add verifiers
Downloaded configs/endpoints.toml from https://github.com/primeintellect-ai/verifiers
Downloaded AGENTS.md from https://github.com/primeintellect-ai/verifiers

Add to Existing Project

Add verifiers to an existing Python project:
# First add verifiers to your project
uv add verifiers

# Then run setup without the install step
prime lab setup --skip-install

Setup with Local Training

Install prime-rl for training models locally:
prime lab setup --prime-rl
This will:
  • Clone and install prime-rl
  • Download prime-rl training configurations
  • Install all environments from environments/ into the prime-rl workspace

Specify Coding Agents

Configure skill directories for specific coding agents:
prime lab setup --agents codex,opencode

Next Steps

After running prime lab setup, you can:
  1. Create an environment:
    prime env init my-env
    
  2. Run an evaluation:
    prime eval run my-env -m gpt-4.1-mini -n 5
    
  3. View results:
    prime eval tui
    
  4. Train a model (if you used --prime-rl):
    uv run prime-rl configs/prime-rl/wiki-search.toml
    

Configuration Files

The setup command downloads several starter configuration files:

Endpoints Configuration

configs/endpoints.toml defines model API endpoints:
[[endpoint]]
endpoint_id = "gpt-4.1-mini"
model = "gpt-4.1-mini"
url = "https://api.openai.com/v1"
key = "OPENAI_API_KEY"

[[endpoint]]
endpoint_id = "claude-sonnet"
model = "claude-sonnet-4-5-20250929"
url = "https://api.anthropic.com"
key = "ANTHROPIC_API_KEY"
api_client_type = "anthropic_messages"

Evaluation Configs

configs/eval/ contains multi-environment evaluation configurations:
# configs/eval/multi-env.toml
model = "openai/gpt-4.1-mini"
num_examples = 50

[[eval]]
env_id = "gsm8k"
num_examples = 100

[[eval]]
env_id = "math-python"

Training Configs

configs/rl/ contains training configurations for use with Hosted Training or prime-rl.

Troubleshooting

uv Not Found

If uv is not installed:
curl -LsSf https://astral.sh/uv/install.sh | sh

Permission Errors

Ensure you have write permissions in the target directory.

Git Conflicts

If you’re in a git repository, the setup may create files that conflict with existing files. Review changes carefully before committing.
