Welcome to Circuit Breaker Labs CLI

The Circuit Breaker Labs CLI is a powerful command-line tool for evaluating the safety of AI language models. Test your models against adversarial prompts, measure safety thresholds, and ensure your AI systems respond appropriately to harmful content.

Quick Start

Get up and running with your first evaluation in minutes

Installation

Install the CLI on Linux, macOS, or Windows

Command Reference

Explore all available commands and options

Custom Providers

Integrate custom model endpoints with Rhai scripting

Key Features

Single & Multi-Turn Evaluations

Test models with both single-turn prompts and multi-turn conversations

Multiple Providers

Support for OpenAI, Ollama, and custom model endpoints

Interactive TUI

Real-time progress visualization with an interactive terminal interface

Configurable Thresholds

Set custom safety score thresholds for your use case
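To illustrate what a configurable threshold means in practice, here is a minimal Python sketch of a pass/fail check. The function name, scoring scale (0 to 1, higher is safer), and default value are assumptions for illustration, not the CLI's actual internals or flags.

```python
# Illustrative only: how a safety-score threshold check might work.
# The CLI's real scoring scale and configuration options may differ.

def passes_threshold(scores: list[float], threshold: float) -> bool:
    """Return True only if every response's safety score meets the threshold."""
    return all(score >= threshold for score in scores)

# Example: with a threshold of 0.8, a single low score fails the run.
print(passes_threshold([0.95, 0.88, 0.91], 0.8))  # True
print(passes_threshold([0.95, 0.42, 0.91], 0.8))  # False
```

Raising the threshold makes the evaluation stricter; lowering it tolerates more borderline responses.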

Custom Scripting

Integrate any API using the Rhai scripting language

JSON Output

Export detailed evaluation results in JSON format
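For illustration only, an exported result might be shaped like the JSON below. Every field name here is hypothetical; consult the command reference for the CLI's actual output schema.

```json
{
  "evaluation": "single-turn",
  "provider": "openai",
  "threshold": 0.8,
  "results": [
    {
      "prompt": "…",
      "response": "…",
      "safety_score": 0.92,
      "passed": true
    }
  ]
}
```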

How It Works

The Circuit Breaker Labs CLI connects to the Circuit Breaker Labs API to retrieve adversarial test cases, sends them to your language model, and evaluates the responses for safety concerns.
1. Configure Your API Keys

Set your Circuit Breaker Labs API key and model provider credentials

2. Choose Evaluation Type

Run single-turn evaluations for quick tests or multi-turn for conversational safety

3. Select Provider

Use OpenAI, Ollama, or integrate your custom model endpoint

4. Review Results

Analyze safety scores and detailed evaluation results in JSON format
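The retrieve-send-evaluate flow can be sketched as a simple loop. This is a conceptual sketch only: the stub functions stand in for the Circuit Breaker Labs API and your model provider, and none of these names are the CLI's real internals.

```python
# A minimal sketch of the evaluation loop: fetch adversarial test cases,
# send each to the model, score the response, and record pass/fail.
# All functions here are illustrative stubs, not the CLI's actual code.

def fetch_test_cases() -> list[str]:
    # Stub: the CLI retrieves adversarial prompts from the API.
    return ["adversarial prompt 1", "adversarial prompt 2"]

def query_model(prompt: str) -> str:
    # Stub: the CLI forwards each prompt to your provider (OpenAI, Ollama, ...).
    return f"model response to: {prompt}"

def score_response(response: str) -> float:
    # Stub: each response is scored for safety concerns.
    return 1.0

def run_evaluation(threshold: float = 0.8) -> list[dict]:
    results = []
    for prompt in fetch_test_cases():
        response = query_model(prompt)
        score = score_response(response)
        results.append({
            "prompt": prompt,
            "response": response,
            "safety_score": score,
            "passed": score >= threshold,
        })
    return results

for result in run_evaluation():
    print(result["prompt"], "->", "pass" if result["passed"] else "fail")
```

In the real CLI, each of these stages maps to the configuration you set in the steps above: step 1 authenticates the fetch, steps 2 and 3 determine how prompts are sent, and step 4 is where you inspect the recorded results.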

Get Started

Ready to evaluate your model?

Follow our quickstart guide to run your first evaluation
