Create custom model providers using Rhai scripting
Custom providers allow you to integrate any AI model API with the Circuit Breaker Labs CLI using Rhai scripting. This enables safety testing for proprietary models, internal deployments, or any non-standard API.
Rhai is a simple, embedded scripting language designed for Rust applications. It has a JavaScript-like syntax and is used by the CLI to translate between the standard Circuit Breaker Labs message format and your custom API’s format.
You don’t need to be a Rhai expert to create custom providers. The examples below cover all common use cases.
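If Rhai is new to you, the handful of constructs the examples below rely on are object maps, index/property access, and implicit return of the last expression. A quick orientation sketch:

```rhai
// The Rhai features used throughout these provider scripts:
let msg = #{ "role": "user", "content": "Hello" };  // object map literal
let role = msg["role"];                              // index access
let text = msg.content;                              // property access also works

fn add_one(x) {
    x + 1   // the last expression is the return value — no `return` needed
}
```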
Every custom provider script must implement two functions:
```rhai
// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    // Transform messages into your API's format
    #{
        "model": "your-model-name",
        "messages": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    // Extract the assistant's message from your API's response
    body["choices"][0]["message"]["content"].to_string()
}
```
Both functions are required. The CLI will fail if either is missing or has the wrong signature.
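As a sketch of what a genuinely non-standard API might look like — say one that accepts a single prompt string and returns a flat completion field (all field names here are assumptions, not part of any real spec) — build_request can fold the message array into one string:

```rhai
// Hypothetical API: accepts #{"prompt": String}, returns #{"completion": String}.
// The "prompt", "max_tokens", and "completion" field names are assumptions.
fn build_request(messages) {
    let prompt = "";
    for msg in messages {
        prompt += msg["role"] + ": " + msg["content"] + "\n";
    }
    #{
        "prompt": prompt,
        "max_tokens": 1024
    }
}

fn parse_response(body) {
    body["completion"].to_string()
}
```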
```rhai
// Example OpenAI-compatible provider script
// This script implements the OpenAI chat completions API spec

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "gpt-4o",
        "messages": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["choices"][0]["message"]["content"].to_string()
}
```
```rhai
// Example OpenAI-compatible provider script
// This script implements the OpenAI responses API spec

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "gpt-4o",
        "input": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["output"][0]["content"][0]["text"].to_string()
}
```
```rhai
// Example Ollama provider script
// This script implements the Ollama chat API spec

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "llama3.2",
        "messages": messages,
        "stream": false
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["message"]["content"].to_string()
}
```
```rhai
// Example Ollama provider script
// This script targets Ollama's OpenAI-compatible chat completions endpoint

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "llama3.2",
        "messages": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["choices"][0]["message"]["content"].to_string()
}
```
Create a .rhai file with build_request() and parse_response() functions:
```rhai
fn build_request(messages) {
    // Your request transformation here
    #{
        "your_field": messages
    }
}

fn parse_response(body) {
    // Your response parsing here
    body["your_response_field"].to_string()
}
```
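While you iterate on the script, one defensive pattern worth considering is failing loudly when the API returns an error payload, rather than indexing into a field that isn't there. A sketch, assuming the API reports errors under an "error"/"message" structure (both names are assumptions about your API):

```rhai
fn parse_response(body) {
    // Surface an API-level error instead of failing on a missing field
    // ("error" and "message" are assumed field names)
    if "error" in body {
        throw "API error: " + body["error"]["message"];
    }
    body["your_response_field"].to_string()
}
```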
3. Test the Script
Run a simple evaluation to verify the script works. Authentication is typically handled via HTTP headers populated from environment variables:
```shell
# Set your API key
export CUSTOM_API_KEY="your-api-key"

# Run with custom provider
cbl single-turn \
  --threshold 0.5 \
  --variations 2 \
  --maximum-iteration-layers 2 \
  custom \
  --url https://your-api.com/completions \
  --script ./provider.rhai
```
The CLI automatically includes standard headers. Your API key should be configured according to your API’s authentication requirements (Bearer token, API key header, etc.).
If you need custom headers, they can be set at the HTTP client level. Contact the Circuit Breaker Labs team if you need advanced header customization.