
Overview

The generate command uses a large language model (LLM) to automatically generate trigger test cases for a skill. It reads the SKILL.md file and creates test cases across all four trigger types: explicit, implicit, contextual, and negative.

Usage

sklab generate [SKILL_PATH] [OPTIONS]

Arguments

SKILL_PATH
Path
Path to the skill directory. Defaults to the current directory if not specified.

Options

--model
string
Alias: -m. Anthropic model ID to use for generation. Defaults to claude-haiku-4-5-20251001. Can also be set via the SKLAB_MODEL environment variable.
sklab generate --model claude-sonnet-4-5-20250929
--force
boolean
default: false
Overwrite an existing triggers.yaml file without prompting.
sklab generate --force

Prerequisites

Install anthropic package

The anthropic package is an optional dependency:
pip install skill-lab[generate]

Set API key

Set the ANTHROPIC_API_KEY environment variable:
export ANTHROPIC_API_KEY=sk-ant-...
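
Both prerequisites can be verified before running generation. The sketch below is an illustration of the documented requirements, not skill-lab's actual code; `check_generate_prerequisites` is a hypothetical helper name:

```python
import importlib.util
import os

def check_generate_prerequisites(env=os.environ):
    """Return a list of problems mirroring the documented error messages."""
    problems = []
    # The 'anthropic' package is an optional extra: pip install skill-lab[generate]
    if importlib.util.find_spec("anthropic") is None:
        problems.append("The 'anthropic' package is required for test generation.")
    # Generation calls the Anthropic API, so a key must be in the environment.
    if not env.get("ANTHROPIC_API_KEY"):
        problems.append("ANTHROPIC_API_KEY environment variable is not set.")
    return problems
```

An empty list means both prerequisites are satisfied.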

Examples

Generate tests for current directory

sklab generate

Generate tests with specific model

sklab generate --model claude-sonnet-4-5-20250929

Force overwrite existing tests

sklab generate --force

Generate for specific skill

sklab generate ./skills/data-analysis

Output

Generates .skill-lab/tests/triggers.yaml with approximately 10-12 test cases:
  • 2-3 explicit tests (direct invocation with $ prefix)
  • 2-3 implicit tests (scenario descriptions)
  • 2-3 contextual tests (realistic prompts with noise)
  • 2-3 negative tests (should not trigger)
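
Once the file is loaded, the per-type breakdown above can be tallied in a few lines. This is an illustrative snippet, not part of skill-lab; it assumes test cases are dicts with a "type" field, as shown in the Generated Test Structure section:

```python
from collections import Counter

def count_by_type(test_cases):
    """Tally trigger tests by type: explicit, implicit, contextual, negative."""
    return Counter(tc["type"] for tc in test_cases)

# Hypothetical sample matching the documented structure.
cases = [
    {"name": "direct-skill-invocation", "type": "explicit"},
    {"name": "scenario-description", "type": "implicit"},
    {"name": "real-world-context", "type": "contextual"},
    {"name": "unrelated-task", "type": "negative"},
]
```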

Example Output

Generating trigger tests...

Generated 12 trigger tests:
  explicit: 3
  implicit: 3
  contextual: 3
  negative: 3

Tokens: 1,234 in + 2,456 out = 3,690 ($0.0123)

Written to: .skill-lab/tests/triggers.yaml
Run sklab trigger to execute them.

Configuration Priority

Model selection follows this priority:
  1. --model flag
  2. SKLAB_MODEL environment variable
  3. Default: claude-haiku-4-5-20251001
# Using flag (highest priority)
sklab generate --model claude-sonnet-4-5-20250929

# Using environment variable
export SKLAB_MODEL=claude-sonnet-4-5-20250929
sklab generate

# Using default
sklab generate
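
The priority rules above can be expressed compactly. The following is a sketch of the documented behavior, not skill-lab's actual implementation (`resolve_model` is a hypothetical name):

```python
import os

DEFAULT_MODEL = "claude-haiku-4-5-20251001"

def resolve_model(flag_value=None, env=os.environ):
    """Resolve the model ID: --model flag, then SKLAB_MODEL, then the default."""
    if flag_value:                  # 1. --model flag (highest priority)
        return flag_value
    # 2. SKLAB_MODEL environment variable, 3. built-in default
    return env.get("SKLAB_MODEL") or DEFAULT_MODEL
```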

Token Usage and Cost

The command displays token usage after generation:
  • Input tokens (skill content)
  • Output tokens (generated tests)
  • Total tokens
  • Estimated cost (if available for the model)
Cost estimates are based on Anthropic’s published pricing and may not reflect actual charges. Check your Anthropic dashboard for accurate billing.
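
As an illustration, a per-token estimate works like the sketch below. The prices passed in are hypothetical placeholders, not Anthropic's actual rates; consult the pricing page for real figures:

```python
def estimate_cost(tokens_in, tokens_out, usd_per_mtok_in, usd_per_mtok_out):
    """Estimate USD cost from token counts and per-million-token prices."""
    return (tokens_in * usd_per_mtok_in + tokens_out * usd_per_mtok_out) / 1_000_000

# Example with made-up prices of $1/MTok input and $5/MTok output.
cost = estimate_cost(1234, 2456, 1.0, 5.0)
```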

Generated Test Structure

The generated triggers.yaml file follows this structure:
test_cases:
  - name: "direct-skill-invocation"
    type: "explicit"
    prompt: "Use $skill-name to..."
    
  - name: "scenario-description"
    type: "implicit"
    prompt: "I need to analyze..."
    
  - name: "real-world-context"
    type: "contextual"
    prompt: "Our team has been working on..."
    
  - name: "unrelated-task"
    type: "negative"
    prompt: "Show me the sales report..."

Exit Codes

  • 0: Tests generated successfully or user aborted
  • 1: Error occurred (missing API key, invalid path, etc.)

Error Handling

Missing anthropic package

Error: The 'anthropic' package is required for test generation.
Install it with: pip install skill-lab[generate]

Missing API key

Error: ANTHROPIC_API_KEY environment variable is not set.
Set it with: export ANTHROPIC_API_KEY=sk-...

Existing file (without --force)

Trigger tests already exist at .skill-lab/tests/triggers.yaml. Overwrite? [y/N]:

Notes

The LLM generates test cases based on the skill's description and content. Review and edit the generated tests to ensure they match your testing requirements.
Generation typically takes 5-15 seconds, depending on the model and skill complexity.
