Skill Lab supports two output modes across all commands: rich-formatted console output (default) and machine-readable JSON. This guide covers output format options, use cases, and integration patterns.

Overview

All Skill Lab commands support two output formats:
Format    Use Case                                                 Availability
Console   Human-readable terminal output with colors and tables   Default for all commands
JSON      Machine-readable structured data                        --format json or --json flag
Console output uses Rich for formatting. JSON output follows a consistent schema across commands.

Console Output

Default Behavior

Console output is the default for all commands:
sklab evaluate ./my-skill
sklab trigger ./my-skill
sklab info ./my-skill
sklab list-checks
Console output features:
  • Color-coded severity levels (ERROR=red, WARNING=yellow, INFO=blue)
  • Rich tables for check results and test reports
  • Progress indicators for long-running commands
  • Formatted panels for summaries

Example: Static Evaluation (Console)

sklab evaluate ./my-skill
╭─────────────────────── Skill Lab Evaluation ──────────────────────╮
│ Skill: my-skill                                                   │
│ Path: /path/to/my-skill                                           │
╰───────────────────────────────────────────────────────────────────╯

Quality Score: 87.5/100
Status: PASS
Checks: 26/28 passed
Duration: 45.3ms

                        Failed Checks
┌────────┬──────────┬──────────────────┬──────────────────────┐
│ Status │ Severity │ Check            │ Message              │
├────────┼──────────┼──────────────────┼──────────────────────┤
│ !      │ WARNING  │ content.examples │ No examples found    │
│ i      │ INFO     │ content.lines    │ 152 lines (over 100) │
└────────┴──────────┴──────────────────┴──────────────────────┘

(26 passing checks hidden, run with --verbose to see all)

Summary by Dimension:
  structure: 7/7 passed
  description: 9/9 passed
  naming: 1/1 passed
  content: 9/11 passed

Example: Trigger Testing (Console)

sklab trigger ./my-skill
Trigger Test Report: my-skill
Runtime: claude │ Duration: 12.3s │ 11/13 passed

╭────────────────────────────────────────────────────────╮
│ Test                              Type        Status  │
├───────────────────────────────────────────────────────┤
│ Direct invocation                 explicit    ✓       │
│ Skill name with context           explicit    ✓       │
│ $ prefix invocation               explicit    ✓       │
│ Describe test scenario            implicit    ✓       │
│ Request without naming            implicit    ✓       │
│ Problem statement                 implicit    ✗       │
│ Realistic noisy prompt            contextual  ✓       │
│ Domain-specific context           contextual  ✓       │
│ Multi-step workflow               contextual  ✓       │
│ Unrelated question                negative    ✓       │
│ Similar domain, wrong task        negative    ✓       │
│ Different skill's territory       negative    ✗       │
│ Edge case confusion               negative    ✓       │
╰────────────────────────────────────────────────────────╯

By type: explicit: 3/3 (100%) │ implicit: 2/3 (67%) │ contextual: 3/3 (100%) │ negative: 3/4 (75%)

Example: Skill Info (Console)

sklab info ./my-skill
╭─────────────────────── my-skill ───────────────────────╮
│ Description: Execute pytest test suite for Python     │
│              projects. Discovers and runs all tests.   │
│ License:     MIT                                       │
│ Compat:      claude-code>=1.0.0                        │
│                                                        │
│ Structure:   scripts/ references/                      │
│ Body:        47 lines                                  │
│                                                        │
│ Tokens (estimated):                                    │
│   Discovery:   ~42 tokens (name + description)         │
│   Activation:  ~487 tokens (full SKILL.md)             │
╰────────────────────────────────────────────────────────╯

JSON Output

Commands Supporting JSON

Command          Flag            Schema
sklab evaluate   --format json   EvaluationReport
sklab trigger    --format json   TriggerReport
sklab info       --json          Skill metadata
sklab info uses --json instead of --format json for brevity.

Example: Static Evaluation (JSON)

sklab evaluate ./my-skill --format json
{
  "skill_path": "/path/to/my-skill",
  "skill_name": "my-skill",
  "timestamp": "2026-03-03T14:30:00Z",
  "duration_ms": 45.3,
  "quality_score": 87.5,
  "overall_pass": true,
  "checks_run": 28,
  "checks_passed": 26,
  "checks_failed": 2,
  "results": [
    {
      "check_id": "structure.skill-md-exists",
      "check_name": "SKILL.md Exists",
      "passed": true,
      "severity": "error",
      "dimension": "structure",
      "message": "SKILL.md found",
      "details": null,
      "location": null
    },
    {
      "check_id": "content.examples",
      "check_name": "Has Examples",
      "passed": false,
      "severity": "warning",
      "dimension": "content",
      "message": "Skill body contains no example blocks",
      "details": null,
      "location": null
    }
  ],
  "summary": {
    "by_severity": {
      "error": {"passed": 10, "failed": 0},
      "warning": {"passed": 12, "failed": 1},
      "info": {"passed": 4, "failed": 1}
    },
    "by_dimension": {
      "structure": {"passed": 7, "failed": 0},
      "description": {"passed": 9, "failed": 0},
      "naming": {"passed": 1, "failed": 0},
      "content": {"passed": 9, "failed": 2}
    }
  }
}

Example: Trigger Testing (JSON)

sklab trigger ./my-skill --format json
{
  "skill_path": "/path/to/my-skill",
  "skill_name": "my-skill",
  "timestamp": "2026-03-03T14:30:00Z",
  "duration_ms": 12345.6,
  "runtime": "claude",
  "tests_run": 13,
  "tests_passed": 11,
  "tests_failed": 2,
  "overall_pass": false,
  "pass_rate": 0.846,
  "results": [
    {
      "test_id": "explicit-1",
      "test_name": "Direct invocation",
      "trigger_type": "explicit",
      "passed": true,
      "skill_triggered": true,
      "expected_trigger": true,
      "message": "Test passed: Direct invocation",
      "trace_path": "/path/.skill-lab/traces/explicit-1.jsonl",
      "events_count": 42,
      "exit_code": 0,
      "details": null
    }
  ],
  "summary_by_type": {
    "explicit": {"passed": 3, "failed": 0, "total": 3},
    "implicit": {"passed": 2, "failed": 1, "total": 3},
    "contextual": {"passed": 3, "failed": 0, "total": 3},
    "negative": {"passed": 3, "failed": 1, "total": 4}
  }
}
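As with the evaluation report, the trigger report can be consumed programmatically. A minimal sketch, assuming the field names shown in the JSON above; the list_failed_triggers helper and the trigger.json path are illustrative, not part of Skill Lab:

import json
from pathlib import Path

def list_failed_triggers(report_path: Path) -> list[dict]:
    """Collect failed trigger tests and their trace paths from a saved report."""
    report = json.loads(report_path.read_text())
    return [
        {
            "test": r["test_name"],
            "type": r["trigger_type"],
            "trace": r["trace_path"],
        }
        for r in report["results"]
        if not r["passed"]
    ]

# Usage: sklab trigger ./my-skill --format json --output trigger.json
for failure in list_failed_triggers(Path("trigger.json")):
    print(f"[{failure['type']}] {failure['test']} -> trace: {failure['trace']}")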

Example: Skill Info (JSON)

sklab info ./my-skill --json
{
  "name": "my-skill",
  "description": "Execute pytest test suite for Python projects. Discovers and runs all tests in tests/ directory, generates coverage reports.",
  "license": "MIT",
  "compatibility": "claude-code>=1.0.0",
  "structure": ["scripts/", "references/"],
  "body_lines": 47,
  "tokens": {
    "discovery": 42,
    "activation": 487
  }
}
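The token estimates can feed a simple budget check. A minimal sketch, assuming the JSON shape shown above; the 500-token threshold is an arbitrary illustration, not a recommended limit:

import json
import subprocess
from pathlib import Path

def activation_tokens(skill_path: Path) -> int:
    """Read the estimated activation token count from sklab info --json."""
    result = subprocess.run(
        ["sklab", "info", str(skill_path), "--json"],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)["tokens"]["activation"]

# Example budget check (threshold is illustrative)
BUDGET = 500
tokens = activation_tokens(Path("./my-skill"))
if tokens > BUDGET:
    print(f"Activation estimate {tokens} tokens exceeds budget of {BUDGET}")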

Saving Output to Files

Using --output Flag

Some commands support --output to write directly to a file:
# Static evaluation
sklab evaluate ./my-skill --format json --output report.json

# Trigger testing
sklab trigger ./my-skill --format json --output results.json

Using Shell Redirection

For commands without --output, use shell redirection:
# Redirect stdout to file
sklab info ./my-skill --json > skill-info.json

# Suppress stderr (token estimates)
sklab prompt ./my-skill > skills.xml 2>/dev/null

Integration Patterns

CI/CD Pipeline (GitHub Actions)

name: Skill Quality Report
on: [push, pull_request]

jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      
      - name: Install sklab
        run: pip install skill-lab
      
      - name: Run evaluation
        run: sklab evaluate ./my-skill --format json --output report.json
      
      - name: Upload report
        uses: actions/upload-artifact@v4
        with:
          name: skill-quality-report
          path: report.json
      
      - name: Check quality threshold
        run: |
          SCORE=$(jq -r '.quality_score' report.json)
          if (( $(echo "$SCORE < 80" | bc -l) )); then
            echo "Quality score $SCORE below threshold 80"
            exit 1
          fi

Python Script Integration

import subprocess
import json
from pathlib import Path

def evaluate_skill(skill_path: Path) -> dict:
    """Run sklab evaluate and return JSON report."""
    result = subprocess.run(
        ["sklab", "evaluate", str(skill_path), "--format", "json"],
        capture_output=True,
        text=True,
        check=False
    )
    return json.loads(result.stdout)

def get_failed_checks(report: dict) -> list:
    """Extract failed checks from report."""
    return [
        r for r in report["results"]
        if not r["passed"]
    ]

# Usage
report = evaluate_skill(Path("./my-skill"))
print(f"Quality Score: {report['quality_score']}")
print(f"Overall Pass: {report['overall_pass']}")

failed = get_failed_checks(report)
for check in failed:
    print(f"[{check['severity']}] {check['check_id']}: {check['message']}")

Bash Script Processing

#!/bin/bash
# evaluate-all-skills.sh

for skill_dir in .claude/skills/*/; do
  skill_name=$(basename "$skill_dir")
  echo "Evaluating $skill_name..."
  
  sklab evaluate "$skill_dir" --format json --output "reports/${skill_name}.json"
  
  # Extract quality score
  score=$(jq -r '.quality_score' "reports/${skill_name}.json")
  echo "  Score: $score"
  
  # Check if passed
  passed=$(jq -r '.overall_pass' "reports/${skill_name}.json")
  if [ "$passed" = "false" ]; then
    echo "  ⚠️  Validation failed"
  fi
done

# Generate summary
jq -s '[.[] | {name: .skill_name, score: .quality_score, pass: .overall_pass}]' reports/*.json > summary.json
echo "Summary written to summary.json"

Quality Dashboard

import json
from pathlib import Path
import subprocess

def generate_dashboard(skills_dir: Path):
    """Generate HTML dashboard from skill evaluations."""
    reports = []
    
    for skill_path in skills_dir.glob("*"):
        if not skill_path.is_dir():
            continue
        
        # Run evaluation
        result = subprocess.run(
            ["sklab", "evaluate", str(skill_path), "--format", "json"],
            capture_output=True,
            text=True,
            check=False
        )
        
        # The JSON report is printed even when checks fail (exit code 1)
        if result.stdout:
            report = json.loads(result.stdout)
            reports.append(report)
    
    # Generate HTML
    html = "<html><body><h1>Skill Quality Dashboard</h1><table>"
    html += "<tr><th>Skill</th><th>Score</th><th>Status</th><th>Checks Passed</th></tr>"
    
    for r in sorted(reports, key=lambda x: x["quality_score"], reverse=True):
        status = "✅ PASS" if r["overall_pass"] else "❌ FAIL"
        html += f"<tr>"
        html += f"<td>{r['skill_name']}</td>"
        html += f"<td>{r['quality_score']:.1f}</td>"
        html += f"<td>{status}</td>"
        html += f"<td>{r['checks_passed']}/{r['checks_run']}</td>"
        html += f"</tr>"
    
    html += "</table></body></html>"
    
    Path("dashboard.html").write_text(html)
    print("Dashboard written to dashboard.html")

generate_dashboard(Path(".claude/skills"))

Field Extraction (sklab info)

The sklab info command supports extracting single fields:
# Extract skill name
sklab info ./my-skill --field name

# Extract token estimates
sklab info ./my-skill --field tokens

# Extract description
sklab info ./my-skill --field description
Available fields: name, description, license, compatibility, structure, body_lines, tokens

Example: Shell Script

#!/bin/bash
# get-skill-tokens.sh

SKILL_DIR=$1

# Get discovery tokens
DISCOVERY=$(sklab info "$SKILL_DIR" --field tokens | jq -r '.discovery')

# Get activation tokens  
ACTIVATION=$(sklab info "$SKILL_DIR" --field tokens | jq -r '.activation')

echo "Discovery: $DISCOVERY tokens"
echo "Activation: $ACTIVATION tokens"

Verbose Mode

For console output, use --verbose to show all checks (not just failures):
# Show only failed checks (default)
sklab evaluate ./my-skill

# Show all checks
sklab evaluate ./my-skill --verbose
JSON output always includes all checks regardless of --verbose.
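Because the JSON report always contains every check, a saved report can reproduce the verbose view without re-running the evaluation. A minimal sketch, assuming the EvaluationReport shape shown earlier and a report saved with --output report.json:

import json
from pathlib import Path

# Load a previously saved report
report = json.loads(Path("report.json").read_text())

# Print every check, passing or failing, grouped by dimension
by_dimension: dict[str, list[dict]] = {}
for check in report["results"]:
    by_dimension.setdefault(check["dimension"], []).append(check)

for dimension, checks in sorted(by_dimension.items()):
    print(f"{dimension}:")
    for c in checks:
        mark = "PASS" if c["passed"] else "FAIL"
        print(f"  [{mark}] {c['check_id']}: {c['message']}")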

Exit Codes

All commands return appropriate exit codes for scripting:
Exit Code   Meaning
0           Success (all checks passed)
1           Failure (validation failed or errors occurred)
# Use in scripts
if sklab validate ./my-skill; then
  echo "Validation passed"
else
  echo "Validation failed"
  exit 1
fi
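The same exit codes can drive Python automation through subprocess, complementing the JSON parsing shown earlier. A minimal sketch; the skill path is illustrative:

import subprocess
from pathlib import Path

result = subprocess.run(
    ["sklab", "evaluate", str(Path("./my-skill"))],
    check=False,  # inspect the exit code instead of raising
)

if result.returncode == 0:
    print("All checks passed")
else:
    print("Validation failed or errors occurred")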

Next Steps

Static Analysis

Learn about quality checks and scoring

Skill Export

Export skills as prompts in XML, Markdown, or JSON
