
Overview

The info command displays skill metadata and token usage estimates. It provides quick insight into a skill’s structure, content size, and expected token costs for discovery and activation.

Usage

sklab info [SKILL_PATH] [OPTIONS]

Arguments

SKILL_PATH
Path
Path to the skill directory. Defaults to current directory if not specified.

Options

--json
boolean
default:false
Output metadata as machine-readable JSON (pipe-friendly).
sklab info --json
--field
string
Alias: -f
Extract a single field value from the skill metadata.
Available fields:
  • name - Skill name
  • description - Skill description
  • license - License type
  • compatibility - Compatibility information
  • structure - List of subfolders
  • body_lines - Number of lines in skill body
  • tokens - Token estimates (returns JSON object)
sklab info --field name
sklab info --field tokens

Examples

Display skill info (default format)

sklab info
Output:
┌─ sentiment-analysis ─────────────────────────────┐
│ Description: Analyze text for sentiment and      │
│              emotional tone                      │
│ License:     MIT                                 │
│ Compat:      claude-3.5+                         │
│                                                  │
│ Structure:   scripts/ references/                │
│ Body:        145 lines                           │
│                                                  │
│ Tokens (estimated):                              │
│   Discovery:   ~87 tokens (name + description)   │
│   Activation:  ~1,234 tokens (full SKILL.md)     │
└──────────────────────────────────────────────────┘

Get JSON output

sklab info --json
Output:
{
  "name": "sentiment-analysis",
  "description": "Analyze text for sentiment and emotional tone",
  "license": "MIT",
  "compatibility": "claude-3.5+",
  "structure": ["scripts/", "references/"],
  "body_lines": 145,
  "tokens": {
    "discovery": 87,
    "activation": 1234
  }
}

Extract specific field

sklab info --field name
# Output: sentiment-analysis

sklab info --field description
# Output: Analyze text for sentiment and emotional tone

sklab info --field tokens
# Output: {"discovery": 87, "activation": 1234}

Use in scripts

# Get skill name
SKILL_NAME=$(sklab info --field name)
echo "Processing skill: $SKILL_NAME"

# Check token budget
ACTIVATION_TOKENS=$(sklab info --json | jq '.tokens.activation')
if [ "$ACTIVATION_TOKENS" -gt 2000 ]; then
  echo "Warning: Skill exceeds token budget"
fi

Output Fields

Structure Detection

The command automatically detects skill structure:
  • scripts/ - Contains executable scripts
  • references/ - Contains reference documentation
  • assets/ - Contains images or other assets
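
The detection is simple to approximate. A minimal sketch, assuming it is a plain directory-existence check over the known subfolder names (the demo uses a throwaway temp directory; sklab's actual logic may differ):

```shell
# Sketch: report which of the known subfolders exist in a skill
# directory. Assumption: detection is a directory-existence check.
skill_dir=$(mktemp -d)
mkdir "$skill_dir/scripts" "$skill_dir/references"

detected=""
for sub in scripts references assets; do
  if [ -d "$skill_dir/$sub" ]; then
    detected="$detected$sub/ "
  fi
done
echo "Structure: $detected"   # → Structure: scripts/ references/

rm -rf "$skill_dir"
```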

Token Estimates

Discovery tokens: Estimated tokens for skill name + description
  • Used during skill discovery phase
  • Affects how many skills can be loaded in agent context
Activation tokens: Estimated tokens for full SKILL.md content
  • Used when skill is activated
  • Includes frontmatter, body, and all content
Token estimates use a simplified algorithm and may differ from the counts produced by a specific model's tokenizer. Treat them as approximations for budgeting.
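
As a rough illustration of how such an estimate can be derived, here is the common chars-per-token rule of thumb. This is an assumption for illustration only; sklab's actual algorithm is not documented here:

```shell
# Assumption: ~4 characters per token, a common rule of thumb.
# This is NOT sklab's documented algorithm.
desc="Analyze text for sentiment and emotional tone"
chars=${#desc}
est_tokens=$(( (chars + 3) / 4 ))   # round up
echo "~$est_tokens tokens for the description"
```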

JSON Output Format

{
  "name": "string",
  "description": "string",
  "license": "string",
  "compatibility": "string",
  "structure": ["array", "of", "folders"],
  "body_lines": 0,
  "tokens": {
    "discovery": 0,
    "activation": 0
  }
}

Exit Codes

  • 0: Info retrieved successfully
  • 1: Error occurred (invalid path, missing SKILL.md, unknown field)
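
In scripts, the exit code lets you fail fast before parsing output. A small sketch (the `./my-skill` path is hypothetical):

```shell
# Branch on sklab's exit status: 0 = success, 1 = error.
sklab info ./my-skill > /dev/null 2>&1
status=$?
if [ "$status" -eq 0 ]; then
  echo "skill metadata OK"
else
  echo "skill metadata check failed (exit $status)"
fi
```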

Use Cases

Skill Budget Monitoring

Check if skills fit within token budgets:
sklab info --field tokens

CI/CD Integration

Validate skill metadata in automated workflows:
# GitHub Actions example
steps:
  - name: Check skill tokens
    run: |
      TOKENS=$(sklab info --json | jq '.tokens.activation')
      if [ "$TOKENS" -gt 3000 ]; then
        echo "Error: Skill exceeds token budget"
        exit 1
      fi

Skill Catalog Generation

Generate skill catalogs from metadata:
for dir in skills/*/; do
  sklab info "$dir" --json
done | jq -s '.' > catalog.json   # jq -s collects the objects into one valid JSON array

Notes

Token estimates are based on approximate character-to-token ratios. For precise token counts, use the actual model’s tokenizer.
