The sklab prompt command exports one or more skills into prompt formats suitable for agent platforms. This enables you to inject skill metadata (name, description, location) into agent system prompts for dynamic skill discovery.

Overview

Skill Lab supports three output formats:

| Format   | Use Case                     | Example Platform                |
|----------|------------------------------|---------------------------------|
| XML      | Structured tags for Claude   | Claude API, Claude Code         |
| Markdown | Human-readable documentation | LangChain, OpenAI Assistants    |
| JSON     | Machine-readable metadata    | Custom agents, API integrations |
XML is the default and recommended format for Claude-based agents.

Basic Usage

Export Single Skill

# XML format (default)
sklab prompt ./my-skill

# Markdown format
sklab prompt ./my-skill --format markdown

# JSON format
sklab prompt ./my-skill --format json
Output goes to stdout for easy piping. Token estimates are printed to stderr.

Export Multiple Skills

# Export several skills at once
sklab prompt ./skill-a ./skill-b ./skill-c

# Redirect output to a file (the shell expands the glob)
sklab prompt ./skill-* > skills-prompt.xml

Export Current Directory

# Export current directory as a skill
sklab prompt

Output Formats

XML (Default)

Best for: Claude API, Claude Code, structured system prompts
sklab prompt ./skill-a ./skill-b
<available_skills>
<skill>
<name>run-python-tests</name>
<description>Execute pytest test suite for Python projects. Discovers and runs all tests in tests/ directory, generates coverage reports, and displays results.</description>
<location>/path/to/.claude/skills/run-python-tests</location>
</skill>
<skill>
<name>analyze-code-quality</name>
<description>Run static analysis tools (ruff, mypy, pylint) on Python codebases and generate quality reports.</description>
<location>/path/to/.claude/skills/analyze-code-quality</location>
</skill>
</available_skills>
Usage in Claude system prompt:
<system>
You are a helpful coding assistant.

The following skills are available:

<available_skills>
<skill>
<name>run-python-tests</name>
<description>Execute pytest test suite for Python projects.</description>
<location>file:///path/to/.claude/skills/run-python-tests</location>
</skill>
</available_skills>

When the user's request matches a skill, invoke it by loading the SKILL.md from the location.
</system>
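Because the `<available_skills>` block is plain XML, it can also be consumed programmatically. A minimal sketch using Python's standard-library `ElementTree`, against a literal sample of the output shown above (the path is the same placeholder used in the examples):

```python
import xml.etree.ElementTree as ET

# Sample of the XML that `sklab prompt` writes to stdout
skills_xml = """<available_skills>
<skill>
<name>run-python-tests</name>
<description>Execute pytest test suite for Python projects.</description>
<location>/path/to/.claude/skills/run-python-tests</location>
</skill>
</available_skills>"""

# Build a name -> location map for skill lookup
root = ET.fromstring(skills_xml)
skills = {
    skill.findtext("name"): skill.findtext("location")
    for skill in root.findall("skill")
}
print(skills["run-python-tests"])  # /path/to/.claude/skills/run-python-tests
```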

Markdown

Best for: Human-readable documentation, LangChain prompts, OpenAI Assistants
sklab prompt ./skill-a ./skill-b --format markdown
## Available Skills

### run-python-tests
**Description:** Execute pytest test suite for Python projects. Discovers and runs all tests in tests/ directory, generates coverage reports, and displays results.
**Location:** `/path/to/.claude/skills/run-python-tests`

### analyze-code-quality
**Description:** Run static analysis tools (ruff, mypy, pylint) on Python codebases and generate quality reports.
**Location:** `/path/to/.claude/skills/analyze-code-quality`

JSON

Best for: Machine-readable metadata, API integrations, custom skill loaders
sklab prompt ./skill-a ./skill-b --format json
{
  "available_skills": [
    {
      "name": "run-python-tests",
      "description": "Execute pytest test suite for Python projects. Discovers and runs all tests in tests/ directory, generates coverage reports, and displays results.",
      "location": "/path/to/.claude/skills/run-python-tests"
    },
    {
      "name": "analyze-code-quality",
      "description": "Run static analysis tools (ruff, mypy, pylint) on Python codebases and generate quality reports.",
      "location": "/path/to/.claude/skills/analyze-code-quality"
    }
  ]
}
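Since the JSON output is a single object with an `available_skills` array, downstream tools can parse it directly. A quick sketch against a literal sample of the schema above (descriptions abbreviated):

```python
import json

# Sample of the JSON that `sklab prompt --format json` writes to stdout
sample = """{
  "available_skills": [
    {"name": "run-python-tests",
     "description": "Execute pytest test suite for Python projects.",
     "location": "/path/to/.claude/skills/run-python-tests"},
    {"name": "analyze-code-quality",
     "description": "Run static analysis tools on Python codebases.",
     "location": "/path/to/.claude/skills/analyze-code-quality"}
  ]
}"""

catalog = json.loads(sample)
names = [s["name"] for s in catalog["available_skills"]]
print(names)  # ['run-python-tests', 'analyze-code-quality']
```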

Token Estimates

Skill Lab estimates discovery tokens (name + description) for all exported skills:
sklab prompt ./skill-a ./skill-b
Output (to stderr):
# 2 skill(s), ~87 discovery tokens
This represents the token cost of injecting skill metadata into a system prompt.
Discovery tokens are what agents see when choosing skills. Activation tokens (the full SKILL.md) are loaded only when the skill is invoked. Use sklab info ./my-skill to see both discovery and activation token estimates.
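Skill Lab's exact tokenizer is not documented here, but a common rule of thumb (~4 characters per token) gives a ballpark estimate of the discovery cost before you export:

```python
def estimate_discovery_tokens(name: str, description: str) -> int:
    """Rough discovery-token estimate using the ~4 chars/token heuristic.

    This approximates what `sklab prompt` reports on stderr; sklab's
    own estimator may count differently.
    """
    return (len(name) + len(description)) // 4

tokens = estimate_discovery_tokens(
    "run-python-tests",
    "Execute pytest test suite for Python projects. Discovers and runs "
    "all tests in tests/ directory, generates coverage reports, and "
    "displays results.",
)
```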

Integration Examples

Claude API System Prompt

import subprocess
import anthropic

# Generate skill prompt (check=True raises CalledProcessError if sklab fails)
result = subprocess.run(
    ["sklab", "prompt", "./my-skill", "--format", "xml"],
    capture_output=True,
    text=True,
    check=True,
)
skills_xml = result.stdout

# Inject into system prompt
system_prompt = f"""
You are a helpful coding assistant.

The following skills are available:

{skills_xml}

When the user's request matches a skill, invoke the skill by reading SKILL.md from the location.
"""

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Run the tests"}]
)

LangChain with Markdown

import subprocess
from langchain.prompts import PromptTemplate

# Generate skill prompt (Markdown)
result = subprocess.run(
    ["sklab", "prompt", "./skill-a", "./skill-b", "-f", "markdown"],
    capture_output=True,
    text=True
)
skills_md = result.stdout

# Create LangChain prompt template
template = f"""
You are a helpful assistant. The following skills are available:

{skills_md}

User request: {{user_input}}

If the request matches a skill, use that skill. Otherwise, respond normally.
"""

prompt = PromptTemplate(
    input_variables=["user_input"],
    template=template
)
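One caveat with the f-string approach above: PromptTemplate treats every `{` and `}` in the template as a variable marker, so if a skill description ever contains a literal brace it must be doubled before interpolation. A small defensive helper (hypothetical, not part of sklab):

```python
def escape_braces(text: str) -> str:
    """Double literal braces so PromptTemplate doesn't parse them as variables."""
    return text.replace("{", "{{").replace("}", "}}")

# A description containing "{tool}" would otherwise break the template
safe_md = escape_braces("Run {tool} on the codebase")
print(safe_md)  # Run {{tool}} on the codebase
```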

Custom Skill Loader (JSON)

import glob
import json
import subprocess
from pathlib import Path

def load_skills_metadata():
    """Load skill metadata from the sklab prompt command."""
    # subprocess does not expand shell globs, so expand ./skills/* in Python
    skill_dirs = sorted(glob.glob("./skills/*"))
    result = subprocess.run(
        ["sklab", "prompt", *skill_dirs, "--format", "json"],
        capture_output=True,
        text=True,
        check=True,
    )
    data = json.loads(result.stdout)
    return data["available_skills"]

def find_skill_by_name(name: str):
    """Find skill location by name."""
    skills = load_skills_metadata()
    for skill in skills:
        if skill["name"] == name:
            return Path(skill["location"])
    return None

def load_skill_content(name: str) -> str:
    """Load full SKILL.md content for a skill."""
    skill_path = find_skill_by_name(name)
    if skill_path:
        skill_md = skill_path / "SKILL.md"
        return skill_md.read_text(encoding="utf-8")
    raise ValueError(f"Skill not found: {name}")

# Usage
skill_content = load_skill_content("run-python-tests")
print(skill_content)

Dynamic Skill Discovery

#!/bin/bash
# generate-agent-prompt.sh

# Discover all skill directories in .claude/skills/
# (an array is safe even if a path contains spaces)
mapfile -t SKILLS < <(find .claude/skills -mindepth 1 -maxdepth 1 -type d)

# Export as XML
sklab prompt "${SKILLS[@]}" > agent-skills.xml

echo "Exported ${#SKILLS[@]} skills to agent-skills.xml"

Use Cases

1. Multi-Agent Systems

Export skills for multiple specialized agents:
# Agent 1: Testing specialist
sklab prompt ./testing-skills/* > testing-agent-skills.xml

# Agent 2: Deployment specialist  
sklab prompt ./deployment-skills/* > deploy-agent-skills.xml

# Agent 3: Generalist
sklab prompt ./all-skills/* > general-agent-skills.xml

2. Skill Marketplace

Generate a JSON catalog of available skills:
sklab prompt ./marketplace/skills/* --format json > skill-catalog.json

3. Documentation Generation

Create human-readable skill documentation:
sklab prompt ./project-skills/* --format markdown > SKILLS.md

4. CI/CD Skill Injection

# .github/workflows/deploy-agent.yml
name: Deploy Agent with Skills

on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      
      - name: Install sklab
        run: pip install skill-lab
      
      - name: Export skills
        run: sklab prompt ./.claude/skills/* > skills.xml
      
      - name: Deploy agent
        run: ./deploy-agent.sh --skills skills.xml

Token Budgeting

Discovery Cost (System Prompt)

Each skill adds ~40-100 tokens to the system prompt (name + description).
sklab prompt ./skill-a ./skill-b ./skill-c
Output (to stderr):
# 3 skill(s), ~210 discovery tokens
Example: 10 skills = ~800 discovery tokens

Activation Cost (Full SKILL.md)

When a skill is invoked, the agent loads the full SKILL.md (~200-2000 tokens). Use sklab info to see activation costs:
sklab info ./my-skill
╭─────────────────── my-skill ───────────────────╮
│ Description: Execute pytest test suite        │
│ License:     MIT                               │
│                                                │
│ Structure:   scripts/ references/              │
│ Body:        47 lines                          │
│                                                │
│ Tokens (estimated):                            │
│   Discovery:   ~42 tokens (name + description) │
│   Activation:  ~487 tokens (full SKILL.md)     │
╰────────────────────────────────────────────────╯
Exporting too many skills (50+) can bloat the system prompt and degrade agent performance. Consider skill categorization or lazy loading for large skill libraries.
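One lazy-loading approach: keep skills in category subdirectories and inject only the categories relevant to the current agent. A sketch of the selection step (the directory layout and category names are hypothetical; pass the returned paths to `sklab prompt` instead of exporting the whole library):

```python
from pathlib import Path

def select_skill_dirs(skills_root: Path, categories: list[str]) -> list[Path]:
    """Pick only skill directories under the requested category folders,
    e.g. skills/testing/run-python-tests."""
    selected = []
    for category in categories:
        category_dir = skills_root / category
        if category_dir.is_dir():
            # Each subdirectory of a category is one skill
            selected.extend(
                sorted(p for p in category_dir.iterdir() if p.is_dir())
            )
    return selected
```

This keeps the system prompt bounded by the agent's role rather than by the size of the skill library.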

Format Comparison

| Feature           | XML | Markdown | JSON |
|-------------------|-----|----------|------|
| Human-readable    | ⚠️  | ✅       | ⚠️   |
| Claude-optimized  | ✅  | ❌       | ❌   |
| Machine-parseable | ✅  | ❌       | ✅   |
| Compact           | ⚠️  | ✅       | ⚠️   |
| Documentation     | ❌  | ✅       | ❌   |
| API integration   | ⚠️  | ❌       | ✅   |
Recommendations:
  • Claude agents: Use XML
  • Documentation: Use Markdown
  • Programmatic access: Use JSON

Next Steps

Skill Inspector

View detailed skill metadata and token estimates with sklab info

Static Analysis

Validate skill quality before exporting
