Thank you for your interest in contributing to Skill Lab! This guide will help you get started with development and understand our contribution process.
## Development Setup

### Prerequisites
Before you begin, ensure you have:
- Python 3.10 or higher
- Git
- A text editor or IDE
### Installation
**Clone the repository:**

```bash
git clone https://github.com/8ddieHu0314/Skill-Lab.git
cd Skill-Lab
```
**Create a virtual environment:**

```bash
python -m venv venv

# Windows
venv\Scripts\activate

# macOS/Linux
source venv/bin/activate
```
**Install in editable mode with dev dependencies.** This installs:
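The exact command depends on how the optional dependency groups are declared in `pyproject.toml`; assuming a `dev` extra, the usual editable install is:

```bash
pip install -e ".[dev]"
```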
- Core dependencies (typer, rich, pyyaml)
- Dev tools (pytest, pytest-cov, mypy, ruff)
- Optional dependencies (anthropic for LLM-based test generation)
**Verify installation:**

```bash
sklab --help
pytest tests/ -v
```
## Development Workflow

### Running Tests
```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=skill_lab --cov-report=html

# Run specific test file
pytest tests/test_checks.py -v

# Run specific test
pytest tests/test_checks.py::test_skill_md_exists -v
```
Coverage reports are generated at `htmlcov/index.html` when using `--cov-report=html`.
### Code Quality
Before submitting a PR, ensure your code passes all quality checks:
```bash
# Type checking (strict mode enabled)
mypy src/

# Linting
ruff check src/

# Format check
ruff format --check src/

# Auto-format
ruff format src/
```
All CI checks must pass before your PR can be merged. Run these commands locally to catch issues early.
### Code Style Guidelines
- **Type hints**: Use type hints for all function parameters and return values
- **Union syntax**: Use Python 3.10+ union syntax (`T | None` instead of `Optional[T]`)
- **Naming**: Follow PEP 8 naming conventions
- **Function size**: Keep functions focused and small
- **Docstrings**: Add docstrings for public functions and classes
- **Line length**: Maximum 100 characters
- **Immutability**: Use `@dataclass(frozen=True)` for data models (see the sketch below)
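A minimal sketch that follows these conventions; the `CheckOutcome` model and `summarize` helper are illustrative only, not part of the codebase:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # immutable data model
class CheckOutcome:
    """Result of a single check run."""

    check_id: str
    passed: bool
    message: str | None = None  # 3.10+ union syntax, not Optional[str]


def summarize(outcome: CheckOutcome) -> str:
    """Return a one-line summary like 'category.my-check: PASS'."""
    status = "PASS" if outcome.passed else "FAIL"
    return f"{outcome.check_id}: {status}"
```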
## Adding New Features

### Adding a New Static Check
Skill Lab supports two types of static checks:
#### 1. Behavioral Checks (Hand-written Classes)
For complex validation logic, create a new check class:
**Create your check class**

Add your check to the appropriate file in `src/skill_lab/checks/static/`:

- `structure.py` - File structure and organization
- `schema.py` - Frontmatter field validation
- `naming.py` - Naming conventions
- `content.py` - Content quality and requirements
```python
from typing import ClassVar

from skill_lab.checks.base import StaticCheck
from skill_lab.core.registry import register_check
from skill_lab.core.models import Severity, EvalDimension, CheckResult, Skill


@register_check
class MyNewCheck(StaticCheck):
    check_id: ClassVar[str] = "category.my-check"
    check_name: ClassVar[str] = "My Check"
    description: ClassVar[str] = "Description of what this check does"
    severity: ClassVar[Severity] = Severity.WARNING
    dimension: ClassVar[EvalDimension] = EvalDimension.CONTENT
    spec_required: ClassVar[bool] = False  # True if required by Agent Skills spec

    def run(self, skill: Skill) -> CheckResult:
        # Use helper method to check for metadata
        if result := self._require_metadata(skill, "perform this check"):
            return result

        # Your validation logic here
        if some_condition:
            return self._pass("Check passed")
        return self._fail(
            "Check failed",
            details={"reason": "specific details"},
            location=self._skill_md_location(skill),
        )
```
**Add tests**

Add test cases to `tests/test_checks.py`:

```python
def test_my_new_check():
    # Test passing case
    skill = create_test_skill_with_valid_data()  # placeholder fixture helper
    result = MyNewCheck().run(skill)
    assert result.passed

    # Test failing case
    skill = create_test_skill_with_invalid_data()  # placeholder fixture helper
    result = MyNewCheck().run(skill)
    assert not result.passed
```
**Update documentation**

Document your new check in the appropriate documentation files.
#### 2. Schema-Based Checks (Declarative)

For simple field validation, add a `FieldRule` to `FRONTMATTER_SCHEMA` in `src/skill_lab/checks/static/schema.py`:
```python
FieldRule(
    field_name="my_field",
    required=True,
    expected_type="str",
    max_length=100,
    pattern=r"^[a-z-]+$",
    check_id="schema.my-field-valid",
    check_name="My Field Valid",
    description="Validates my_field format",
)
```
Schema-based checks are automatically registered; there is no need to write a class or use `@register_check`.
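The generated check is identified by the `check_id` you set on the rule (here `schema.my-field-valid`), which is also the id the tests use for registry lookup; see Testing Conventions below.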
### Adding a New Trace Check Handler

For trace analysis checks:
```python
from skill_lab.tracechecks.registry import register_trace_handler
from skill_lab.tracechecks.handlers.base import TraceCheckHandler
# TraceCheckResult must also be in scope; import it from wherever it is
# defined in the codebase.


@register_trace_handler("my_check_type")
class MyTraceHandler(TraceCheckHandler):
    def run(self, check, analyzer, project_dir) -> TraceCheckResult:
        # Validate required fields: _require_field returns the field value,
        # or a TraceCheckResult if the field is missing
        pattern = self._require_field(check, "pattern")
        if isinstance(pattern, TraceCheckResult):
            return pattern

        # Your analysis logic
        if analyzer.some_condition(pattern):
            return self._pass(check, "Check passed")
        return self._fail(check, "Check failed")
```
## Pull Request Process

**Create a branch**

```bash
git checkout -b feature/my-feature
```
Use descriptive branch names:

- `feature/` - New features
- `fix/` - Bug fixes
- `docs/` - Documentation updates
- `refactor/` - Code refactoring
**Make your changes**
- Follow the existing code style
- Add tests for new functionality
- Update documentation as needed
- Keep commits focused and atomic
**Run checks locally**

```bash
pytest tests/ -v
mypy src/
ruff check src/
ruff format src/
```
**Commit your changes**

```bash
git add .
git commit -m "Add feature: description"
```
Write clear commit messages:
- Use present tense (“Add feature” not “Added feature”)
- Be concise but descriptive
- Reference issues when applicable
**Push and create PR**

```bash
git push origin feature/my-feature
```
Then create a Pull Request on GitHub with:
- Clear title describing the change
- Description of what changed and why
- Any relevant issue numbers
- Screenshots or examples if applicable
## Testing Conventions

### Fixture Structure

Test fixtures are located in `tests/fixtures/skills/`. Each subdirectory represents a mock skill with a `SKILL.md` file:
```text
tests/fixtures/skills/
├── valid-skill/
│   └── SKILL.md
├── missing-frontmatter/
│   └── SKILL.md
└── invalid-yaml/
    └── SKILL.md
```
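For reference, a minimal passing fixture might look like the following; the exact frontmatter fields are governed by `FRONTMATTER_SCHEMA`, so treat `name` and `description` here as assumptions:

```markdown
---
name: valid-skill
description: Minimal fixture used to exercise the passing path of static checks.
---

# Valid Skill

Instructions for the mock skill go here.
```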
### Testing Different Check Types
```python
# Schema-based checks: use registry lookup
def test_schema_check():
    check_class = _get_check("schema.name-exists")
    result = check_class().run(skill)


# Behavioral checks: import directly
from skill_lab.checks.static.naming import NameMatchesDirectoryCheck


def test_naming_check():
    result = NameMatchesDirectoryCheck().run(skill)
```
### Test Organization

Test files mirror the source structure:

- `test_checks.py` - Static check tests
- `test_parsers.py` - Parser tests
- `test_evaluator.py` - Evaluator tests
- `test_cli.py` - CLI command tests
- `test_triggers.py` - Trigger tests
- `test_trace_evaluator.py` - Trace analysis tests
## Reporting Issues
When reporting issues, please include:
- **Environment details**: OS, Python version, sklab version
- **Reproduction steps**: Clear steps to reproduce the issue
- **Expected behavior**: What you expected to happen
- **Actual behavior**: What actually happened
- **Error messages**: Full error messages and stack traces
- **Sample files**: A minimal `SKILL.md` that demonstrates the issue (if applicable)
## Questions and Support
Feel free to:
- Open an issue for questions or discussions
- Check existing issues and pull requests
- Review the Architecture documentation for technical details
## License
By contributing to Skill Lab, you agree that your contributions will be licensed under the MIT License.