The `runSkill()` function is the main entry point for executing a Warden analysis. It runs a skill definition against an event context, analyzing each code hunk with Claude and aggregating the findings.
## Function Signature
`runSkill()` takes three arguments:

- The skill to execute. Load from disk using `resolveSkillAsync()`.
- Event context containing repository info and code changes. Build using `buildEventContext()`.
- Execution options (see below).
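Sketched as a TypeScript signature (the parameter names here are assumptions; only the `SkillRunnerOptions` and `SkillReport` type names come from this page):

```typescript
// Sketch only - parameter names are assumptions, not the library's
// actual identifiers.
function runSkill(
  skill: Skill,                 // loaded via resolveSkillAsync()
  context: EventContext,        // built via buildEventContext()
  options?: SkillRunnerOptions,
): Promise<SkillReport>
```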
## SkillRunnerOptions
Control skill execution behavior:

- Anthropic API key. Falls back to the `WARDEN_ANTHROPIC_API_KEY` env var. If not provided, uses the Claude Code subscription (requires `claude login`).
- Model ID to use (e.g., `'claude-sonnet-4-20250514'`). Defaults to the SDK's latest Sonnet model.
- Maximum agentic turns (API round-trips) per hunk analysis.
- Lines of context to include around each hunk.
- Process files in parallel. Set to `false` for sequential processing.
- Max concurrent file analyses when `parallel=true`.
- Delay in milliseconds between batch starts, for rate limiting.
- Path to the `claude` CLI. Required in CI environments where the CLI isn't in `PATH`.
- Abort controller for cancellation (e.g., on SIGINT).
- Progress callbacks for UI updates.
- Retry configuration for transient API failures.
- Max retries for auxiliary Haiku calls (extraction repair, merging, fix evaluation).
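Put together as a TypeScript interface, the options might look like the sketch below. Every property name here is an assumption inferred from the descriptions above; check the package's actual typings for the real identifiers.

```typescript
// Hypothetical shape of SkillRunnerOptions - property names are guesses
// based on the descriptions above, not the library's actual identifiers.
interface SkillRunnerOptions {
  apiKey?: string;                   // falls back to WARDEN_ANTHROPIC_API_KEY
  model?: string;                    // e.g. 'claude-sonnet-4-20250514'
  maxTurns?: number;                 // agentic turns per hunk analysis
  contextLines?: number;             // context lines around each hunk
  parallel?: boolean;                // false = sequential processing
  concurrency?: number;              // max concurrent files when parallel=true
  batchDelayMs?: number;             // delay between batch starts
  claudeCliPath?: string;            // path to the claude CLI (CI environments)
  abortController?: AbortController; // cancellation, e.g. on SIGINT
}

// A conservative configuration of the kind a CI job might use.
const ciOptions: SkillRunnerOptions = {
  model: 'claude-sonnet-4-20250514',
  parallel: true,
  concurrency: 2,
  batchDelayMs: 1000,
};
console.log(ciOptions.concurrency); // 2
```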
## Return Value
Returns a `SkillReport` with findings, usage stats, and metadata:

- Skill name.
- Auto-generated summary (e.g., "security-review: Found 3 issues (2 high, 1 medium)").
- Array of findings. Each finding has:
  - `id`: Short unique identifier
  - `severity`: `'high' | 'medium' | 'low'`
  - `confidence`: `'high' | 'medium' | 'low'` (optional)
  - `title`: Short description
  - `description`: Detailed explanation
  - `location`: File path and line range (optional)
  - `suggestedFix`: Diff patch (optional)
  - `verification`: How the issue was verified (optional)
- Token usage and cost:
  - `inputTokens`: Input tokens (non-cached portion)
  - `outputTokens`: Generated tokens
  - `cacheReadInputTokens`: Cache hits
  - `cacheCreationInputTokens`: Cache writes
  - `costUSD`: Total cost in USD
- Total execution time in milliseconds.
- Model used for analysis.
- Per-file breakdown of findings, timing, and usage.
- Files skipped due to chunking patterns.
- Number of hunks that failed to analyze (SDK errors, API errors).
- Number of hunks where findings extraction failed.
- Usage from auxiliary Haiku calls, keyed by agent name:
  - `extraction`: JSON repair
  - `merge`: Cross-location merging
  - `fix_gate`: Suggested-fix quality checks
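For illustration, here is a hand-built report fragment and the kind of severity tally that feeds the auto-generated summary. The finding fields follow the list above; the top-level property names (`skill`, `findings`, `usage`, `durationMs`) are assumptions.

```typescript
// Illustrative only - a hand-written report fragment, not real output.
const report = {
  skill: 'security-review',
  findings: [
    { id: 'SQLI-1', severity: 'high' as const, title: 'SQL injection in query builder' },
    { id: 'LOG-2', severity: 'low' as const, title: 'Sensitive value written to logs' },
  ],
  usage: { inputTokens: 12000, outputTokens: 900, costUSD: 0.08 },
  durationMs: 42000,
};

// Tally findings by severity, as the auto-generated summary does.
const bySeverity = report.findings.reduce<Record<string, number>>((acc, f) => {
  acc[f.severity] = (acc[f.severity] ?? 0) + 1;
  return acc;
}, {});
console.log(bySeverity); // { high: 1, low: 1 }
```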
## Example: Basic Usage
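A minimal end-to-end sketch, assuming the three functions are exported from a single package entry point (the import path and argument shapes are assumptions):

```typescript
import { resolveSkillAsync, buildEventContext, runSkill } from 'warden'; // hypothetical import path

// Load a skill definition from disk and build the event context.
const skill = await resolveSkillAsync('./skills/security-review');    // illustrative path
const context = await buildEventContext({ repoPath: process.cwd() }); // argument shape assumed

const report = await runSkill(skill, context, {
  model: 'claude-sonnet-4-20250514',
});

console.log(report.summary);
for (const finding of report.findings) {
  console.log(`[${finding.severity}] ${finding.title}`);
}
```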
## Example: With Progress Callbacks
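The options only say that the runner accepts progress callbacks for UI updates; the callback names below are purely illustrative, assuming a `skill` and `context` already loaded via `resolveSkillAsync()` and `buildEventContext()`:

```typescript
const report = await runSkill(skill, context, {
  // Hypothetical callback names - the real SkillRunnerOptions typings
  // define the actual progress hook shape.
  onProgress: {
    onFileStart: (file: string) => console.log(`analyzing ${file}…`),
    onFileDone: (file: string) => console.log(`done ${file}`),
  },
});
```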
## Example: With Abort Controller
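A sketch of cancellation on SIGINT, assuming a `skill` and `context` already loaded via `resolveSkillAsync()` and `buildEventContext()` (the option name for the controller is an assumption):

```typescript
const controller = new AbortController();
process.on('SIGINT', () => controller.abort()); // cancel on Ctrl-C

try {
  const report = await runSkill(skill, context, { abortController: controller });
  console.log(report.summary);
} catch (err) {
  console.error('run cancelled or failed:', err);
}
```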
## analyzeFile()
For finer control, analyze a single file's hunks:
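A sketch of a single-file call; the argument shape is an assumption based on the description above:

```typescript
// Hypothetical call shape - check the analyzeFile() typings for the
// real parameters.
const fileResult = await analyzeFile(skill, context, 'src/db/query.ts', {
  contextLines: 3,
});
console.log(fileResult.findings.length);
```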
## Error Handling

`runSkill()` throws `SkillRunnerError` on failures:
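A typical guard, assuming `SkillRunnerError` is exported alongside `runSkill()` (import path hypothetical):

```typescript
import { runSkill, SkillRunnerError } from 'warden'; // hypothetical import path

try {
  const report = await runSkill(skill, context);
  console.log(report.summary);
} catch (err) {
  if (err instanceof SkillRunnerError) {
    console.error(`skill run failed: ${err.message}`);
  } else {
    throw err; // not a runner failure, rethrow
  }
}
```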
## Performance Tips
### Parallel Processing

By default, files are analyzed in parallel with `concurrency=5`:
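Spelling out those defaults explicitly (property names are assumptions):

```typescript
const report = await runSkill(skill, context, {
  parallel: true,  // default; set false for sequential processing
  concurrency: 5,  // default max concurrent file analyses
});
```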
### Rate Limiting

Add delays between batches to respect API rate limits:
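For example (the delay property name is an assumption):

```typescript
const report = await runSkill(skill, context, {
  concurrency: 2,     // fewer simultaneous analyses
  batchDelayMs: 1000, // 1s between batch starts
});
```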
### Context Lines

Reduce context lines for faster analysis of large diffs:
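For example (property name assumed):

```typescript
const report = await runSkill(skill, context, {
  contextLines: 1, // fewer surrounding lines per hunk means smaller prompts
});
```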
## Related Functions

- `buildEventContext()` - Create event context for analysis
- `renderSkillReport()` - Format findings for output
- `matchTrigger()` - Filter skills by trigger rules