## Overview

Tank’s security scanning pipeline analyzes skill packages for:

- Prompt injection attacks
- Credential exposure
- Code execution risks
- Supply chain vulnerabilities
- Obfuscation patterns
- Data exfiltration attempts
## Scan Pipeline

The security scanner runs six stages:

### Stage 0: Ingestion & Quarantine
- Downloads tarball from signed URL
- Extracts to isolated temp directory
- Computes SHA-256 hash for each file
- Validates tarball structure
- Rejects encrypted archives
- Rejects suspicious file paths
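The ingestion checks above can be sketched as follows. This is a minimal illustration, not the pipeline's actual code: the function name `safe_extract_and_hash` and its specific path rules are assumptions.

```python
import hashlib
import os
import tarfile

def safe_extract_and_hash(tar_path: str, dest: str) -> dict[str, str]:
    """Extract a tarball into `dest`, rejecting suspicious member paths,
    then return a {relative_path: sha256} map (illustrative sketch)."""
    hashes: dict[str, str] = {}
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            # Reject absolute paths and path traversal before extracting.
            if member.name.startswith("/") or ".." in member.name.split("/"):
                raise ValueError(f"suspicious path: {member.name}")
        tar.extractall(dest)
    for root, _dirs, files in os.walk(dest):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, dest)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes
```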
### Stage 1: File & Structure Validation

- Verifies required files (`skills.json`)
- Enforces file count limit (<1000 files)
- Enforces size limit (<50MB)
- Blocks binary executables
- Blocks compiled Python (`.pyc`, `.pyo`)
- Validates file extensions
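A sketch of how these structure checks might compose; the blocked-extension set beyond `.pyc`/`.pyo` and the exact violation messages are assumptions:

```python
MAX_FILES = 1000           # file count limit from the list above
MAX_SIZE = 50 * 1024 ** 2  # 50MB size limit
BLOCKED_EXTENSIONS = {".pyc", ".pyo", ".exe", ".dll", ".so"}  # illustrative

def validate_structure(files: dict[str, int]) -> list[str]:
    """`files` maps relative path -> size in bytes.
    Returns a list of violations (sketch of the checks above)."""
    issues = []
    if "skills.json" not in files:
        issues.append("missing required file: skills.json")
    if len(files) >= MAX_FILES:
        issues.append(f"too many files: {len(files)}")
    if sum(files.values()) >= MAX_SIZE:
        issues.append("package exceeds 50MB size limit")
    for path in files:
        if any(path.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            issues.append(f"blocked file type: {path}")
    return issues
```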
### Stage 2: Static Code Analysis

The largest stage, with 550+ lines of analysis:

- Python AST analysis
- JavaScript pattern matching
- Dangerous function calls (`eval`, `exec`, `compile`)
- File system operations (suspicious paths)
- Network operations (exfiltration patterns)
- Subprocess spawning
- Dynamic code generation
- Obfuscation detection
- Base64-encoded payloads
- Suspicious imports
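The dangerous-function-call check can be illustrated with Python's `ast` module. This is a minimal sketch of the technique, not the scanner's 550+ lines:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec", "compile"}

def find_dangerous_calls(source: str) -> list[tuple[str, int]]:
    """Return (function name, line number) for direct calls to
    eval/exec/compile found in the source's AST."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((node.func.id, node.lineno))
    return findings
```

AST analysis catches calls that simple string matching misses (e.g. odd whitespace) while ignoring the words `eval` or `exec` inside comments and strings.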
### Stage 3: Prompt Injection Detection
- Prompt injection patterns
- System prompt extraction attempts
- Role confusion attacks
- Instruction override attempts
- Multi-turn attack patterns
- Unicode homoglyphs
- Hidden instructions
- Chain-of-thought manipulation
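Pattern-based detection of the simplest cases might look like the sketch below. These three regexes are invented for illustration; the real detector uses a much larger, tuned rule set:

```python
import re

# Hypothetical patterns, one per attack class named above.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I),  # override
    re.compile(r"you\s+are\s+now\s+(a|an)\s+", re.I),                # role confusion
    re.compile(r"reveal\s+your\s+system\s+prompt", re.I),            # extraction
]

def detect_injection(text: str) -> list[str]:
    """Return the patterns that matched the given text."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```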
### Stage 4: Secrets & Credential Scanning
- API keys (various formats)
- AWS credentials
- GitHub tokens
- Private keys (RSA, SSH)
- Database connection strings
- Password patterns
- Generic secret patterns
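Credential scanning is typically regex-driven. The sketch below uses three well-known credential shapes (AWS access key IDs, GitHub personal access tokens, PEM private key headers) purely as examples of the categories above:

```python
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (RSA |OPENSSH )?PRIVATE KEY-----"),
}

def scan_secrets(text: str) -> list[str]:
    """Return the names of the secret categories detected in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```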
### Stage 5: Supply Chain Analysis
- Dependency analysis
- Known vulnerable packages
- Package typosquatting detection
- Dependency confusion attacks
- Malicious package detection
- Unpinned dependencies
- Deprecated packages
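Typosquatting detection commonly compares dependency names against popular packages by edit distance. A tiny sketch, assuming a hand-picked popular-package list (real detectors use large curated lists):

```python
POPULAR = {"requests", "numpy", "pandas", "django"}  # illustrative subset

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def typosquat_candidates(name: str) -> list[str]:
    """Popular packages one edit away from `name` (but not identical)."""
    return [p for p in POPULAR if 0 < edit_distance(name, p) <= 1]
```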
## Full Scan Endpoint

Endpoint: `POST /api/analyze/scan`

Runs the complete six-stage pipeline. Called automatically by `/api/v1/skills/confirm`.
### Request

- Signed URL to download the tarball
- UUID of the skill version being scanned
- Skill manifest (`skills.json` content)
- Declared permissions from manifest
### Response

- UUID of the scan result record (`null` if storage failed)
- Final verdict: `pass`, `pass_with_notes`, `flagged`, or `fail`
- Deduplicated security findings
- Per-stage execution details
- Total scan duration in milliseconds
- SHA-256 hashes for each file (path → hash)
## Lightweight Security Check

Endpoint: `POST /api/analyze/security`

A fast pattern-matching security check that skips the full tarball download. Useful for pre-publish validation.
### Request

- File content to analyze
- Filename for location reporting (optional)
### Response

- `true` if no issues found
- Detected security issues
- Human-readable summary
- Analysis method (always `static_analysis`)

## Verdict Rules
Verdicts are computed from finding severity:

| Condition | Verdict | Can Publish? |
|---|---|---|
| 1+ critical findings | `fail` | No |
| 4+ high findings | `fail` | No |
| 1-3 high findings | `flagged` | Requires manual review |
| Medium/low findings only | `pass_with_notes` | Yes (with warnings) |
| No findings | `pass` | Yes |
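The table above maps directly onto a small severity-counting function; this sketch assumes findings are represented by their severity strings:

```python
def compute_verdict(severities: list[str]) -> str:
    """Map finding severities to a verdict per the table above."""
    critical = severities.count("critical")
    high = severities.count("high")
    if critical >= 1 or high >= 4:
        return "fail"
    if high >= 1:          # 1-3 high findings
        return "flagged"
    if severities:         # only medium/low remain
        return "pass_with_notes"
    return "pass"
```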
## Audit Score

The audit score (0-10) is computed from 8 weighted checks:

- SKILL.md present (1 pt) — manifest name non-empty
- Description present (1 pt) — manifest description non-empty
- Permissions declared (1 pt) — permissions object not empty
- No security issues (2 pts) — no critical/high findings
- Permission extraction match (2 pts) — extracted ⊆ declared
- File count reasonable (1 pt) — fewer than 100 files
- README documentation (1 pt) — readme field non-empty
- Package size reasonable (1 pt) — tarball under 5 MB
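The 8 checks sum to 10 points. A sketch of the scoring, with parameter names and manifest keys (`name`, `description`, `permissions`, `readme`) assumed for illustration:

```python
def audit_score(manifest: dict, finding_severities: list[str],
                extracted_perms: set, file_count: int,
                tarball_bytes: int) -> int:
    """Sum the 8 weighted checks listed above; returns 0-10."""
    declared = manifest.get("permissions") or {}
    score = 0
    score += 1 if manifest.get("name") else 0           # SKILL.md present
    score += 1 if manifest.get("description") else 0    # description present
    score += 1 if declared else 0                       # permissions declared
    # No critical/high findings (2 pts).
    score += 2 if not any(s in ("critical", "high")
                          for s in finding_severities) else 0
    # Permission extraction match: extracted subset of declared (2 pts).
    score += 2 if extracted_perms <= set(declared) else 0
    score += 1 if file_count < 100 else 0               # file count reasonable
    score += 1 if manifest.get("readme") else 0         # README documentation
    score += 1 if tarball_bytes < 5 * 1024 ** 2 else 0  # under 5 MB
    return score
```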
## Scan Performance
- Average: 3-5 seconds for typical skills
- Timeout: 55 seconds (Vercel function limit)
- Parallelization: none; stages run sequentially (dependency chain)
- Budget-aware: Skips later stages if timeout approaching
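Budget-aware stage skipping can be sketched as a loop that checks remaining time before each stage; the safety margin here is an assumption:

```python
import time

TIMEOUT_MS = 55_000        # Vercel function limit noted above
SAFETY_MARGIN_MS = 5_000   # illustrative margin before the hard timeout

def run_stages(stages, started_at: float) -> list[str]:
    """Run (name, fn) stages in order, skipping the rest once the
    remaining budget drops below the safety margin. Returns skipped names."""
    skipped = []
    for name, fn in stages:
        elapsed_ms = (time.monotonic() - started_at) * 1000
        if elapsed_ms > TIMEOUT_MS - SAFETY_MARGIN_MS:
            skipped.append(name)
            continue
        fn()
    return skipped
```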
## Finding Deduplication

Multiple tools may detect the same issue. The scanner:

- Groups findings by `(type, location)`
- Keeps the highest confidence score
- Boosts confidence for corroborated findings
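A sketch of that deduplication; the finding dict shape and the +0.1 corroboration boost are assumptions:

```python
def dedupe(findings: list[dict]) -> list[dict]:
    """Group findings by (type, location), keep the highest-confidence
    one per group, and boost confidence when multiple tools agree."""
    groups: dict[tuple, list[dict]] = {}
    for f in findings:
        groups.setdefault((f["type"], f["location"]), []).append(f)
    out = []
    for group in groups.values():
        best = max(group, key=lambda f: f["confidence"])
        if len(group) > 1:  # corroborated by more than one tool
            best = {**best, "confidence": min(1.0, best["confidence"] + 0.1)}
        out.append(best)
    return out
```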
## Stored Results

Scan results are stored in PostgreSQL:

- `scan_results` table: verdict, counts, duration, file hashes
- `scan_findings` table: individual findings with details
## Error Handling

### Graceful Degradation

If a stage errors:

- Stage marked as `errored` (not `failed`)
- Remaining stages continue
- Audit score computed without the failed stage's data
- `auditStatus` set to `scan-failed` if the scan crashes
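A sketch of that error-tolerant stage runner, with status strings taken from the list above:

```python
def run_with_degradation(stages) -> dict[str, str]:
    """Run (name, fn) stages; mark an erroring stage as 'errored'
    and keep going so later stages still run."""
    statuses = {}
    for name, fn in stages:
        try:
            fn()
            statuses[name] = "completed"
        except Exception:
            statuses[name] = "errored"  # per the docs: "errored", not "failed"
    return statuses
```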
## Next Steps

- **Skills API**: Publish skills with automatic scanning
- **Audit Score**: Learn how audit scores are computed