
audit-website Skill

The audit-website skill enables AI agents to perform comprehensive website audits using the squirrelscan CLI. It checks 230+ rules across 21 categories including SEO, performance, security, accessibility, and content quality.

What This Skill Does

This skill gives agents the ability to:
  • Audit websites against 230+ rules in 21 categories
  • Generate LLM-optimized reports with health scores and recommendations
  • Detect broken links (internal and external)
  • Analyze meta tags and structured data
  • Identify technical issues like redirect chains and page speed problems
  • Check security for leaked secrets, HTTPS usage, and security headers
  • Validate accessibility including alt text and color contrast
  • Compare audits to detect regressions over time

Audit Categories

The skill's checks span 21 categories, covering areas such as:
  • Meta tags, titles, descriptions
  • Canonical URLs
  • Open Graph tags
  • Keyword optimization
  • Content structure
  • Broken links
  • Redirect chains
  • Page speed
  • Mobile-friendliness
  • Crawlability
  • Page load time
  • Resource usage
  • Caching strategies
  • Image optimization
  • Leaked secrets
  • HTTPS usage
  • Security headers
  • Mixed content
  • Heading structure (H1-H6)
  • Image alt text
  • Content analysis
  • E-E-A-T signals
  • Accessibility
  • Usability
  • Mobile optimization
  • Schema markup
  • Legal compliance
  • Social media tags
  • Local SEO
  • Video optimization

When to Use This Skill

Use the audit-website skill when you need to:
  • Analyze overall website health
  • Debug technical SEO issues
  • Find and fix broken links
  • Validate meta tags and structured data
  • Generate comprehensive audit reports
  • Compare site health before and after changes
  • Improve performance, accessibility, or security
  • Prepare for a product launch
  • Monitor website quality over time
Re-audit frequently to ensure your website remains healthy. The skill is designed for iterative improvement.

Installation

Prerequisites

This skill requires the squirrel CLI to be installed and accessible in your PATH.
The skill will not work without squirrel installed. Follow the installation steps below.

macOS and Linux Installation

1. Install squirrel

Run the installation script:
curl -fsSL https://squirrelscan.com/install | bash
This will:
  • Download the latest binary
  • Install to ~/.local/share/squirrel/releases/{version}/
  • Create a symlink at ~/.local/bin/squirrel
  • Initialize settings at ~/.squirrel/settings.json
2. Add to PATH

If ~/.local/bin is not in your PATH, add it:
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"

# Reload your shell
source ~/.bashrc  # or source ~/.zshrc
3. Verify Installation

Check that squirrel is working:
squirrel --version
You should see version information printed.

Windows Installation

1. Run PowerShell Installer

Open PowerShell and run:
irm https://squirrelscan.com/install.ps1 | iex
This will:
  • Download the latest Windows binary
  • Install to %LOCALAPPDATA%\squirrel\
  • Add squirrel to your PATH
2. Restart Terminal

Restart your terminal for the PATH changes to take effect.
3. Verify Installation

Check that squirrel is working:
squirrel --version

Configuration

Project Setup

Before auditing, initialize a squirrel project:
# Initialize with a project name
squirrel init --project-name my-website

# Or use short form
squirrel init -n my-website

# Force overwrite existing config
squirrel init -n my-website --force
This creates a squirrel.toml configuration file.
The project name identifies the audit database. All audits for a project share the same database, enabling comparison over time.

Configuration File

The squirrel.toml file controls audit behavior:
squirrel.toml
[project]
name = "my-website"

[crawl]
max_pages = 200
coverage = "surface"  # quick | surface | full

[audit]
min_score = 85

Project Database

Audits are stored in ~/.squirrel/projects/<project-name>. This enables:
  • Historical comparison
  • Regression detection
  • Report regeneration in different formats

Usage

Basic Audit Workflow

The standard audit process:
# Surface scan (default coverage, 100 pages)
squirrel audit https://example.com --format llm
Always use --format llm for AI agent workflows. This format is optimized for token efficiency and provides exhaustive, structured output.

Coverage Modes

Choose the right coverage for your needs:
| Mode | Pages | Behavior | Use Case |
| --- | --- | --- | --- |
| quick | 25 | Seed + sitemaps only | CI checks, fast monitoring |
| surface | 100 | One sample per URL pattern | General audits (default) |
| full | 500 | Crawl everything | Deep analysis, pre-launch |
Surface mode intelligence: surface mode detects URL patterns like /blog/{slug} and crawls only one sample per pattern, which makes it efficient for sites with many similar pages.
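As a rough mental model (not the CLI's actual algorithm), pattern detection can be pictured as collapsing slug-like path segments so that many similar URLs reduce to one pattern:

```shell
# Illustration only: collapse blog-post slugs into a {slug} pattern.
# The sed rule and URLs are hypothetical examples.
urls='https://example.com/blog/hello-world
https://example.com/blog/second-post
https://example.com/about'

patterns=$(printf '%s\n' "$urls" | sed -E 's#(/blog/)[^/]+$#\1{slug}#' | sort -u)
printf '%s\n' "$patterns"
# → https://example.com/about
# → https://example.com/blog/{slug}
```

Both blog posts map to the same pattern, so a surface scan would only need to fetch one of them.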

Output Formats

The skill supports multiple formats:
# LLM-optimized (best for agents)
squirrel audit https://example.com --format llm

# Human-readable console
squirrel audit https://example.com --format console

# JSON for processing
squirrel audit https://example.com --format json

# HTML report
squirrel audit https://example.com --format html --output report.html
Agents should always use --format llm. It’s 40% more compact than XML and optimized for AI consumption.

Workflow: Audit → Fix → Re-Audit

The skill follows a systematic improvement workflow:
1. Initial Audit

Run a surface scan to identify issues:
squirrel audit https://example.com --format llm
This generates:
  • Overall health score (0-100)
  • Category breakdowns
  • Specific issues with affected URLs
  • Actionable recommendations
2. Fix Issues

Address all critical errors and warnings:
  • Code fixes: Meta tags, structured data, templates
  • Content fixes: Alt text, headings, descriptions
  • Technical fixes: Broken links, redirects, performance
Don’t stop after code fixes. Content changes (*.md, *.mdx) are equally important.
3. Re-Audit

Verify improvements with a fresh audit:
squirrel audit https://example.com --refresh --format llm
The --refresh flag ignores cache to ensure accurate results.
4. Iterate Until Complete

Continue fixing and re-auditing until:
  • Score reaches target (typically 85-95+)
  • Only issues requiring human judgment remain
A site is only considered complete and fixed when scores are above 95 (Grade A) with --coverage full.

Score Targets

Set improvement goals based on starting score:
| Starting Score | Target Score | Expected Work |
| --- | --- | --- |
| < 50 (Grade F) | 75+ (Grade C) | Major fixes required |
| 50-70 (Grade D) | 85+ (Grade B) | Moderate fixes needed |
| 70-85 (Grade C) | 90+ (Grade A) | Polish and refinement |
| > 85 (Grade B+) | 95+ (Grade A+) | Fine-tuning |
Don’t stop until the target is reached.

Parallelizing Fixes with Subagents

The skill leverages subagents to fix issues in parallel, dramatically reducing completion time.

When to Parallelize

Parallelize when:
  • 5+ files need the same fix type
  • Fixes have no dependencies on each other
  • Files are independent (not importing each other)

Common Parallelizable Fixes

| Issue Type | Parallelizable | Approach |
| --- | --- | --- |
| Image alt text | ✅ Yes | Spawn subagents per file batch |
| Heading hierarchy | ✅ Yes | Spawn subagents per file batch |
| Short descriptions | ✅ Yes | Spawn subagents per file batch |
| HTTP→HTTPS links | ✅ Yes | Bulk sed/replace |
| Meta tags/titles | ❌ No | Shared components |
| Structured data | ❌ No | Single source of truth |
| Broken links | ❌ No | Requires manual review |

Parallel Execution Pattern

Multiple Task tool calls in ONE message = parallel execution. Sequential calls = slower.
Example: fixing missing alt text in 12 files. Spawn 3 subagents in a single message, each with a batch of 4 files. The prompt for the first batch:
Fix missing image alt text in these files:
- content/blog/post-1.md
- content/blog/post-2.md
- content/blog/post-3.md
- content/blog/post-4.md

Find images without alt text (![](path) or <img without alt=).
Add descriptive alt text based on image filename and context.
Do not ask for confirmation.

Batch Sizing Guidelines

  • Optimal: 3-5 files per subagent
  • Maximum: 10 files per subagent
  • Total agents: Spawn 2-4 subagents for parallel work
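The batch split itself is mechanical; a quick sketch using xargs (file names are hypothetical placeholders):

```shell
# Split a flat worklist into batches of 4; each output line is one
# subagent's batch. File names are hypothetical placeholders.
files='content/blog/post-1.md
content/blog/post-2.md
content/blog/post-3.md
content/blog/post-4.md
content/blog/post-5.md
content/blog/post-6.md'

printf '%s\n' "$files" | xargs -n 4
# → content/blog/post-1.md content/blog/post-2.md content/blog/post-3.md content/blog/post-4.md
# → content/blog/post-5.md content/blog/post-6.md
```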

Subagent Prompt Structure

Effective subagent prompts are:
  1. Focused - Specific file list
  2. Clear - Exact pattern to find
  3. Actionable - Precise fix instructions
  4. Autonomous - “Do not ask for confirmation”
Fix [issue type] in the following files:
- path/to/file1.md
- path/to/file2.md

Pattern: [what to find]
Fix: [what to change]

Do not ask for confirmation. Make all changes and report what was fixed.
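Prompts in this shape can be stamped out mechanically, one per batch; a sketch using hypothetical file names and a batch size of 2:

```shell
# Emit one subagent prompt per batch of 2 files.
# Paths and the issue description are hypothetical placeholders.
prompts=$(printf '%s\n' docs/a.md docs/b.md docs/c.md \
  | xargs -n 2 \
  | while read -r batch; do
      printf 'Fix missing image alt text in the following files:\n'
      printf -- '- %s\n' $batch  # unquoted on purpose: one file per line
      printf '\nDo not ask for confirmation. Make all changes and report what was fixed.\n'
    done)
printf '%s\n' "$prompts"
```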

Advanced Options

Custom Page Limits

# Audit more pages than default
squirrel audit https://example.com --max-pages 200

# Maximum allowed
squirrel audit https://example.com --max-pages 5000

Force Fresh Crawl

# Ignore cache, fetch all pages fresh
squirrel audit https://example.com --refresh

Resume Interrupted Crawl

# Continue from where it stopped
squirrel audit https://example.com --resume

Debugging

# Verbose output
squirrel audit https://example.com --verbose

# Debug logging
squirrel audit https://example.com --debug

# Performance tracing
squirrel audit https://example.com --trace

Regression Detection

Compare audits to detect regressions:
# Compare against a baseline audit ID
squirrel report --diff <audit-id> --format llm

# Compare against a baseline domain
squirrel report --regression-since example.com --format llm
Diff mode works with: console, text, json, llm, markdown
Diff mode does not support html or xml formats.

Complete Workflow Example

A real-world audit and fix workflow:
1. Setup

# Initialize project
cd ~/projects/my-saas
squirrel init -n my-saas
2. Initial Audit

# Surface audit of production site
squirrel audit https://my-saas.com --format llm
Results:
  • Score: 68/100 (Grade D)
  • 43 errors, 89 warnings
  • Categories: SEO issues, missing alt text, heading problems
3. Plan Fixes

Identify parallelizable work:
  • 12 files missing alt text → 3 subagents
  • 18 files with heading hierarchy issues → 3 subagents
  • 7 files with short descriptions → 1 subagent
  • Meta tag fixes → Main agent (shared components)
4. Execute Fixes

Spawn subagents in parallel for content fixes. Main agent handles shared component updates.
5. Re-Audit

squirrel audit https://my-saas.com --refresh --format llm
Results:
  • Score: 87/100 (Grade B)
  • 4 errors, 12 warnings
6. Final Round

Fix remaining issues.
squirrel audit https://my-saas.com --refresh --format llm
Results:
  • Score: 96/100 (Grade A+)
  • 0 errors, 2 warnings (require human judgment)

Completion Criteria

A site audit is complete when:
  • ✅ All errors fixed
  • ✅ All warnings fixed or documented as requiring human review
  • ✅ Re-audit confirms improvements
  • ✅ Before/after comparison shown
  • ✅ Score above 95 with full coverage
Don’t stop early. Fix ALL issues, not just the obvious ones. Continue until score targets are met.

Troubleshooting

Command Not Found

Problem: squirrel: command not found
Solution:
# 1. Install squirrel
curl -fsSL https://squirrelscan.com/install | bash

# 2. Add to PATH
export PATH="$HOME/.local/bin:$PATH"

# 3. Verify
squirrel --version

Permission Denied

Problem: Permission error when running squirrel
Solution:
chmod +x ~/.local/bin/squirrel

Invalid URL

Problem: Audit fails with a URL error
Solution: Include the protocol in the URL:
# ✗ Wrong
squirrel audit example.com

# ✓ Correct
squirrel audit https://example.com
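When wrapping the audit in a script, the protocol check can be automated; a defensive sketch (defaulting a bare domain to HTTPS is this example's assumption, not documented CLI behavior):

```shell
# Normalize a possibly bare domain before passing it to the audit.
url="example.com"
case "$url" in
  http://*|https://*) ;;      # already has a protocol; leave it alone
  *) url="https://$url" ;;    # assume HTTPS for bare domains
esac
printf '%s\n' "$url"
# → https://example.com
```

The normalized $url can then be passed to squirrel audit.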

Slow Performance

Problem: Audit takes too long
Solution: Use quick coverage or reduce max pages:
# Fast health check
squirrel audit https://example.com --coverage quick --format llm

# Or limit pages
squirrel audit https://example.com --max-pages 50 --format llm

Additional Resources

squirrelscan Docs

Full documentation including rule references

Rule Reference

Detailed explanations of all 230+ rules

CLI Help

Run squirrel audit --help for command reference

Website

squirrelscan homepage and updates

Rule Documentation

Look up specific rules:
https://docs.squirrelscan.com/rules/{category}/{rule_id}
Example:
https://docs.squirrelscan.com/rules/links/external-links
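A small helper (not part of the CLI; purely a convenience) can assemble these URLs from the template above:

```shell
# Build a rule's documentation URL from its category and rule id.
rule_url() {
  printf 'https://docs.squirrelscan.com/rules/%s/%s\n' "$1" "$2"
}

rule_url links external-links
# → https://docs.squirrelscan.com/rules/links/external-links
```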

Report Commands

# List recent audits
squirrel report --list

# Generate report from audit ID
squirrel report <audit-id> --format llm

# Filter by severity
squirrel report <audit-id> --severity error

# Filter by categories
squirrel report <audit-id> --category seo,performance

Config Commands

# Show current config
squirrel config show

# Set config value
squirrel config set project.name my-site

# Show config file path
squirrel config path

# Validate config
squirrel config validate

Self-Management Commands

# Update squirrel
squirrel self update

# Health check
squirrel self doctor

# Version info
squirrel self version

# Shell completions
squirrel self completion

Best Practices

  1. Always use LLM format for agent workflows
  2. Start with surface audits before full scans
  3. Re-audit after fixes to verify improvements
  4. Parallelize content fixes using subagents
  5. Don’t stop early - reach score targets
  6. Audit frequently to catch issues early
  7. Test before deployment by auditing staging/preview environments
