
Running audits

Learn how to run comprehensive website audits with squirrel, from quick health checks to deep analysis.
All audit results are stored in a local database at ~/.squirrel/projects/<project-name>, allowing you to generate multiple report formats from a single crawl.

Basic audit command

The simplest way to audit a website:
squirrel audit https://example.com
This command:
  • Crawls up to 100 pages (default surface mode)
  • Analyzes against 230+ rules
  • Displays console output with colored formatting
  • Saves results to the local database
Always include the protocol (https:// or http://) in the URL.

Coverage modes explained

Choose the scan depth that matches your needs:

Quick mode

25 pages - Fastest scan for rapid health checks
squirrel audit https://example.com --coverage quick
# or use the alias
squirrel audit https://example.com -C quick

How it works

  • Crawls seed URLs only (homepage, sitemap entries)
  • No link discovery - doesn’t follow internal links
  • Focuses on primary pages and templates

Best for

  • CI/CD pipelines - fast checks on every deployment
  • Health monitoring - daily status checks
  • Rapid feedback - quick validation during development
Quick mode may miss issues on pages not in sitemaps or seed URLs.

Coverage mode comparison

Mode     Pages  Link Discovery  Use Case               Speed
quick    25     No              CI checks, monitoring  ⚡⚡⚡ Fastest
surface  100    Yes (sampled)   General audits         ⚡⚡ Fast
full     500    Yes (complete)  Deep analysis          ⚡ Thorough

Format options

Choose the output format that fits your workflow:

LLM format

Compact XML/text hybrid optimized for AI agents:
squirrel audit https://example.com --format llm

Features

  • 40% more compact than verbose XML
  • Token-efficient structure
  • Includes summary, issues, broken links, and recommendations
  • Organized by category for easy parsing

Output structure

<audit>
  <summary>
    <score>78</score>
    <grade>C</grade>
    ...
  </summary>
  <issues>
    <category name="Core SEO">
      <issue rule="missing-meta-description" severity="warning" ... />
    </category>
  </issues>
  <broken-links>...</broken-links>
  <recommendations>...</recommendations>
</audit>
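For automation, the health score can be pulled out of a saved llm-format report with standard tools. A minimal sketch, assuming the `<score>` element shown above; the `get_score` helper name is ours, not part of squirrel:

```shell
# Hypothetical helper: extract the <score> value from a saved
# llm-format report using sed (no XML parser required).
get_score() {
  sed -n 's:.*<score>\([0-9]*\)</score>.*:\1:p' "$1" | head -1
}

# Usage (assuming the report was saved with -o audit.xml):
# get_score audit.xml
```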

Format comparison

Format    Best For               Interactive  File Size
llm       AI agents, automation  No           Small
console   Terminal viewing       No           N/A
json      Integration, APIs      No           Medium
html      Stakeholders, sharing  Yes          Large
markdown  Documentation          No           Small

Advanced options

Refresh: Force fresh crawl

Ignore cached results and fetch all pages fresh:
squirrel audit https://example.com --refresh
# or use the alias
squirrel audit https://example.com -r

When to use

  • After making significant changes to the site
  • When you suspect cached data is stale
  • For comparing before/after deployment
Fresh crawls take longer and may impact the target website’s server load.

Resume: Continue an interrupted crawl

Resume a crawl that was stopped or interrupted:
squirrel audit https://example.com --resume

When to use

  • Network interruption during crawl
  • Timeout on large sites
  • Manually stopped audit
The crawler tracks progress in the database, allowing it to pick up where it left off.

Max pages: Adjust the crawl limit

Crawl more or fewer pages than the default:
# Audit up to 200 pages
squirrel audit https://example.com --max-pages 200

# Using alias
squirrel audit https://example.com -m 200

# Maximum is 5000
squirrel audit https://example.com -m 5000

Page limit by coverage

Default limits can be overridden:
  • Quick: 25 pages (can increase to 100)
  • Surface: 100 pages (can increase to 500)
  • Full: 500 pages (can increase to 5000)

Verbose: Show detailed progress

Display detailed progress and debugging information:
squirrel audit https://example.com --verbose
# or use the alias
squirrel audit https://example.com -v

Output includes

  • URLs being crawled
  • Rule execution details
  • Timing information
  • Cache hit/miss status
Use verbose mode when diagnosing issues or monitoring large crawls.

Debug: Enable debug logging

Enable debug-level logging for troubleshooting:
squirrel audit https://example.com --debug
Outputs extensive internal information including HTTP requests, rule evaluations, and database operations.

Trace: Enable performance tracing

Enable performance tracing:
squirrel audit https://example.com --trace
Tracks timing and resource usage for performance optimization.

Project name: Store under a different project

Use a different project name for this audit:
squirrel audit https://example.com --project-name temp-audit
# or use alias
squirrel audit https://example.com -n temp-audit
This stores the audit in ~/.squirrel/projects/temp-audit/ instead of the configured project.

Combining options

Advanced options can be combined:
# Full fresh audit with verbose output
squirrel audit https://example.com \
  --coverage full \
  --max-pages 1000 \
  --refresh \
  --format llm \
  --verbose

Two-step workflow

For maximum efficiency, separate crawling from report generation:
Step 1: Run the audit

Execute the audit once and save to database:
squirrel audit https://example.com --coverage full
The output will include an audit ID (e.g., a1b2c3d4).
Step 2: Generate multiple reports

Export the same audit in different formats without re-crawling:
# LLM format for AI analysis
squirrel report a1b2c3d4 --format llm

# JSON for integration
squirrel report a1b2c3d4 --format json -o report.json

# HTML for stakeholders
squirrel report a1b2c3d4 --format html -o report.html

# Markdown for documentation
squirrel report a1b2c3d4 --format markdown -o report.md
This approach saves time and bandwidth - crawl once, export many times.

Finding audit IDs

List recent audits to find IDs:
squirrel report --list
# or use alias
squirrel report -l
Output:
Recent audits:
  a1b2c3d4 - example.com - 2024-10-11 10:30:00 - Score: 78
  e5f6g7h8 - example.com - 2024-10-10 15:45:00 - Score: 72
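
In scripts, the newest ID can be pulled from this list directly. A sketch assuming the output format shown above (newest audit first, ID as the first field of each indented result line); `latest_audit_id` is a made-up helper name:

```shell
# Hypothetical helper: read `squirrel report --list` output on stdin
# and print the first (most recent) audit ID.
latest_audit_id() {
  awk '/^  [0-9a-z]/ { print $1; exit }'
}

# Usage: squirrel report --list | latest_audit_id
```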

Filtering reports

Generate focused reports by filtering results:
Show only errors or warnings:
# Only errors
squirrel report a1b2c3d4 --severity error --format llm

# Errors and warnings
squirrel report a1b2c3d4 --severity warning --format llm

# Everything (default)
squirrel report a1b2c3d4 --severity all --format llm

Regression detection

Compare audits to detect regressions:
Compare current state against a baseline audit:
squirrel report --diff a1b2c3d4 --format llm
Where a1b2c3d4 is the baseline audit ID.
Diff mode supports console, text, json, llm, and markdown formats. HTML and XML are not supported for diffs.

Diff output

The diff report highlights:
  • New issues that weren’t present in the baseline
  • Fixed issues that were resolved
  • Score changes (improved or degraded)
  • Category changes

Common workflows

Local development

Fast validation during development:
squirrel audit https://localhost:3000 \
  --coverage quick \
  --format console

CI/CD integration

Automated checks on deployment:
# Run audit
squirrel audit https://staging.example.com \
  --coverage quick \
  --format json \
  -o audit-results.json

# Parse results in CI script
SCORE=$(jq '.score' audit-results.json)
if [ "$SCORE" -lt 85 ]; then
  echo "Audit failed: score below threshold"
  exit 1
fi
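
If jq is not available in the CI image, the same gate can be written with sed alone. A sketch assuming the JSON report exposes a top-level numeric "score" field, as the jq example above does; the `check_score` helper name is our own:

```shell
# Hypothetical helper: extract the top-level "score" from a JSON
# report with sed and compare it against a threshold.
check_score() {
  score=$(sed -n 's/.*"score"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p' "$1" | head -1)
  [ "${score:-0}" -ge "$2" ]
}

# Usage: check_score audit-results.json 85 || exit 1
```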

Pre-launch audit

Comprehensive validation before going live:
# Full audit with all details
squirrel audit https://staging.example.com \
  --coverage full \
  --max-pages 1000 \
  --refresh \
  --format llm

# Generate stakeholder report
squirrel report <audit-id> --format html -o launch-audit.html

Ongoing monitoring

Track site health over time:
# Weekly audit
squirrel audit https://example.com \
  --coverage surface \
  --format llm \
  -o "audit-$(date +%Y-%m-%d).txt"

# Compare with last week
squirrel report --diff <last-week-audit-id> --format markdown
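
Weekly scores can also be appended to a small CSV for long-term trend tracking. A sketch assuming a JSON report with a top-level "score" field (as in the CI example earlier); the `log_score` helper and `score-history.csv` file name are our own:

```shell
# Hypothetical helper: append "YYYY-MM-DD,score" to a history file.
log_score() {
  score=$(sed -n 's/.*"score"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p' "$1" | head -1)
  echo "$(date +%Y-%m-%d),$score" >> "$2"
}

# Usage, after `squirrel audit ... --format json -o weekly.json`:
# log_score weekly.json score-history.csv
```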

Best practices

Start with surface

Begin with surface mode to get fast feedback on site structure and templates

Use LLM format for automation

The LLM format is optimized for AI agents and automated workflows

Save audit IDs

Keep track of audit IDs for historical comparisons and regression detection

Refresh before comparisons

Use --refresh when comparing before/after changes to ensure fresh data

Next steps

Audit categories

Learn about all 21 audit categories and their rules

Interpret results

Understand health scores, severity levels, and prioritization
