This guide provides practical workflows for common accessibility testing scenarios, from quick checks during development to comprehensive audits before deployment.

Common Use Cases

1. Quick Page Audit

When: During development, when you need fast feedback on a page
Prompt:
Analyze the accessibility of https://my-site.com using all available tools
What happens:
  • Runs analyze-mixed (axe-core + Pa11y + Lighthouse in parallel)
  • Returns combined and deduplicated issues
  • Groups results by WCAG criterion
  • Includes enriched human context
Expected time: ~5-10 seconds
Expected result:
  • Issues grouped by severity
  • WCAG criterion explanations
  • Lighthouse score (0-100)
  • Affected user groups identified
  • Priority and remediation effort

2. Deep Analysis with a Specific Tool

When: You need tool-specific metadata or faster results
Prompt:
Use only axe-core to analyze https://my-site.com and give me a detailed report
What happens:
  • Runs analyze-with-axe only
  • Returns axe-specific metadata (impact, target selectors)
  • Faster than mixed analysis
Advantage: Faster execution, plus tool-specific confidence scores and metadata
Alternative tools:
Use Pa11y to analyze https://my-site.com
Use Lighthouse to analyze https://my-site.com

3. Local or Development HTML Analysis

When: Testing HTML before it’s deployed, or testing components in isolation
Prompt:
Check this HTML for accessibility issues:
<form>
  <input type="text" placeholder="Name">
  <button>Submit</button>
</form>
What happens:
  • Analyzes the HTML snippet directly
  • No need to deploy or run a local server
  • Works with analyze-mixed or any individual tool
Typical issues found:
  • Input missing an associated <label> (WCAG 1.3.1); a placeholder is not a substitute for a label
  • Button without an explicit type="submit"
  • No form validation or error handling

4. Tool Comparison

When: Validating false positives or understanding tool differences
Prompt:
Compare the results of axe-core and Pa11y on https://example.com
What differences do they find?
What happens:
  • Runs analyze-mixed with individualResults field
  • Shows unique issues found by each tool
  • Shows overlapping issues (higher confidence)
Useful for:
  • Validating false positives (if only one tool reports it)
  • Understanding tool coverage differences
  • Deciding which tool to use in CI/CD
  • Finding edge cases
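When analyze-mixed returns individualResults, the comparison reduces to set operations over stable rule identifiers. A minimal sketch of that logic, not the server's actual implementation; the `ruleId` field on each issue is an assumption:

```python
def compare_tools(axe_issues, pa11y_issues):
    """Split two tools' findings into overlapping and tool-unique sets."""
    axe_ids = {issue["ruleId"] for issue in axe_issues}
    pa11y_ids = {issue["ruleId"] for issue in pa11y_issues}
    return {
        "overlap": sorted(axe_ids & pa11y_ids),    # both tools agree: higher confidence
        "only_axe": sorted(axe_ids - pa11y_ids),   # candidates for false positives
        "only_pa11y": sorted(pa11y_ids - axe_ids),
    }
```

Issues that appear in only one tool's set are the first place to look when validating false positives.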

5. Color Contrast Analysis

When: Design QA, validating color palettes, or focusing on visual accessibility
Prompt:
Check if the text colors on https://my-site.com comply with WCAG AA
What happens:
  • Runs analyze-contrast with WCAG 2.1 algorithm
  • Calculates exact contrast ratios
  • Suggests compliant color alternatives
Expected result:
  • Current ratio vs required ratio
  • Color suggestions that pass WCAG
  • Statistics by text type (normal/large)
  • CSS fix suggestions
Example output:
Failing elements:
1. Body text (#888 on #fff): 3.5:1 ❌ (needs 4.5:1)
   Suggested fix: #666 (ratio 5.7:1 ✅)

2. Button text (#ccc on #fff): 1.6:1 ❌ (needs 4.5:1)
   Suggested fix: #555 (ratio 7.5:1 ✅)
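These ratios follow the WCAG 2.1 relative-luminance formula, which you can reproduce yourself to double-check any reported value. A minimal sketch, assuming full 6-digit hex colors:

```python
def _channel(c8: int) -> float:
    """Linearize an 8-bit sRGB channel per the WCAG 2.1 definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L1 + 0.05) / (L2 + 0.05), with the lighter luminance on top."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# #888 on white fails AA for body text; the suggested #666 passes
print(round(contrast_ratio("#888888", "#ffffff"), 2))  # 3.54
print(round(contrast_ratio("#666666", "#ffffff"), 2))  # 5.74
```

AA requires 4.5:1 for normal text and 3:1 for large text; AAA raises those to 7:1 and 4.5:1.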

6. APCA Analysis (WCAG 3.0 Draft)

When: Preparing for WCAG 3.0, or when you want perceptually accurate contrast
Prompt:
Analyze the contrast of https://my-site.com using the APCA algorithm
What happens:
  • Runs analyze-contrast with contrastAlgorithm: "APCA"
  • Uses perceptual contrast (Lightness contrast - Lc)
  • Considers text polarity (dark-on-light vs light-on-dark)
Expected result:
  • Lightness contrast (Lc) instead of ratios
  • Thresholds: 75Lc (body text), 60Lc (large text), 45Lc (non-text)
  • More accurate for modern displays
  • Future-proof for WCAG 3.0
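Unlike WCAG 2.1 ratios, Lc values are signed: negative values indicate light text on a dark background. A simplified pass/fail check against the draft thresholds listed above (real APCA guidance is more granular, varying by font size and weight):

```python
# Draft APCA thresholds as listed above; actual guidance is font-dependent
APCA_THRESHOLDS = {"body": 75, "large": 60, "non-text": 45}

def apca_passes(lc: float, kind: str = "body") -> bool:
    """Lc is signed: negative means light-on-dark, so compare magnitudes."""
    return abs(lc) >= APCA_THRESHOLDS[kind]
```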
APCA is experimental (WCAG 3.0 draft). Use WCAG 2.1 for legal compliance.

Workflow 1: Pre-Deploy Check

Goal: Decide if a release is ready to deploy based on accessibility
Step 1: Run full analysis

Use the pre-deploy-check prompt for https://staging.my-app.com
Or manually:
Analyze https://staging.my-app.com with all tools
Step 2: Filter blocking issues

Show me only critical issues that would block deployment
Focus on:
  • Critical severity issues
  • WCAG Level A violations
  • Issues affecting core user flows
Step 3: Make decision

If there are critical issues:
  • Block deploy
  • Create tasks to fix critical issues
  • Re-run check after fixes
If only medium/low issues:
  • Log issues in backlog
  • Proceed with deploy
  • Plan fixes for next sprint
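The decision rule above amounts to a simple gate. A sketch, assuming each issue carries `severity` and `wcagLevel` fields (hypothetical names; adapt to the actual result shape):

```python
def deploy_decision(issues):
    """GO unless any issue is critical severity or a WCAG Level A violation."""
    blocking = [
        i for i in issues
        if i["severity"] == "critical" or i["wcagLevel"] == "A"
    ]
    return {"go": not blocking, "blocking": blocking}
```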
Step 4: Document score

Track Lighthouse accessibility score over time:
What is the current Lighthouse accessibility score?
Using MCP Prompt (recommended):
Use the pre-deploy-check prompt for https://staging.my-app.com
This provides a clear GO/NO-GO decision with blocking issues highlighted.

Workflow 2: Quick Wins Sprint Planning

Goal: Identify high-impact, low-effort fixes for the next sprint
Step 1: Analyze production site

Analyze https://my-site.com and show me quick wins
Step 2: Filter by priority and effort

Show me high or critical priority issues with low remediation effort
Or use the MCP prompt:
Use the quick-wins-report prompt for https://my-site.com
Step 3: Create sprint tasks

For each issue, create tasks with:
  • Issue description
  • WCAG criterion
  • Time estimate (from remediationEffort)
  • Suggested fix from suggestedActions
  • Affected user groups
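Selecting and ordering the backlog can be sketched as below; the `priority` and `remediationEffort` fields mirror the names used above, but the exact result shape is an assumption:

```python
PRIORITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def quick_wins(issues):
    """High-impact, low-effort issues, most urgent first."""
    wins = [
        i for i in issues
        if i["priority"] in ("critical", "high") and i["remediationEffort"] == "low"
    ]
    return sorted(wins, key=lambda i: PRIORITY_RANK[i["priority"]])
```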
Step 4: Track improvements

After sprint, re-run analysis:
Analyze https://my-site.com and compare with previous results
Expected outcome: Sprint backlog with 5-10 issues totaling 4-8 hours of work that significantly improve accessibility.

Workflow 3: Periodic Audit

Goal: Track accessibility over time and catch regressions
Step 1: Schedule regular analysis

Run analysis every sprint or monthly:
Use the full-accessibility-audit prompt with:
- url: https://production.com
- wcagLevel: AA
Step 2: Compare with baseline

Compare today's analysis with last month's results.
Have we introduced new issues?
Track:
  • Total issue count trend
  • Lighthouse score trend
  • New issues (regressions)
  • Fixed issues
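Comparing a run against a stored baseline is a set difference over stable issue identifiers. A sketch, where the `id` field is an assumed stable key (e.g. rule plus selector):

```python
def diff_against_baseline(baseline, current):
    """Issues new since the baseline are regressions; missing ones were fixed."""
    base = {i["id"] for i in baseline}
    cur = {i["id"] for i in current}
    return {"regressions": sorted(cur - base), "fixed": sorted(base - cur)}
```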
Step 3: Identify regressions

Show me issues that appeared since last analysis
Investigate:
  • What changed? (recent deployments)
  • Which component introduced the issue?
  • How to prevent similar regressions?
Step 4: Prioritize fixes

Group issues for next sprint:
  • Regressions (fix first)
  • Critical/high priority
  • Quick wins
Automation: Set up CI/CD to run accessibility checks on every PR and track scores over time.

Workflow 4: Team Training

Goal: Educate team on accessibility through real examples
Step 1: Analyze a page with varied issues

Analyze https://demo.com and give me a variety of accessibility issues
Step 2: Review humanContext

For each issue type, review:
  • The humanContext field (WCAG explanation)
  • Real-world impact examples
  • Affected user groups
  • Why it matters
Step 3: Deep dive on specific criteria

Use the explain-wcag-criterion prompt with criterion: 1.1.1
Or:
Explain the WCAG 1.1.1 issue (Non-text content) found on our site:
- Which users it affects
- Real example of how it impacts them
- How to fix it step by step
Step 4: Apply suggested solutions

Walk through the suggestedActions with code examples:
Show me before/after code examples for the alt text issue
Step 5: Test with assistive technology

After learning theory, practice with:
  • Screen reader (NVDA, VoiceOver)
  • Keyboard-only navigation
  • Browser zoom to 200%
  • Color blindness simulator
Training topics to cover:
  • Semantic HTML and landmarks
  • Keyboard navigation and focus management
  • ARIA attributes (when and when not to use)
  • Color contrast and visual design
  • Form labels and error handling
  • Alternative text for images

Workflow 5: Focused Contrast Review

Goal: Validate and fix color contrast for specific sections or components
Step 1: Analyze specific section

Use the contrast-check prompt with:
- url: https://my-site.com
- selector: .hero-section
- algorithm: WCAG21
- wcagLevel: AAA
Or with CSS selector:
Check contrast only in the header of https://my-site.com
Step 2: Review failing elements

Each result includes:
  • Current colors (foreground/background)
  • Current ratio
  • Required ratio for WCAG level
  • Suggested color fixes
Step 3: Apply fixes

Use suggested colors:
/* Before */
.button {
  color: #999;  /* 2.8:1 - fails */
  background: #fff;
}

/* After (from suggestion) */
.button {
  color: #666;  /* 5.7:1 - passes AA */
  background: #fff;
}
Step 4: Re-verify

Re-check the contrast after applying fixes
Step 5: Update design system

Document approved color combinations:
  • Text on backgrounds
  • Interactive element colors
  • Status indicators
  • Charts and data visualizations
For design systems, consider checking contrast for all color combinations upfront and documenting approved pairings.

Workflow 6: Lighthouse Score Improvement

Goal: Improve Lighthouse accessibility score to target threshold
Step 1: Get baseline score

Use the lighthouse-audit prompt for https://my-site.com
Current score: 73/100
Target: 90/100
Step 2: Identify high-impact issues

Show me which Lighthouse audits are failing and their impact on the score
Focus on audits with high weightings.
Step 3: Create phased improvement plan

Use the lighthouse-score-improvement prompt with:
- url: https://my-site.com
- targetScore: 90
This generates a phased plan prioritized by score impact.
Step 4: Implement phase 1 fixes

Start with highest-impact issues:
  • Color contrast (high weight)
  • ARIA usage (medium weight)
  • Form labels (high weight)
Step 5: Measure improvement

Run Lighthouse analysis again. What's the new score?
Track: 73 → 85 (+12 points)
Step 6: Iterate until target reached

Continue with phase 2, phase 3, etc. until target score is achieved.

  • Effective Prompts: Tips for writing better accessibility prompts
  • Interpreting Results: How to prioritize and act on findings
  • Prompts Reference: Available MCP prompt templates
  • Tools Reference: Detailed tool documentation
