Introduction
Manual accessibility testing is essential but doesn't scale. Every PR, every page change, every new component can introduce a regression. Automated accessibility testing in CI catches the low-hanging fruit on every change: missing alt text, broken ARIA, insufficient contrast, and invalid HTML.
Automation won't catch everything (keyboard traps, confusing reading order, poor alt text quality), but it creates a safety net that prevents the most common issues from reaching production. Combined with periodic manual audits, it keeps your accessibility baseline high.
This guide covers setting up axe-core with Playwright, Lighthouse CI, ESLint, and GitHub Actions for comprehensive automated accessibility testing.
Key Concepts
The Automation Pyramid
// 1. Lint time (fastest, most limited)
// ESLint jsx-a11y — catches static code issues
// Runs: every keystroke in IDE, every commit in CI
// 2. Unit test time
// jest-axe — checks rendered component HTML
// Runs: npm test in CI
// 3. Integration test time (most thorough automated)
// axe-core + Playwright — checks full rendered pages
// Tests interactions, dynamic state, real CSS
// Runs: CI pipeline after build
// 4. Audit time (most comprehensive)
// Lighthouse CI — full page audits with scores
// Runs: CI pipeline, reports to dashboard
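The four levels of the pyramid map naturally onto npm scripts, so each one can be invoked independently in CI. A sketch (script names are illustrative, not prescribed by any of the tools):

```json
{
  "scripts": {
    "lint:a11y": "eslint --ext .ts,.tsx src/",
    "test:unit": "jest",
    "test:a11y": "playwright test tests/a11y",
    "audit:a11y": "lhci autorun"
  }
}
```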
axe-core + Playwright Setup
// Install
// npm install -D @axe-core/playwright @playwright/test
// tests/a11y.spec.ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

const pages = ['/', '/about', '/blog', '/contact', '/pricing'];

for (const path of pages) {
  test(`${path} has no a11y violations`, async ({ page }) => {
    await page.goto(path);
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag22aa'])
      .analyze();
    expect(results.violations).toEqual([]);
  });
}
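A common rollout strategy is to fail CI only on serious and critical violations while the backlog of minor issues gets burned down. That's a small filter over axe's violations array (a sketch: the severity cutoff is a policy choice, not part of axe):

```typescript
// Axe assigns each violation an impact: 'minor' | 'moderate' | 'serious' | 'critical'
type Violation = { id: string; impact?: string | null };

// Keep only the violations that should fail the build
function blockingViolations(violations: Violation[]): Violation[] {
  const blocking = new Set(['serious', 'critical']);
  return violations.filter((v) => v.impact != null && blocking.has(v.impact));
}

// Usage inside the Playwright test:
// expect(blockingViolations(results.violations)).toEqual([]);
```

If you use this, track the non-blocking violations somewhere visible so the relaxed gate stays temporary.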
Practical Examples
1. GitHub Actions Workflow
# .github/workflows/accessibility.yml
name: Accessibility
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      # jsx-a11y rules come from your ESLint config (eslint-plugin-jsx-a11y)
      - run: npx eslint --ext .ts,.tsx src/

  axe-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npm run build
      - run: npm run start &
      - run: npx wait-on http://localhost:3000
      - run: npx playwright test tests/a11y

  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci && npm run build
      - run: npm run start &
      - run: npx wait-on http://localhost:3000
      - run: npx @lhci/cli autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_TOKEN }}
2. Lighthouse CI Configuration
// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: [
        'http://localhost:3000/',
        'http://localhost:3000/blog',
        'http://localhost:3000/contact',
      ],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:accessibility': ['error', { minScore: 0.95 }],
        'color-contrast': 'error',
        'image-alt': 'error',
        'heading-order': 'warn',
        'link-name': 'error',
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
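If you'd rather not manage the `npm run start &` / `wait-on` dance in the workflow yourself, Lighthouse CI can start and stop the server for you via `startServerCommand`. A sketch of the alternative collect block (the ready pattern depends on what your server prints on startup):

```javascript
// Alternative lighthouserc.js: let LHCI manage the server lifecycle
const config = {
  ci: {
    collect: {
      startServerCommand: 'npm run start',
      // LHCI waits for this pattern on the server's stdout before collecting
      startServerReadyPattern: 'ready',
      url: ['http://localhost:3000/'],
      numberOfRuns: 3,
    },
  },
};

// In a real lighthouserc.js: module.exports = config;
```

With this in place, the lighthouse job in the workflow shrinks to `npm ci && npm run build` followed by `npx @lhci/cli autorun`.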
3. Storybook + axe in CI
// Run Storybook a11y checks in CI with test-runner
// npm install -D @storybook/test-runner axe-playwright
// .storybook/test-runner.ts
import type { TestRunnerConfig } from '@storybook/test-runner';
import { injectAxe, checkA11y } from 'axe-playwright';

const config: TestRunnerConfig = {
  // preVisit/postVisit replaced preRender/postRender in newer test-runner versions
  async preVisit(page) {
    await injectAxe(page);
  },
  async postVisit(page) {
    await checkA11y(page, '#storybook-root', {
      detailedReport: true,
      detailedReportOptions: { html: true },
    });
  },
};

export default config;
// package.json
// "test:storybook": "test-storybook --ci"
// CI: Build Storybook, serve it, run tests
// npx storybook build
// npx http-server storybook-static &
// npx test-storybook --url http://localhost:6006
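Those three commands slot into GitHub Actions as a job of their own, alongside the jobs from example 1. A sketch (the port and the static server are assumptions; any static file server works):

```yaml
  storybook-a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx storybook build
      - run: npx http-server storybook-static -p 6006 &
      - run: npx wait-on http://localhost:6006
      - run: npx test-storybook --url http://localhost:6006
```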
4. Custom Reporting
// Generate an accessibility report to post as a PR comment
import type { Page } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

async function generateReport(page: Page, url: string) {
  await page.goto(url);
  const results = await new AxeBuilder({ page }).analyze();
  return {
    url,
    violations: results.violations.length,
    passes: results.passes.length,
    details: results.violations.map((v) => ({
      id: v.id,
      impact: v.impact,
      description: v.description,
      nodes: v.nodes.length,
    })),
  };
}
// Format as markdown for PR comment
function formatReport(reports) {
  let md = '## ♿ Accessibility Report\n\n';
  for (const r of reports) {
    const status = r.violations === 0 ? '✅' : '❌';
    md += `${status} **${r.url}** — ${r.violations} violations, ${r.passes} passes\n`;
    for (const v of r.details) {
      md += `  - [${v.impact}] ${v.description} (${v.nodes} elements)\n`;
    }
  }
  return md;
}
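To get that markdown onto the PR, one option is `actions/github-script`, which exposes an authenticated Octokit client as `github`. A sketch of the workflow step, assuming a previous step wrote the report to `a11y-report.md` (a hypothetical filename):

```yaml
      - name: Comment accessibility report on PR
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const body = fs.readFileSync('a11y-report.md', 'utf8');
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body,
            });
```

For noisy repos, consider finding and updating an existing bot comment instead of posting a new one on every push.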
Best Practices
- ✅ Run ESLint jsx-a11y on every PR — fastest feedback loop
- ✅ Run axe-core via Playwright on key pages after build
- ✅ Set Lighthouse a11y score threshold at 95+ and fail CI below it
- ✅ Test pages after interactions (form submission, modal open, tab switch)
- ✅ Report violations as PR comments for visibility
- ❌ Don't skip pages — automate tests for all public routes
- ❌ Don't treat a11y CI as a replacement for manual testing — it's a complement
Common Pitfalls
- Only testing the homepage — accessibility issues are often on secondary pages
- Not testing after JavaScript interactions — SPAs may have post-render violations
- Setting the threshold too low — a 70% minimum lets most issues through; aim for 95%+
- Not investigating 'incomplete' results from axe — they often indicate real problems
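The last pitfall is easy to act on: axe returns an `incomplete` array alongside `violations` for checks it couldn't decide automatically, such as contrast over an image background. A small helper can surface them in CI logs for manual review without failing the build (a sketch; the type mirrors the fields of axe's result objects used here):

```typescript
// Minimal shape of an axe result entry for this helper
type AxeResult = { id: string; description: string; nodes: unknown[] };

// Summarize incomplete checks so they show up in CI logs for a human to review
function summarizeIncomplete(incomplete: AxeResult[]): string[] {
  return incomplete.map(
    (r) => `needs review [${r.id}]: ${r.description} (${r.nodes.length} elements)`
  );
}

// Usage after analyze():
// summarizeIncomplete(results.incomplete).forEach((line) => console.warn(line));
```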
Related Guides
- Accessibility Testing Tools — The tools behind the automation
- Next.js Deployment Strategies — Integrating a11y into your CI/CD pipeline
- WCAG Practical Guide — What the automation is checking against
- Screen Reader Testing — Manual testing to complement automation