What Happens in a Website Accessibility Audit: Process, Findings and Next Steps

Most organisations know their website should be accessible. Fewer know what an accessibility audit involves, what it tests for or what the findings look like when the work is done. The process can feel opaque from the outside, which makes it harder to budget for, plan around or act on the results. If you’re considering website accessibility auditing for organisations of any size, understanding the process from start to finish removes much of that uncertainty. An audit is not a pass-or-fail test. It is a structured assessment that identifies specific barriers, explains why they matter and gives you a clear path toward fixing them.

The accessibility audit process typically moves through several distinct phases. Each one serves a different purpose, from defining what gets tested through to agreeing how issues should be prioritised. Some phases are heavily automated. Others rely entirely on human judgement. The strongest audits combine the two, because neither approach catches everything on its own.

Defining the Scope Before Testing Begins

Every audit starts with scope. A website with 20 pages and a website with 2,000 pages require very different approaches. Scoping determines which pages, templates and user journeys will be tested. Getting this right at the outset prevents wasted effort and makes sure the most important parts of the site receive proper attention.

For most organisations, the scope focuses on a representative sample of page templates rather than every individual page. A WordPress site built on five or six templates will share the same accessibility characteristics across every page that uses each template. Testing one instance of a service page, one blog post, one landing page and the homepage gives auditors a reliable picture of template-level issues that affect the entire site. Key user journeys also get included in the scope. Contact forms, search functionality, navigation menus, cookie consent interactions and any checkout or booking flow need testing because these are the points where users are most likely to encounter barriers.

Scoping also means agreeing on the conformance target. Most UK audits test against WCAG 2.2 Level AA, which is the standard referenced by current UK regulations. Public sector bodies are legally required to meet this level under the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018. Private sector organisations operating under the Equality Act 2010 face a broader obligation to make reasonable adjustments. WCAG 2.2 AA is widely accepted as the benchmark for what reasonable looks like in a digital context.

Automated Testing: The First Pass

Once the scope is agreed, the audit moves into testing. Automated scanning is typically the first stage. Tools like axe-core, WAVE and Lighthouse run rule-based checks against the HTML of each page in the sample. These tools are fast. They can scan a full-page template in seconds and identify structural violations that would take a human auditor considerably longer to find manually.

Automated scanners are good at catching a specific category of issue:

  • Missing alternative text on images
  • Form inputs without associated labels
  • Heading levels that skip from H2 to H4
  • Colour contrast ratios that fall below the minimum thresholds set by WCAG
  • Missing language attributes on the HTML element
  • Links with no discernible text

These are all problems that can be expressed as deterministic rules, which is why automated tools check them reliably.
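To make the idea of a deterministic rule concrete, here is a minimal sketch of two such checks (missing alt attributes and skipped heading levels) written with Python's standard-library HTML parser. The class name and messages are illustrative only; real scanners such as axe-core implement far larger rule sets with much more nuance.

```python
# Illustrative only: two deterministic rules of the kind automated
# accessibility scanners apply. Not a substitute for a real tool.
from html.parser import HTMLParser

class RuleChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_heading = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Rule 1: every <img> needs an alt attribute
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        # Rule 2: heading levels must not skip (e.g. h2 straight to h4)
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            if self.last_heading and level > self.last_heading + 1:
                self.issues.append(
                    f"heading skips from h{self.last_heading} to h{level}"
                )
            self.last_heading = level

checker = RuleChecker()
checker.feed("<h2>News</h2><h4>Latest</h4><img src='hero.png'>")
print(checker.issues)
```

Both rules are pure pattern matching on the markup, which is exactly why tools can run them in seconds and why they cannot judge whether alt text that does exist is actually meaningful.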

| Audit phase | What gets tested | Tools or methods used |
| --- | --- | --- |
| Scope definition | Page templates, user journeys, conformance level | Stakeholder discussions, sitemap review, analytics data |
| Automated scanning | HTML structure, contrast ratios, missing labels and alt text, heading hierarchy | axe-core, WAVE, Lighthouse, site-wide crawlers |
| Manual code review | ARIA usage, semantic markup, focus order, reading order, link context | Browser dev tools, manual inspection of source code |
| Assistive technology testing | Screen reader compatibility, keyboard navigation, voice control | JAWS, NVDA, VoiceOver, keyboard-only testing |
| Reporting and prioritisation | Issue severity, WCAG mapping, remediation guidance | Structured report with screenshots and code examples |

The limitation of automated tools is well documented. The WebAIM Million report, which analyses the home pages of the top one million websites each year, consistently shows that automated testing catches only a portion of WCAG failures. Many accessibility problems require judgement about context, meaning and user experience: whether a piece of alternative text accurately describes an image, whether link text makes sense out of context, or whether a custom interactive component behaves the way a keyboard user would expect. These questions sit outside what rule-based scanning can answer, which is why automated testing is a starting point rather than the full picture.

Manual Testing: Where Human Judgement Takes Over

Manual testing is the stage where a trained auditor works through each page in the sample using a browser, keyboard and assistive technologies. This is the most time-consuming part of the audit, but it is also where the most meaningful findings tend to surface. Automated tools tell you what is technically wrong with the code. Manual testing tells you whether the site works for someone who relies on accessibility features.

Keyboard-only testing comes first. The auditor puts the mouse aside and navigates the entire page using only the Tab key, Shift+Tab, Enter and arrow keys. They’re checking for several things at once:

  • Whether every interactive element receives a visible focus indicator
  • Whether the focus order follows a logical sequence that matches the visual layout
  • Whether any content traps the keyboard so the user cannot move past it
  • Whether skip links allow users to bypass repetitive navigation blocks
  • Whether custom components like menus, accordions and tabs respond to expected keyboard patterns

Keyboard traps are one of the most serious accessibility failures because they prevent a user from reaching any content beyond the trapped element. These often appear in modal dialogues, embedded media players and custom dropdown menus that have been built without proper focus management.

Screen reader testing follows. The auditor activates a screen reader, typically JAWS or NVDA on Windows alongside VoiceOver on macOS. They listen to how each page is announced. This reveals problems that are invisible in a visual browser. A sighted user might see a clear layout with a sidebar, a main content area and a footer. A screen reader user hears the page as a single linear stream of content. If the reading order does not match the visual order, the experience becomes confusing or unusable. Screen reader testing also exposes issues with ARIA attributes, where developers have added accessibility markup that conflicts with the native semantics of the HTML elements or announces incorrect roles and states to the user.

The auditor also checks forms, error handling and interactive components in detail. Can a screen reader user complete a contact form and understand the validation messages that appear when a field is left empty? Does a date picker work with keyboard controls alone? Are error messages associated programmatically with the fields they relate to, or do they merely appear visually near the field while remaining disconnected in the code? These are the kinds of barriers that frustrate users most, because they stop people from completing the tasks they came to the site to perform.
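The programmatic-association check above can be illustrated with a small sketch: verify that an input's aria-describedby attribute actually points at an element that exists, so a screen reader will announce the error alongside the field. The class name and sample markup are hypothetical; real audits do this with browser tooling and a live screen reader rather than static parsing.

```python
# Hedged illustration: does each input's aria-describedby reference
# an element id that actually exists in the page?
from html.parser import HTMLParser

class DescribedByCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.ids = set()
        self.refs = []  # (field name, referenced id)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        if tag == "input" and "aria-describedby" in attrs:
            self.refs.append((attrs.get("name", "?"), attrs["aria-describedby"]))

checker = DescribedByCheck()
checker.feed(
    '<input name="email" aria-describedby="email-error">'
    '<span id="email-error">Enter a valid email address.</span>'
)
# Any reference to an id that does not exist is a broken association
broken = [(name, ref) for name, ref in checker.refs if ref not in checker.ids]
print(broken)
```

Here the list comes back empty because the error message is properly linked; if the span's id were missing or misspelled, the field would be flagged even though the message still looks correct visually.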

What the Audit Report Looks Like

The output of an accessibility audit is a detailed report that documents every issue found, maps it to the relevant WCAG success criterion and provides guidance on how to fix it. A well-structured report is not just a list of problems. It gives developers and content editors the information they need to address each issue without having to interpret the WCAG specification themselves.

Each finding in the report typically includes the WCAG criterion that has been violated, the severity of the issue, a description of the problem in plain language, one or more screenshots showing the issue in context, the page or template where the issue was found and a recommended fix. Severity ratings vary between auditors, but most use a scale that distinguishes between issues that completely block access to content, issues that cause significant difficulty and issues that create minor inconvenience. This distinction matters because it drives the order in which fixes get made.

  1. Critical issues that prevent users from accessing content or completing tasks are flagged for immediate attention. A contact form that cannot be submitted using a keyboard, for example, blocks an entire user journey.
  2. Serious issues that cause significant difficulty are scheduled for the next development sprint. Missing heading structure across a template would fall into this category because it affects navigation for screen reader users on every page that uses the template.
  3. Moderate issues that create friction but do not block access are addressed in subsequent rounds. Low colour contrast on secondary text might fall here, depending on the context.
  4. Minor issues that represent best practice improvements are documented for ongoing consideration. These might include suggestions for more descriptive link text or recommendations around content readability.
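The contrast thresholds behind findings like the one above are precisely defined by WCAG 2.x, which specifies a relative-luminance formula for sRGB colours and a minimum ratio of 4.5:1 for normal-size text at Level AA. A short sketch of that calculation, using the formula from the WCAG definition:

```python
# Contrast ratio per the WCAG 2.x definition of relative luminance.
def _channel(c):
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_colour):
    r, g, b = (int(hex_colour[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Mid grey (#767676) on white sits just above the 4.5:1 AA threshold
ratio = contrast_ratio("767676", "ffffff")
print(round(ratio, 2), "passes AA" if ratio >= 4.5 else "fails AA")
```

This is why contrast failures are one of the things automated scanners catch reliably: the check is pure arithmetic on the computed colours, with no judgement involved.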

The report will also include a summary of overall conformance. This is not a score out of 100. It is a statement of how many WCAG success criteria at the target level (typically AA) the site currently meets, partially meets or fails. Organisations that need to publish an accessibility statement can use this information to describe their current position accurately and outline their plans for remediation.

Prioritising What to Fix First

A thorough audit will produce a long list of findings. Trying to fix everything at once is rarely practical. Prioritisation is where the audit process moves from assessment into action planning. The audit report provides severity ratings, but turning those ratings into a realistic remediation schedule requires decisions about resources, timelines and business impact.

Template-level fixes should sit near the top of any prioritisation list. A single accessibility issue in a page template can affect hundreds of pages. Fixing the template once removes the barrier across every page that uses it. This is particularly relevant for sites built on WordPress, where themes and page builders create repeating patterns. WordPress development that follows accessibility standards from the start avoids many of these template-level problems, but older sites or sites built with less accessible themes will often have issues embedded in the template layer.

User journey fixes come next. Any barrier that sits within a path a user takes to accomplish a goal, such as finding information, making contact, booking a service or completing a purchase, deserves priority over barriers in less-trafficked areas. Analytics data can help here. If 80% of your site traffic lands on five page types, those five types should receive attention before pages that see minimal visits.

Content fixes, such as missing alt text on images or poorly written link text, can often be addressed by content editors without developer involvement. Training content teams to write accessible content prevents these issues from recurring after the initial fixes are made. A good audit report will flag content-level issues separately from code-level issues so they can be assigned to the right team.

The Remediation Phase

Fixing accessibility issues requires collaboration between designers, developers and content editors. The web design process often needs revisiting when an audit reveals problems with colour choices, interactive patterns or layout assumptions that create barriers. Developers handle code-level changes to HTML structure, ARIA attributes, focus management and form behaviour. Content editors update alternative text, link text and document structure.

Some fixes are straightforward. Adding a missing form label, correcting a heading hierarchy or increasing colour contrast can often be done in minutes. Others are more involved. Rebuilding a custom dropdown menu to support keyboard navigation properly, retrofitting focus management into a modal dialogue or restructuring a complex data table so it conveys meaning through its markup as well as its visual layout all take careful development time.

The GDS accessibility blog has published practical guidance on remediation approaches that UK organisations can follow. Their advice consistently points to fixing the underlying HTML rather than adding layers of JavaScript or ARIA on top of inaccessible markup. Native HTML elements like <button>, <select> and <input> come with built-in keyboard support and screen reader compatibility. Replacing custom-built components with native equivalents is often the simplest and most reliable fix.

Priority Pixels approaches remediation as part of the broader web development process rather than treating it as a separate workstream. Fixes get integrated into existing sprint cycles, tested against the original audit findings and verified before deployment.

After the Audit: Maintaining Accessibility Over Time

An audit captures a snapshot of a website’s accessibility at a single point in time. Websites change constantly. New pages get published, designs get updated, plugins get installed, third-party scripts get added. Each of these changes can introduce new accessibility barriers if the teams making the changes are not aware of accessibility requirements. Treating an audit as a one-off exercise rather than the start of an ongoing process is the most common mistake organisations make.

Regular automated scanning should run on a scheduled basis, ideally weekly. Tools like the Equalize Digital Accessibility Checker integrate directly into the WordPress content editing experience and flag issues as content is created, which prevents many problems from reaching the live site at all. Combining editor-level checking with site-wide automated scans creates a safety net that catches regressions quickly.
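A scheduled scan of this kind reduces to running a fixed rule set over a set of pages and reporting regressions. The sketch below uses an invented missing-alt rule over in-memory HTML purely to show the shape of the loop; in practice the pages would be fetched from the live site and the rules would come from a tool like axe-core.

```python
# Minimal sketch of a scheduled regression check. The rule and the
# sample pages are illustrative; real scans use a full scanner.
import re

def missing_alt(html):
    # Crude rule: any <img> tag that carries no alt attribute
    return [m for m in re.findall(r"<img\b[^>]*>", html) if "alt=" not in m]

def scan(pages):
    # Map each URL to its issues, keeping only pages that fail
    return {url: issues for url, html in pages.items()
            if (issues := missing_alt(html))}

sample = {
    "/": '<img src="logo.png" alt="Acme logo">',
    "/blog/new-post": '<img src="chart.png">',  # regression from a new post
}
print(scan(sample))
```

Only the new blog post is flagged, which is the point of running the scan on a schedule: regressions introduced by routine content changes surface before users encounter them.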

Periodic manual audits should happen at least annually, more frequently if the site undergoes significant redesign or redevelopment work. Each manual audit should test against the current version of WCAG, which at the time of writing is 2.2. The W3C continues to develop WCAG 3.0, which will eventually replace the 2.x series, though no firm timeline for its completion has been published.

Training is an often-overlooked part of post-audit improvement. Content editors who understand how to write proper alternative text, create meaningful heading structures and use semantic formatting will produce accessible content as a matter of course. Developers who understand keyboard interaction patterns and ARIA usage will write more accessible code from the start. The cost of training is far lower than the cost of repeated audits that keep finding the same kinds of issues because the teams creating content and code have not been given the knowledge to avoid them.

The most effective approach to accessibility is treating it as a quality standard that applies to every stage of content creation and development, not as a remediation task that follows a failed audit.

Accessibility is a sustained practice, not a project with a fixed end date. Organisations that build accessibility awareness into their content workflows, design processes and development standards find that each successive audit reveals fewer issues and those issues become less severe over time. That trajectory is the real measure of progress.

FAQs

What does the accessibility audit process involve from start to finish?

The accessibility audit process begins with defining the scope of pages and user journeys to test, then moves through automated scanning, manual testing with assistive technologies, and detailed reporting. The process combines both automated tools and human judgement to identify barriers, explain their impact, and provide clear guidance on how to fix them.

How long does a typical website accessibility audit take?

The timeline depends on your website’s size and complexity, with the scope definition being key to determining duration. A site with 5-6 page templates will take much less time than one requiring extensive custom component testing, as most audits focus on representative samples rather than every individual page.

What's the difference between automated and manual accessibility testing?

Automated tools quickly scan HTML code to catch structural violations like missing alt text, poor colour contrast, and incorrect heading hierarchies. Manual testing involves human auditors using keyboards and screen readers to assess whether the site actually works for people who rely on accessibility features, catching issues that require judgement about context and user experience.

What should I expect to receive in an accessibility audit report?

You’ll receive a detailed report documenting every issue found, mapped to relevant WCAG criteria with severity ratings and practical fix recommendations. Each finding includes screenshots, affected pages, plain-language descriptions, and specific guidance so your developers and content editors know exactly what needs changing without having to interpret technical specifications themselves.

How do I prioritise which accessibility issues to fix first?

Start with template-level fixes that affect multiple pages, then focus on barriers within key user journeys like contact forms or checkout processes. Use your analytics data to prioritise high-traffic areas, and separate content fixes (which editors can handle) from code-level changes that need developer attention.

Paul Clapp
Co-Founder at Priority Pixels

Paul leads on development and technical SEO at Priority Pixels, bringing over 20 years of experience in web and IT. He specialises in building fast, scalable WordPress websites and shaping SEO strategies that deliver long-term results. He’s also a driving force behind the agency’s push into accessibility and AI-driven optimisation.
