The upcoming revision of the W3C accessibility guidelines, which introduces a new conformance scoring model.
WCAG 3.0, codenamed "Silver" (from the AG working group's Silver Task Force), is the next major evolution of web accessibility guidelines. Unlike WCAG 2.x's binary pass/fail criteria, WCAG 3.0 proposes a scoring model that rates conformance on a scale, grouped by functional outcomes rather than technical principles. It aims to address limitations of WCAG 2.x: better coverage of cognitive disabilities, mobile apps, and emerging technologies (XR, voice interfaces), plus a more flexible conformance model. The structure uses Guidelines (broad goals), Outcomes (testable results), and Methods (technology-specific tests). As of 2024-2025, it remains a Working Draft, not suitable for legal or policy reference, but its concepts inform forward-thinking accessibility strategy.
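The Guideline → Outcome → Method hierarchy described above can be sketched as a small data model. This is purely illustrative: the class names and fields are assumptions chosen to mirror the draft's terminology, not structures defined by the specification.

```python
from dataclasses import dataclass, field

# Hypothetical model of the draft WCAG 3.0 structure: Guidelines state
# broad goals, Outcomes state testable results, and Methods give
# technology-specific ways to test an Outcome. All names are illustrative.

@dataclass
class Method:
    technology: str   # e.g. "HTML/CSS", "iOS", "voice interface"
    description: str  # what the technology-specific test checks

@dataclass
class Outcome:
    statement: str    # a testable result, phrased from the user's perspective
    methods: list[Method] = field(default_factory=list)

@dataclass
class Guideline:
    goal: str         # a broad, technology-neutral accessibility goal
    outcomes: list[Outcome] = field(default_factory=list)

contrast = Guideline(
    goal="Text has sufficient visual contrast",
    outcomes=[
        Outcome(
            statement="Users can distinguish text from its background",
            methods=[
                Method("HTML/CSS", "Check the computed contrast ratio of rendered text"),
            ],
        )
    ],
)
```

Note how Methods hang off Outcomes rather than Guidelines: the draft's intent is that the same user-facing Outcome can be verified differently per technology, which is what lets the framework cover web, mobile, and emerging platforms under one goal.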
WCAG 3.0, codenamed "Silver," represents the most fundamental rethinking of web accessibility guidelines since WCAG 2.0 launched in 2008, moving from a binary pass/fail conformance model to a graduated scoring system that better reflects real-world user experience. The new framework addresses long-standing criticisms of WCAG 2.x — including its technical complexity, narrow focus on web content, and inability to capture the full spectrum of accessibility quality — by introducing outcomes-based testing, broader technology coverage, and a scoring model that rewards incremental improvement. While WCAG 3.0 is still a Working Draft and years from becoming a formal recommendation, understanding its direction now allows teams to align their accessibility strategies with where the standard is heading rather than being caught off guard by a paradigm shift.
Under WCAG 3.0's proposed model, a website that provides good color contrast on 90 percent of its pages but has issues on a few legacy templates would receive a high score reflecting its overall quality, rather than failing entirely as it would under WCAG 2.x's binary system. This graduated approach incentivizes organizations to invest in continuous improvement rather than viewing accessibility as an all-or-nothing threshold that is too expensive to achieve. The scoring model also makes it easier for procurement officers to differentiate between vendors making genuine effort and those making none.
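The contrast between the two conformance models can be shown in a few lines. WCAG 3.0's actual scoring algorithm is still being drafted, so the graduated score below is a simplified, hypothetical stand-in (a plain pass rate); only the binary-versus-graduated comparison is the point.

```python
# Illustrative only: WCAG 3.0's real scoring model is not yet finalized.
# This contrasts WCAG 2.x-style binary conformance with a hypothetical
# graduated score, using the 90%-of-pages example from the text.

def binary_conformance(page_results: list[bool]) -> bool:
    """WCAG 2.x-style: a single failing page fails the whole evaluation."""
    return all(page_results)

def graduated_score(page_results: list[bool]) -> float:
    """Hypothetical graduated score: fraction of pages passing, 0.0 to 1.0."""
    return sum(page_results) / len(page_results) if page_results else 0.0

# Nine of ten pages have good contrast; one legacy template does not.
results = [True] * 9 + [False]

print(binary_conformance(results))  # False: fails outright under 2.x
print(graduated_score(results))     # 0.9: high score under a graduated model
```

The same data yields a hard fail in one model and a high score in the other, which is exactly the incentive difference the paragraph above describes.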
A WCAG 3.0-style evaluation of a mobile banking application tests whether users with various disabilities can successfully complete core tasks — checking balances, transferring funds, paying bills — rather than auditing individual technical criteria in isolation. The outcome-based approach reveals that while the app passes most WCAG 2.1 technical checks, users with cognitive disabilities struggle to complete transfers because the multi-step flow provides insufficient context and unclear progress indicators. This user-centered evaluation method catches real accessibility barriers that technical audits miss.
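An outcome-based evaluation like the one above can be sketched as per-task completion rates across participants, rather than a checklist of technical criteria. The task names, user groups, and results below are invented to mirror the banking-app example; they are not a real evaluation protocol.

```python
# Hypothetical sketch of outcome-based task testing. Instead of auditing
# individual technical criteria, record whether each participant could
# complete each core task, then compute per-task completion rates.

def task_completion_rates(
    task_results: dict[str, dict[str, bool]],
) -> dict[str, float]:
    """Completion rate per task across all observed participants."""
    return {
        task: sum(by_user.values()) / len(by_user)
        for task, by_user in task_results.items()
    }

# Invented observations mirroring the mobile-banking example above.
observations = {
    "check balance":  {"screen-reader user": True, "cognitive-disability user": True},
    "transfer funds": {"screen-reader user": True, "cognitive-disability user": False},
    "pay bill":       {"screen-reader user": True, "cognitive-disability user": True},
}

rates = task_completion_rates(observations)
# "transfer funds" scores 0.5: the multi-step flow blocks one user group
# even though the app passes most WCAG 2.1 technical checks.
```

A criterion-by-criterion audit of the same app could report near-total compliance; the task-level view is what surfaces the transfer-flow barrier.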
An organization's leadership decides to delay all accessibility investment until WCAG 3.0 is finalized, arguing that building to WCAG 2.1 would be wasted effort if the standard changes. Meanwhile, the years-long W3C standardization process continues, the product accumulates accessibility debt with every release, and the organization faces both legal risk under current regulations and a growing remediation burden. When WCAG 3.0 eventually publishes, the team discovers that most WCAG 2.1 AA practices remain foundational to the new standard and they could have been building on solid ground all along.
• The most dangerous mistake is using WCAG 3.0's draft status as a reason to delay accessibility work: WCAG 2.1 AA remains the current legal and industry benchmark, and the vast majority of its requirements will carry forward into 3.0 in some form.
• Teams also mistakenly assume that the graduated scoring model means lower standards; in reality, WCAG 3.0 is expected to raise the bar by expanding scope to cognitive accessibility, emerging technologies, and outcome-based evaluation, while simply providing a more nuanced way to measure conformance.
• Another error is treating early Working Drafts as reliable implementation guides: the specification is evolving substantially between drafts, and building tooling or processes around draft-specific details risks rework when the final standard diverges.