CWAC scans government websites for accessibility issues. The results help agencies improve accessibility, inform guidance, and may drive leaderboards — though automated testing has clear limits.

About CWAC

The Centralised Web Accessibility Checker (CWAC) is an automated testing tool developed by the Government Chief Digital Officer (GCDO).

CWAC helps find accessibility issues on web pages, particularly failures to meet the Web Content Accessibility Guidelines (WCAG) 2.2, which form the basis of the NZ Government Web Accessibility Standard.

Monitoring programme

The GCDO’s Web Standards team started using CWAC to test the publicly facing websites of government agencies directed by Cabinet to meet the Web Standards, plus those of a few other agencies that asked to participate.

The websites are scanned every 3 months, and the results are shared with agencies and the public via Data.govt.nz. The results also inform the types of support and guidance the Web Standards team provides.

Automated testing: strengths and limitations

What it’s good for

Automated testing is a cost-effective way to detect certain types of accessibility issues. It’s easy to integrate early and throughout the digital product lifecycle, helping designers and developers catch issues when they arise. Over time, it also teaches them about accessibility and how to avoid such issues in the future.
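
For example, an automated accessibility check can run inside a team’s existing test suite. The following is a minimal sketch, not part of CWAC, using the open-source axe-core engine via @axe-core/playwright; the URL is a placeholder:

  import { test, expect } from '@playwright/test';
  import AxeBuilder from '@axe-core/playwright';

  test('home page has no automatically detectable accessibility issues', async ({ page }) => {
    // Placeholder URL for the site under test.
    await page.goto('https://www.example.govt.nz/');

    // Run axe-core against the rendered page, limited to WCAG 2.x A/AA rules.
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa'])
      .analyze();

    // Fail the test (and the build) if any violations are detected.
    expect(results.violations).toEqual([]);
  });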

Limitations

Automated tools can only detect issues they’re programmed to find. They cannot:

  • find all accessibility failures on a website
  • measure WCAG conformance.

Manual assessment by humans is always required for a complete accessibility evaluation.

If the tool raises false positives (that is, it identifies errors that are not actually errors), a website could have fewer real errors than the tool counts.

Not all errors found by automated tools have the same impact on users. A website with hundreds of minor colour contrast violations might not be as inaccessible as a website with just a few errors preventing people from submitting a form.
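
Some testing engines expose this difference directly. axe-core, for example, labels each violation with an impact level from 'minor' to 'critical'. As a sketch only (the type below is a simplified subset of axe-core’s real result shape, and CWAC does not necessarily weight errors this way):

  // Minimal subset of an axe-core violation result, for illustration.
  type Violation = {
    id: string;
    impact: 'minor' | 'moderate' | 'serious' | 'critical' | null;
    nodes: unknown[]; // each node is one affected element on the page
  };

  // Tally affected elements by impact level, so a handful of critical
  // failures isn't drowned out by many minor ones.
  function countByImpact(violations: Violation[]): Record<string, number> {
    const counts: Record<string, number> = {};
    for (const v of violations) {
      const impact = v.impact ?? 'unknown';
      counts[impact] = (counts[impact] ?? 0) + v.nodes.length;
    }
    return counts;
  }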

These limitations of automated testing tools mean we need to be careful how we interpret and use the results they generate.

Automated testing as an indicator of practice and maturity

While automated test results cannot be used to rank websites by accessibility or WCAG conformance, they can offer insight into how agencies are using automated testing tools. Regular use of automated testing suggests an agency is taking steps to integrate accessibility into its workflows, which is a sign of digital maturity.

A website with few or no automatically detectable errors likely indicates that an agency is actively using automated testing tools and fixing issues. A website with many errors suggests the agency is not taking advantage of automated testing.

Because automated accessibility testing is easy and inexpensive to integrate into the digital lifecycle, there is little reason for web pages to be published with any automatically detectable accessibility errors.
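
One way to act on this is a simple gate in the publishing pipeline that blocks a release while detectable errors remain. Below is a hypothetical sketch using the open-source pa11y tool (the URLs are placeholders, and this is not how CWAC itself works):

  import pa11y from 'pa11y';

  // Placeholder list of pages about to be published.
  const pages = [
    'https://www.example.govt.nz/',
    'https://www.example.govt.nz/contact',
  ];

  // Test every page; pa11y defaults to the WCAG 2 AA ruleset.
  const results = await Promise.all(pages.map((url) => pa11y(url)));
  const failing = results.filter((r) => r.issues.length > 0);

  for (const r of failing) {
    console.error(`${r.pageUrl}: ${r.issues.length} detectable issue(s)`);
  }

  // A non-zero exit code stops the publish step.
  process.exit(failing.length === 0 ? 0 : 1);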

CWAC scores and leaderboards

Since the beginning of the CWAC monitoring programme, we have wanted to publicly rank government agencies based on their websites’ results. However, given the limitations of automated accessibility testing, we want to be clear about what CWAC measures and ensure that the results are representative of what agencies are doing.

Purpose of the CWAC leaderboard

Publishing agencies’ CWAC results in a leaderboard can motivate them to adopt better accessibility practices. Leaders generally want their agencies to rank near the top of a leaderboard — or at least not the bottom. When performance is made public, leaders are more likely to act to improve their agency’s standing relative to others.

Publishing CWAC data also allows comparison across agencies: by combining the CWAC results for all of an agency’s websites, we can calculate a score for that agency that indicates how well it is using automated accessibility testing tools.
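
The exact scoring formula isn’t described here, but as an illustration of the idea, here is a hypothetical sketch (the data shape and the formula are assumptions, not CWAC’s actual method):

  // Hypothetical per-site result; not CWAC's actual data model.
  type SiteResult = {
    url: string;
    pagesScanned: number;
    errors: number; // automatically detectable errors found
  };

  // One plausible agency-level score: errors per scanned page across all
  // of the agency's sites, inverted so fewer errors means a higher score.
  function agencyScore(sites: SiteResult[]): number {
    const pages = sites.reduce((sum, s) => sum + s.pagesScanned, 0);
    const errors = sites.reduce((sum, s) => sum + s.errors, 0);
    if (pages === 0) return 0; // no coverage means no meaningful score
    return 1 / (1 + errors / pages); // in (0, 1]; 1.0 = no detectable errors
  }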

Incomplete test results are not representative

The first official CWAC scan targeted 517 websites across 46 agencies.

Unfortunately, some sites blocked CWAC. More than 10% of the websites had zero pages scanned. In some cases, fewer than half of an agency’s websites were scanned.

While the scan identified real issues and provided useful data for agencies and the Web Standards team, the incomplete coverage meant the results were not suitable for comparing agencies.

Next steps

Since June, improvements to the software and greater collaboration with agencies have reduced the number of sites blocking CWAC. In a recent test, nearly all websites (more than 98%) were scanned.

If this level of coverage is maintained, the next official scan in late September is expected to provide a representative dataset. If so, we plan to publish a leaderboard that reflects agencies’ use of automated accessibility testing tools, rather than the accessibility of their websites.
