The role of automated accessibility testing

Paul Boag

Many tools on the market automate the process of checking for website accessibility. However, there are some serious question marks over the value of such tools.

There are many different reasons why automated checkers have limited value. However, the forthcoming Government Guidelines on Accessibility provide a very neat summary:

"…automated tools are like spell checkers – they look for obvious problems within a web page, and then generate a list of possible problems. They cannot give a straightforward statement of whether your website meets certain accessibility standards. The list of possible problems needs to be interpreted by an experienced person and matched against what your site is actually doing. There is a substantial list of accessibility issues (at least 50%) that cannot be assessed by current automatic tools…"

Subjective decision making

The Government Guidelines on Accessibility show us that automated checking alone cannot be trusted. Computers are great at answering questions with yes/no answers; they are not so good at making subjective decisions. For example, a computer can easily tell you whether an image has an alt attribute, but it cannot tell you whether that alt text actually describes the image. As stated above, at least 50% of the WAI checkpoints require subjective decision-making and so need a manual check.
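The alt attribute example can be sketched as a toy automated check. This is a minimal illustration in Python of the yes/no question a machine can answer, not the logic of any real accessibility tool:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Toy automated check: collects <img> elements that have no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "(no src)"))

def check_alts(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.missing_alt

# The machine can answer the yes/no question...
print(check_alts('<img src="logo.png">'))               # flagged: no alt at all
# ...but it cannot judge whether this alt text is meaningful:
print(check_alts('<img src="chart.png" alt="image">'))  # passes, yet "image" tells the user nothing
```

The second case is exactly the subjective gap: the attribute is present, so the automated check is satisfied, but only a human can decide whether "image" describes the chart.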

It is this need for manual checking that undermines the primary, time-saving benefit of automated tools. An automated checker can scan a page and give it the "all clear", but you still need to visit that page yourself to ensure it meets the subjective checkpoints.

Can even the automated checks be trusted?

It is also important to question the reliability of the checks made by automated tools. I believe that practically all of the checks made by accessibility checkers also need to be verified manually, because automated tools are built on certain assumptions. The algorithms a tool uses to assess a website depend entirely on the developer's own interpretation of guidelines that are often themselves subjective.

When an automated tool flags up an error, it is the developer’s interpretation of the checkpoint that is being tested, and not necessarily the checkpoint itself. It is important when using automated testing tools to have an informed opinion on all web accessibility issues in order to be able to verify results.

Some accessibility issues are not covered by WCAG guidelines

It is possible to create a website that complies with the WCAG guidelines and still presents accessibility barriers. A site whose text is not fixed in size but scales between "1pt" and "4pt" technically meets the Web Content Accessibility Guidelines, and it will incidentally pass through most automated testing tools. Yet such text would make the site inaccessible not only to disabled people but to almost everyone. Ironically, the only people likely to be able to use the site without altering their browser settings would be screen reader users, who are not affected by text size. So while measuring accessibility against the WCAG guidelines is undeniably the best starting point, there is more to accessibility than a list of checkboxes.
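The gap between a mechanical pass and real usability can be illustrated with another toy check. WCAG 1.0 did advise using relative font-size units, but the regex and function here are my own hypothetical simplification, not how any real checker works:

```python
import re

# Toy check for the "use relative units for font sizes" advice:
# it only asks the yes/no question "is the size fixed in absolute units?"
ABSOLUTE_UNITS = re.compile(r"font-size\s*:\s*[\d.]+(pt|px|cm|mm|in|pc)\b")

def uses_fixed_font_size(css):
    """Return True if the stylesheet sets a font size in an absolute unit."""
    return bool(ABSOLUTE_UNITS.search(css))

print(uses_fixed_font_size("body { font-size: 12pt; }"))   # flagged: absolute unit
print(uses_fixed_font_size("body { font-size: 0.2em; }"))  # passes, yet 0.2em is unreadably small
```

The second stylesheet satisfies the letter of the rule (the size is relative and scalable), so a mechanical check waves it through, even though the text is illegible for almost every sighted visitor.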

There is still a place for automated testing

So is there no place for automated tools? Well, personally, I cannot bring myself to claim they are redundant. After all, my first tentative step into the world of accessibility was to use Bobby. If it had not been for that automated checker, I could well have been put off by the intimidating WAI checkpoints. Surely, if all you do is check your site using an automated tool, that is still better than doing nothing at all. The danger is that you never move beyond it and never recognise that web accessibility is far more complex and subjective than a set of automated checkpoints.

My thanks to Ian Dunmore of Public Sector Forums and Grant Broome from the Shaw Trust for their contribution to this post.