
Remaining challenges

White paper: conducting user evaluations with people with disabilities

"One man's meat is another man's poison."
This document has focused on identifying sources of difficulty for users when interacting with a product. However, identifying the sources is only the first stage. The next step is to take action to re-design the product to remove or remediate those difficulties. If you have involved users with different types of impairments in the user sessions, you may find that some user requirements complement those of other users; for example, large text often helps users with low vision as well as some users with reading difficulties. However, some user requirements will conflict with others: large text on a mobile telephone or PDA helps low vision users, but it means less information can be displayed at one time, which increases the memory demands on the user, a problem for people with memory impairments, including some people with Parkinson's disease.

These conflicts are easier to identify if you have evaluated the product with diverse impairment types. If you have only evaluated the product with one impairment type (e.g. low vision), bear in mind that conflicts between impairment types will probably still exist even if you have not observed them directly.

What are the weaknesses with this work to date?
One of the biggest challenges of accessibility is validating that software, Web pages, or hardware are truly accessible and usable. It would be convenient if a single tool could validate that a product is accessible, but every tool has its limitations, and in practice manual testing and human judgment are often required. Moreover, government regulations, standards bodies' recommendations, and the technologies themselves are constantly evolving. For example, IBM's aDesigner is a disability simulator that helps Web designers ensure that their pages are accessible and usable by people with visual impairments, but few evaluation tools address both usability and accessibility. The best recourse is to know your customers, design products that are usable for them, and continue to work with them to ensure your product is as accessible and usable as possible.

What research still needs to be done?

Methods of helping identify 'showstoppers'.
When working with PwD, you should expect the first few sessions to reveal a number of showstopper problems, i.e. those from which the user cannot recover. Frequently these showstoppers originate from traditional usability problems or from conflicts with the assistive technology being used. While the purpose of the user evaluation sessions is to identify sources of difficulty, having multiple users all stop at the same point, unable to proceed, is a poor use of their time. It would be better if the obvious showstoppers could be identified up front, before beginning the expensive user sessions. Research is under way on simulating the more obvious and easily understood impairments so that designers can identify showstoppers earlier in the design process. A complementary approach is to develop educational techniques that help designers understand the basic features of common impairments, so that showstoppers are not introduced in the first place.

Do different assistive technologies yield different results?
Different assistive technologies (AT) can yield different results. For an AT to present useful information to the user, it must extract the meaningful information provided by the application. For example, most Windows applications use Microsoft® Active Accessibility® (MSAA) to provide this interface between the AT and the application. MSAA provides a programming interface that allows applications to communicate directly with an AT: MSAA-enabled applications act as servers, and ATs act as clients of MSAA. Because ATs evolved over time alongside Windows and Windows applications, most ATs use a combination of MSAA and their own off-screen model to determine the user interface presented by a software application. So not all AT products will extract or convey the same information to the user; each AT makes its best guess based on the information the application provides.

So how do application developers and session administrators ensure their products are accessible? The best practice is to evaluate your product's accessibility using the AT that most of your customers use. If one AT does not present the correct information to the user, try another AT to rule out a defect in the AT itself. Tools such as Inspect Objects are very useful for determining whether programming errors in the application keep information from being accessible. Your application may be technically accessible yet still not work correctly with a particular AT; in that case it is necessary to work directly with the AT vendor to ensure proper interoperability.