7. Recommendations

7.1. Methodological Recommendations

Future research should concentrate on improving the validity and reliability of the findings, with consideration given to the following points:

  • Larger samples would increase the generalisability and confidence level of the findings.
  • A larger sample of users would allow for more comparisons to be drawn between automatic and manual evaluations. Too few websites were evaluated by more than one user, limiting the amount of directly comparable data.
  • Most studies used the accessibility module of the IBM Rational Policy Tester (previously known as Bobby and WebXM); using a different tool affected the author's confidence in the comparability of the results.
  • Fujitsu Web Accessibility Inspector has been observed by the author to produce false negatives; more than one evaluation program should therefore be used to minimise the impact of false positives or false negatives from any single tool (Kane et al, 2007; p.150).
  • An incidence-of-accessibility measure (DRC, 2004; p.23) would provide a more equitable representation of accessibility by comparing potential points of failure with actual points of failure.
  • The FSB claims to have in excess of 215,000 members (FSB, 2009c), but this is a paid membership and is unlikely to represent the 4.7 million UK SMEs faithfully. A more independent sampling frame is recommended.
  • The low response rate suggests that the questionnaire may have been too long. For example, one forum post requesting participants attracted 400 views but received only a single response. Alternative methodologies, such as a controlled study similar to that conducted by the DRC (2004; p.27), would allow more revealing comparisons between user groups, such as the time taken to complete a task.
  • Due to the sample selection process and technical implementation, it was difficult to determine how many people became aware of the questionnaire, or started to answer it, but did not complete it.
  • A self-administered questionnaire meant that participants had to assess live websites, which were subject to change at any moment. This affects the reliability of the study, since the site submitted for automatic evaluation and the site seen by survey respondents may have differed.
  • The perpetual evolution of the Internet necessitates the continual reassessment of website accessibility. There may be value in a longitudinal study (Saunders et al, 2003; p.96) to establish whether sites are being redesigned with the same (or different) accessibility problems.
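The incidence measure recommended above can be sketched very simply. The function below is illustrative only, assuming per-checkpoint counts of potential and actual points of failure are available; it is not the DRC's exact formulation.

```python
def accessibility_incidence(potential_failures, actual_failures):
    """Proportion of potential points of failure that are actual failures.

    An illustrative incidence-of-accessibility measure: rather than a raw
    count of violations, it relates failures to the opportunities a page
    had to fail (e.g. images missing alt text out of all images present).
    """
    if potential_failures == 0:
        return 0.0  # no opportunities to fail, so no incidence
    return actual_failures / potential_failures

# e.g. a page with 40 images, 10 of which lack alternative text
print(accessibility_incidence(40, 10))  # 0.25
```

Expressed this way, a large site with many images and few omissions scores better than a small site with the same absolute number of omissions, which is the more equitable comparison the DRC measure aims for.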

7.2. Substantive Recommendations

Many reasons have been cited for the state of website accessibility. Krug (2006; p.173) concedes that "it's a lot harder than it ought to be to make a site accessible", alluding to the complexity and indefinite nature of the WCAG. As the implementation of this research highlighted, even software developed by a leading technology company such as Google can produce not only invalid code, but a web form which is completely unusable by blind individuals.

Whilst much research has warned against over-reliance upon automated evaluation tools (Williams and Rattray, 2003; p.715), the average SME website design fails even to reach this stage. If the websites in question were to address the four most commonly violated checkpoints in Figure 1, then a significant proportion of potential accessibility barriers could be removed relatively easily.
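Many of the most frequently violated checkpoints can be detected mechanically, which is precisely why automated tools catch them. As an illustration only (not a substitute for a full evaluation tool, and with the checkpoint chosen purely as an example), the following sketch uses Python's standard html.parser to flag img elements lacking an alt attribute, a failure of WCAG 1.0 checkpoint 1.1:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count <img> tags that lack an alt attribute (WCAG 1.0 checkpoint 1.1)."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<p><img src="logo.png"><img src="map.png" alt="Site map"></p>')
print(checker.missing_alt)  # 1
```

Checks of this kind are cheap to run at build time, which is why addressing the most common violations need not wait for a full accessibility audit.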

Participant comments on user experience highlighted not only accessibility issues, but also much more general design problems, suggesting that user needs have not been fully considered. Krug emphasises the importance of user involvement in website design (Krug, 2006; p.134) and that this need not cost very much (Krug, 2006; p.137). In support, Nielsen suggests that testing with as few as five users can uncover 85% of usability problems, and recommends three such sessions, alternating iterative testing and design stages (Nielsen, 2000).
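Nielsen's 85% figure comes from a simple cumulative model, 1 − (1 − L)^n, where L is the proportion of problems a single user uncovers; the commonly quoted value L ≈ 0.31 is an assumption taken from Nielsen and Landauer's work rather than from this study. A quick check of the arithmetic:

```python
def problems_found(n_users, L=0.31):
    """Proportion of usability problems found by n test users, using the
    cumulative model 1 - (1 - L)**n, where L is the per-user detection
    rate (L = 0.31 is Nielsen and Landauer's commonly quoted estimate).
    """
    return 1 - (1 - L) ** n_users

print(round(problems_found(5), 2))  # 0.84, i.e. roughly the 85% Nielsen cites
```

The curve flattens quickly beyond five users, which is why Nielsen favours several small iterative rounds over one large test: the sixth user in a single session adds little, whereas five fresh users on a redesigned site find a new crop of problems.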

Higgs remarks that 23% of SMEs think that website accessibility is expensive and/or a waste of time whilst 27% believe that their customers are able enough to cope (Higgs, 2006; Table 4.1). Higgs goes on to illustrate the lack of consensus over where legal responsibility for accessible design should lie (Higgs, 2006; Section 6.1). Confusion abounds surrounding both the legalities and practicalities of accessibility and the issue clearly needs concerted promotional efforts, not just from the W3C but also from the UK Government. Colleges and Universities also have a role to play in sensitising the web developers of the future to the needs of disabled users, so that accessible design becomes an integral part of interaction design, rather than just an afterthought.
