5. Research and Analysis

5.1. Phase One Results

A total of 30 websites were evaluated using the methodology described in Section 3; an overview of the results is included in Appendix 6, with the full dataset available upon request to the author. Only one website passed every Priority 1 checkpoint, and this was also the only website to achieve AA–compliance by passing both Priority 1 and 2 checkpoints. One further website passed every Priority 2 checkpoint, and an additional two websites passed every Priority 3 checkpoint. Unfortunately, these three sites violated checkpoints in the other two categories and therefore cannot be considered even potentially compliant.

Since Priority 1 violations represent the most critical accessibility barriers, the pass rate of just 3.3% in this study is significantly worse than the outcome of the DRC study, which recorded a Priority 1 pass rate of 19% (DRC, 2004; p.22), albeit from a much larger sample. Research by Williams and Rattray (2003; p.713) concurs more closely with this study, finding an 82% failure rate at Priority 1 from a more comparable sample of 72 websites.

The author’s research compares particularly unfavourably with the work of Petrie et al. (2005; p.13), which found that 42% of sampled sites had no automatically–detectable Priority 1 violations. However, it is important to consider that this figure was generated from a sample of 300 museum, library and archive sites, a sector in which there has been a greater emphasis on accessibility for much longer than in the commercial market (Petrie et al., 2005; p.1).

The author’s research found that 3.3% of the sample (1 site) met the requirements for AA–compliance, compared with 0.6% of sites tested by the DRC. In concordance with the DRC study (DRC, 2004; p.23), no sites met AAA–compliance. When considering automated testing pass rates, it must be remembered that any pass is provisional, pending manual evaluation.

As Petrie et al. (2005; p.15) make clear, it is important to consider more than just failure rates when assessing website accessibility. To this end, they suggest a Designer Measure and a User Measure. The Designer Measure indicates the number of different checkpoint violations per page, which can also be considered the number of issues the designer needs to attend to. The User Measure represents the number of issues which may impede a user’s progress: in essence, the number of instances of checkpoint violations per page which may produce a barrier to access.
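To make the distinction concrete, the following Python sketch computes both measures from a list of violation records for a single page. It is illustrative only; the checkpoint values shown are hypothetical and are not drawn from the study data.

    from collections import Counter

    # Hypothetical violation records for one home page: each entry names
    # the checkpoint violated; repeated entries are repeated instances.
    page_violations = ["1.1", "1.1", "1.1", "2.2", "2.2", "6.5", "12.1"]

    # Designer Measure: the number of *distinct* checkpoints violated on
    # the page, i.e. how many different issues the designer must attend to.
    designer_measure = len(set(page_violations))

    # User Measure: the total number of violation *instances* on the page,
    # i.e. how many points at which a user may encounter a barrier.
    user_measure = len(page_violations)

    print(designer_measure)          # 4 distinct checkpoints
    print(user_measure)              # 7 violation instances
    print(Counter(page_violations))  # instances broken down by checkpoint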

The Designer Measure for each home page ranged from 1 through to 8 and produced a mean of approximately 5 violations, with both modal and median averages arriving at 6. This is slightly lower than the DRC study, which produced a mean of 8 violations per home page (DRC, 2004; p.23), but more closely aligned with the work of Petrie et al. (2005; p.16), which produced a mean of 6.

Within the entire sample, 12 different checkpoint categories were violated (see Figure 1; checkpoint numbers are decoded in Appendix 7), with warnings issued for a further 9 categories. In total, the sample highlighted 21 types of potential accessibility issue from the 95 items for which FWAI checks. Checkpoint 1.1 (provision of alternative text for non–textual elements) was violated by 97% of websites, meaning that most websites might be excluding visually–impaired users from certain content.
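As an illustration of the kind of automated test involved, a minimal Checkpoint 1.1 check might simply flag images which carry no alt attribute. The Python sketch below does this using only the standard library; real evaluation tools such as FWAI apply many more rules, so this should be read as an approximation of a single test, not a description of FWAI itself.

    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        """Sketch of one automated Checkpoint 1.1 test: count <img>
        elements which carry no alt attribute at all."""
        def __init__(self):
            super().__init__()
            self.missing_alt = 0

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.missing_alt += 1

    checker = AltTextChecker()
    checker.feed('<p><img src="logo.png"><img src="map.png" alt="Site map"></p>')
    print(checker.missing_alt)  # 1 image lacks alternative text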

Figure 1: Checkpoint violations by type

Of all Checkpoint 1.1 violations, by far the most common issue was failure to provide alternative text for images, an error echoed by Williams and Rattray (2003; p.715). Interestingly, however, one of the less common violations in this study (Checkpoint 3.4, present on 37% of sites) was exhibited by 99% of websites analysed in the Irish study conducted by McMullin (2002; Pervasive Defects). Provision of alternative text came second in McMullin’s study, having been judged to be omitted by 91% of the sites audited.

The seven most commonly violated checkpoints accounted for 93% of the distinct checkpoint violations recorded. Whilst only three of these were classified as Priority 1 (Checkpoints 1.1, 6.5 and 12.1), those three accounted for 35% of the total categories violated. These figures suggest a degree of commonality in the type of mistakes the sampled websites are making.

The User Measure per home page (see Figure 2) ranged from 3 to 76 with a mean average of 23 and a median average of 17. The modal average was 6, but with only two occurrences this figure was not considered representative of the sample; a standard deviation of 18 confirms a reasonably broad distribution of results.

Figure 2: User measure per home page

In contrast to these findings, the DRC study found “…approximately 108 points per page where a disabled user might encounter a barrier to access” (DRC, 2004; p.24). However, the report fails to make clear what measure of central tendency was used to determine this number. Additionally, Petrie et al. (2005; p.16) calculated an average (presumably mean) User Measure of 56.9 per home page. These significantly greater figures could be attributed to the tendency for the home pages sampled by both studies to be larger and more complex than those owned by the average SME.

The majority of home pages generated between 0 and 7 instances of checkpoint violations for each Priority (see Figure 3). However, the outliers in this data are accounted for mainly by Priority 2 and 3 violations, which are contravened in greater numbers. This can be partially explained by the fact that there are fewer Priority 1 checkpoints, but it may also indicate that satisfying Priority 1 checkpoints comes more naturally to web developers, perhaps more through common sense than concern for disability issues.

Figure 3: Distribution of Checkpoint Violations

Figure 4 depicts the proportionality of the User Measure across all sites. There were a total of 135 Checkpoint 1.1 violations across the whole sample, with a median average of 3 violation instances per home page; only one site successfully avoided failure in this category. At 303 instances, Checkpoint 2.2 accounted for 44% of the total User Measure, despite being violated by only 17 sites, suggesting that a small number of websites may have skewed this figure.
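Checkpoint 2.2 requires sufficient contrast between foreground and background colours. As a sketch of how such a test can be automated, the function below implements the colour–brightness and colour–difference formulae suggested in the W3C’s working draft on evaluation and repair tools, with its proposed thresholds of 125 and 500; actual tools may use different formulae or thresholds.

    def brightness(rgb):
        """Perceived brightness per the W3C working-draft formula."""
        r, g, b = rgb
        return (r * 299 + g * 587 + b * 114) / 1000

    def sufficient_contrast(fg, bg):
        """True if the colour pair passes both draft thresholds: a
        brightness difference above 125 and a colour difference above 500."""
        brightness_diff = abs(brightness(fg) - brightness(bg))
        colour_diff = sum(abs(f - b) for f, b in zip(fg, bg))
        return brightness_diff > 125 and colour_diff > 500

    print(sufficient_contrast((0, 0, 0), (255, 255, 255)))        # True: black on white
    print(sufficient_contrast((100, 100, 100), (140, 140, 140)))  # False: grey on grey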

Figure 4: Checkpoint Violation Instances by Type

5.2. Phase Two Results

With the data collected over a period of five weeks, the user evaluation questionnaire received a total of 38 responses. An overview of the results is included in Appendix 8, and the full dataset is available on the disc. Six of the participants were exposed to the pilot survey, a minor variant of the main research instrument (Hewson et al., 2003; p.113), which yielded some comments about the questionnaire form itself. These comments have not been included in the analysis, but the rest of the data has been used.

5.2.1. Analysis of Participant Demographics

The age distribution of the sample is significantly skewed towards the 40–64 age group, which also accounts for the large majority of male respondents (see Figure 5). Female participants accounted for 55% of the sample and, whilst they were more evenly distributed among age groupings, younger respondents, who form the dominant Internet user group, are still under–represented.

Figure 5: Age/Sex Distribution

The respondents demonstrated a high degree of computer literacy, with 84% of the sample reporting 5 or more years of Internet experience and 89% claiming to use the Internet every day. Most respondents (92%) claimed to use the Internet for 3 or more of the purposes listed. Figure 6 shows that 95% of respondents surf the web, 92% use email, newsgroups or forums and 84% buy products or services online.

Figure 6: Internet Use by Category

With only 69% of the UK population using the Internet every day or almost every day (ONS, 2008; p.5), such a large proportion of competent Internet users may represent a bias in the sampling. Alternatively, it may simply paint a picture of greater computer literacy among those who do use the Internet in 2009. In 2001, only half of the 60 participants studied by Nielsen and Pernice (2001; p.131) claimed to be using the Internet every day, with 32% having 3 years of experience or less.

5.2.1.1. Disability

There were a total of 26 respondents who considered themselves to have a disability, accounting for 68% of the sample (see Figure 7). Although disproportionate to the wider population, this sample met the research requirements outlined in Section 3.3.2 whilst still providing an appropriate non–disabled control group.

Figure 7: Proportion of Disabled Respondents

The most common disabilities reported by participants were auditory (46% of respondents) and visual (38% of respondents), as illustrated by Figure 8. This reflects the bias in assistance the author received from organisations representing deaf and blind causes. Five disabled respondents reported more than one disability, meaning that more disabilities were reported than there were disabled participants.

Figure 8: Disability by Category

The disabled participant group reported using the whole gamut of assistive technologies offered as potential responses (see Figure 9). The largest number (31%) used screen–reading software, and a combined 36% of respondents claimed to use Braille interfaces, screen magnification or adapted colour schemes, at 12% for each adaptation.

Figure 9: Assistive Technologies Used

In concordance with the general level of computer literacy outlined earlier, those using assistive technologies also demonstrated significant experience with their adaptations as Figure 10 shows.

Figure 10: Assistive Technologies Experience

Only 14% of assistive technology users claimed 1 year of experience or less with their equipment, whilst 40% reported more than 5 years of experience. Within this user group it is therefore reasonable to assume that many of the problems experienced with the test websites relate either to shortcomings of the assistive technology used or to issues with the design of the website in question, rather than to inexperience on the part of the user.

5.2.1.2. Sensitivity to Design Issues

Somewhat unsurprisingly, Figure 11 shows that the vast majority of both disabled and non–disabled respondents considered website ease of use to be either ‘quite’ or ‘extremely’ essential. Due to the sampling techniques used, it is likely that many of the respondents were already sensitised to web design issues, either through the problems they have encountered as experienced disabled web users or through their position within an organisation which promotes disability causes.

Figure 11: Sensitivity to Website Ease of Use

5.2.2. Analysis of Participant Website Evaluations

Only two respondents failed Task 1 by being unable to give a description of the product or service offered by the website owner. A single participant failed to find a contact telephone number, and a further participant found only an email address. Although not the contact method requested, email is an acceptable means of communication, so only one participant can be considered to have failed Task 2.

No participant failed more than one task, but the three who failed either task were all disabled, with two reporting an auditory condition and the other reporting both visual and auditory conditions. This gives a success rate of 91.4% for disabled users, which is higher than the 75.6% success rate identified by Petrie et al. (2005; p.20) across all disabled user groups. The survey techniques used in this study were based upon the work of Petrie et al. (2005; p.11), although it could be argued that the tasks set in that earlier study were slightly harder than those used here.

From analysis of Figure 12, it is difficult to determine a strong correlation between disability and website user experience; however, it can be seen that the vast majority of non–disabled users found the task easy, whilst only a small proportion found it at all difficult.

Figure 12: Experience of Task 1 versus Disability

Nevertheless, the majority of disabled users did not find the task hard either, and bivariate analysis of Task 2 (see Figure 13) supports this observation.

Figure 13: Experience of Task 2 versus Disability

The following comparisons use the scale ranging from 1 meaning ‘extremely easy’ to 7 meaning ‘extremely hard’. The median rating for Task 1 was 2.5 from both the blind and deaf respondent groups, whilst non–disabled users gave a median rating of 2. Task 2 had a median rating of 3 from the blind user group and 2 from the deaf user group, with a median rating of 1 from the non–disabled group. There were not enough respondents from other disabled user groups to make further comparisons fruitful, but overall these results show that these disabled respondents found the tasks only slightly harder than their non–disabled counterparts.
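The group medians reported above can be reproduced from raw ratings with a few lines of code; the rating values in the sketch below are hypothetical placeholders rather than the study data.

    from statistics import median

    # Hypothetical Task 1 ratings on the 1-7 scale, keyed by respondent
    # group (1 = 'extremely easy', 7 = 'extremely hard'); illustrative only.
    task1_ratings = {
        "blind": [2, 3, 2, 3],
        "deaf": [2, 3, 3, 2],
        "non-disabled": [1, 2, 2, 3],
    }

    for group, ratings in task1_ratings.items():
        print(group, median(ratings))  # blind 2.5, deaf 2.5, non-disabled 2.0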

5.2.2.1. Automated Analysis Versus User Evaluation

It is useful to compare the websites where users had the most and least problems with the results of the first research phase. Not every home page evaluated in Phase One was user–tested, and of the 18 sites which were tested, only 10 were evaluated by more than one participant. It is methodologically suspect to draw conclusions about a website from a single user impression, so the author makes it clear below where this is the case.

Interestingly, the home page with the best User Measure overall (despite a single failure for each Priority) fared quite badly in user evaluations. Having been rated by three participants, the site scored a mean of 6.3 (‘quite unsure’) for user confidence and 6 (‘quite poor’) for considering user needs, perhaps indicating the importance of that single Priority 1 violation. The home page with the poorest User Measure was not evaluated, but the page with the second–poorest User Measure was evaluated by one participant, who reported both visual and auditory disabilities. This respondent rated the site ‘7’ (‘extremely hard’) for both Task 1 and navigation ease, gave a ‘6’ (‘quite hard’) for Task 2 and a rating of ‘7’ (‘extremely unsure’) for user confidence.

The only site to hold potential AA–compliance was evaluated by three respondents, all of whom reported a disability. With a mean of 3 (‘slightly easy’) for Task 1, 1.3 (‘extremely easy’) for Task 2, a navigation ease of 2.6 (‘slightly easy’) and a rating of 2.3 (‘quite well’) for considering user needs, this website lends some credibility to the accuracy of automated tools.

Conversely, what can tentatively be considered the easiest site to use according to user ratings is remarkable in that it also presented nineteen Priority 1 violation instances, the greatest number of all the sites studied. This contradiction could possibly be explained by the fact that one of the four evaluators was not disabled, and the three others reported no visual or cognitive disabilities, these being the two user groups who struggle most with low accessibility (DRC, 2004; p.26).

5.2.2.2. Qualitative Data

A brief analysis of the qualitative data produced a series of word diagrams (see Appendix 9). The language used corroborates the high level of sensitivity to design issues reported earlier and suggests that several respondents had an acute awareness of accessibility issues. It may therefore be that participants were actively looking for accessibility problems rather than simply encountering them naturally.