I really enjoy sitting down with the developer to put their site or application to the test of a real screen-reader user. This is extremely rewarding work, because when developers see what problems a real person has, they begin to empathize and understand why accessibility is important. I find it critical to test with the developers because it gives them a deeper understanding of what the problems are. However, lately I'm finding too many problems the developers can't solve, and it turns out the assistive technology I use is the actual problem.
First, I think it's important to explain what one of my typical accessibility reviews is like. Typically my user-experience expert and I sit down with developers and other key people on the project. We always ask the project team to tell us who they think their users are and to identify the key tasks they want people to perform.
We start one of our reviews by introducing a few key concepts for accessibility, such as how a screen reader works and what we'll be looking for in general. Now the fun begins: we start by simply reading the screen and letting everyone in the room hear what a blind person would hear the first time they visit the site or run the application.
I tend not to examine a project beforehand because I want the developers to witness my first encounter with a barrier in real time. That's not to say I never pre-review a site, but most of the time I like it to be as real and natural as possible. While we're listening to the page, my user-experience (UX) person is already running through the source code, looking for any obvious problems to point me toward. She often sets me up by saying, "Let's try this" or "Can you find that?" No matter how many times she does it, I never know it's a setup. Once we discuss what we've heard, we review whether there are any problems that we all caught. Then it's time to move on. Now the real work begins.
The first quick run-through will reveal simple, easy-to-fix problems such as an incorrect semantic structure (headings out of order), unlabeled graphics, or unlabeled form fields. Once we've gotten these barriers out of the way, we start to work through a development task list. I'm given a task that a typical user would perform on the site and, without assistance or instruction, I try to complete it, describing not only what I'm doing but any problems I encounter, as I encounter them, along with potential ways to fix them.
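To illustrate the "unlabeled form field" problem mentioned above: a visible caption sitting next to an input is not enough on its own; the screen reader needs an explicit association. A minimal sketch (the field name is illustrative):

```html
<!-- Without the for/id association, a screen reader may announce
     only "edit" when the user lands on the input. With it, the
     screen reader announces "Email address, edit". -->
<label for="email">Email address</label>
<input type="text" id="email" name="email">
```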
It's astounding how even a simple form can be a challenge if it's not coded correctly for a screen-reader user. For example, if one form field populates other fields depending on the answer, can the screen reader reach the new fields, or does it even know they exist? These reviews can be as long or short as the project owners like, and we repeat them as frequently as we can to avoid introducing any new accessibility barriers. Development teams appreciate these reviews because they give them a real grasp of how a user with a disability interacts with their site. We've found that fixing many of the problems these reviews reveal also makes the site easier to use for everyone.
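One common pattern for the "does it even know they exist?" problem is to move keyboard focus to the newly revealed field, so the screen reader is taken straight to it. A sketch, with hypothetical field names and ids:

```html
<label for="contact">Preferred contact method</label>
<select id="contact">
  <option value="email">Email</option>
  <option value="phone">Phone</option>
</select>

<!-- This block is revealed only when "Phone" is chosen -->
<div id="phone-extra" hidden>
  <label for="phone">Phone number</label>
  <input type="tel" id="phone">
</div>

<script>
  document.getElementById('contact').addEventListener('change', function () {
    var extra = document.getElementById('phone-extra');
    extra.hidden = this.value !== 'phone';
    // Moving focus into the revealed field tells the screen reader
    // that new content now exists and needs attention.
    if (!extra.hidden) {
      document.getElementById('phone').focus();
    }
  });
</script>
```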
Lately I've been coming across problems that are just driving me crazy. Developers who are learning more about accessibility are using techniques such as ARIA (Accessible Rich Internet Applications) to make their sites more accessible. However, the assistive-technology vendors are not keeping up with the standards, and they are breaking the accessibility. For example, ARIA allows form fields to be marked "required" or "not required". Perhaps you're thinking that many forms already indicate this by adding an asterisk or changing the color. That's not sufficient, because a large number of screen-reader users will never notice the asterisk: they don't browse with punctuation reading turned on.
Imagine reading this article and hearing every period, comma, and parenthesis I've used spoken aloud. It gets extraordinarily irritating, so most screen-reader users only turn punctuation reading on when they really have to. An asterisk, then, may never be conveyed to the screen-reader user, and a screen reader doesn't convey color at all. With ARIA, by contrast, the developer can state outright that the name field on a form is required but the phone-number field is not.
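In markup, the distinction described above looks roughly like this (the attribute is real ARIA; the fields are illustrative):

```html
<!-- A screen reader should announce this field as required -->
<label for="name">Name</label>
<input type="text" id="name" aria-required="true">

<!-- ...and this one as optional -->
<label for="phone">Phone number</label>
<input type="tel" id="phone" aria-required="false">
```

The whole point is that "required" reaches the user through speech, not through a visual convention like an asterisk or a color.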
My primary screen reader, JAWS for Windows, has done something that breaks this for me. I tend to set my screen reader to the advanced, expert-level verbosity: I don't want to hear extraneous information repeated over and over while navigating the computer interface. Beginner mode is full of this kind of information, such as "press space bar to activate a button" or "press enter key to enter text in an edit field." I've been using screen readers since 1985, so I really don't need to be told this all the time; that's exactly why the "expert" mode, or lowest verbosity level, exists. However, JAWS maker Freedom Scientific has chosen to speak the ARIA required attribute only at the beginner verbosity level.
This is just one example of how Freedom Scientific's product makes my job harder than it should be. There is a way I can go in and configure whether this particular attribute gets spoken in advanced mode, but why should I have to? Freedom Scientific has made a choice for me that I don't think was theirs to make. Assistive technology should not arbitrarily discard information that developers have deliberately provided.
Something like ARIA exists specifically to give a screen-reader user information, and Freedom Scientific should let the user hear what the developer is trying to tell them, no matter what verbosity level is set. Another example came up when we were working on an expandable content region on a website. For some reason, JAWS only tells me that the expand/collapse element has changed state the first time I activate it; if I activate it again to change the state back, it says nothing. I've checked this element with other screen readers and found that they correctly report the state change every time.

These are both complicated interactions for screen readers, but they are becoming more and more common. Recently there has been a lot of discussion on several accessibility lists about which symbols and typographical codes do or do not get read by screen readers. Screen-reader technology needs to respect the standards, not support them inconsistently and arbitrarily. Screen-reader vendors cannot just decide how they want to implement standards.
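An expandable region like the one described above is typically built with aria-expanded on the trigger, and the screen reader is expected to announce the new state every time it changes, not just the first. A minimal sketch (ids and text are illustrative):

```html
<button id="toggle" aria-expanded="false" aria-controls="details">
  Show details
</button>
<div id="details" hidden>Additional information goes here.</div>

<script>
  document.getElementById('toggle').addEventListener('click', function () {
    var expanded = this.getAttribute('aria-expanded') === 'true';
    // A conforming screen reader should announce "expanded" or
    // "collapsed" on every toggle of this attribute.
    this.setAttribute('aria-expanded', String(!expanded));
    document.getElementById('details').hidden = expanded;
  });
</script>
```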
When we consider that the web is primarily a source of information, we begin to understand how many different techniques are used to convey it. A law student reading through casebooks needs to know whether information has been redacted; people working on other kinds of documents may need to know about typographical symbols. The typical screen-reader user has no way of getting at these symbols or this information. There is no reliable way for a site owner to indicate that information has been deleted and have a screen reader announce it. For example, HTML has markup to indicate strong and emphasized text, but the screen reader does not convey this to the user. If you are an advanced screen-reader user, you might know how to configure your screen reader to report this information, but most screen-reader users don't even know it exists.
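The markup for these distinctions already exists in HTML; the gap is in what gets voiced. For instance, these semantics are typically silent at default screen-reader verbosity:

```html
<!-- Deleted and inserted text: visually struck through or underlined,
     but usually read as plain text by a screen reader -->
<p>The witness stated that <del>the defendant</del> <ins>an unknown person</ins>
   entered the building.</p>

<!-- Strong importance and emphasis: rendered bold and italic,
     but the distinction is rarely spoken -->
<p>This step is <strong>mandatory</strong> and should be done
   <em>before</em> submitting the form.</p>
```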
I have written about mathematics in this blog before, so I won't go deeply into how important it is for a screen-reader user to know that a dash is not a minus sign, or whether a number is raised to the power of x, or whether it's a subscript or a superscript. Needless to say, there is a lot of pressure on developers right now to create accessible content, but there is no comparable pressure on screen-reader manufacturers to improve the way they handle that content.
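The subscript/superscript problem is easy to demonstrate: HTML can express the difference, but whether it is spoken is entirely up to the screen reader.

```html
<!-- x² (superscript) versus x₂ (subscript): visually distinct,
     but often both are read flatly as "x 2" -->
<p>x<sup>2</sup> means x squared, while x<sub>2</sub> is simply
   the second variable named x.</p>
```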
One of the other big problems I have is that different screen readers may treat the exact same content differently, and focus on different parts of the standard. For example, I was recently testing a form and couldn't understand why JAWS was speaking the "required" text when VoiceOver was not. It turned out that VoiceOver respected the HTML5 required attribute but not the equivalent ARIA attribute. It's frustrating enough that both of these standards exist; screen readers should not pick and choose between them, they should support both. As an accessibility tester, I need to learn the most popular screen readers as well as the top browsers. A typical quality-assurance person may test in Safari, Firefox, Chrome, and Internet Explorer, and the most thorough also test in Opera. I need to test in all of these with VoiceOver, NVDA, and JAWS on top.
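The discrepancy described above comes down to two ways of expressing the same requirement, so in practice a defensive developer may end up stating it twice (both attributes are real; the fields are illustrative):

```html
<!-- HTML5 native attribute: the one VoiceOver honored -->
<input type="text" id="name-a" required>

<!-- ARIA equivalent: the one JAWS honored -->
<input type="text" id="name-b" aria-required="true">

<!-- Belt and suspenders: both, so either screen reader
     should announce the field as required -->
<input type="text" id="name-c" required aria-required="true">
```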
Needless to say, it becomes very frustrating when a page is coded properly to the standards and yet the same screen reader treats it differently in each browser, and different screen readers treat it differently again. When things like this happen, I start to believe that the standards don't really exist and that I may be fighting a losing battle.
In closing, I'd like to ask the screen-reader vendors to please respect the rules and build applications that work to the standards. I'm finding that more and more web developers do try to code accessibly, but you're not helping them. Don't pick one standard to support and another to leave behind; think about the user, and get more of us users to help. A screen reader should not ignore things in the source code of a webpage or application for no discernible reason, and when a developer is conveying information in an accessible way, the screen reader should pass that information along.