
Building Digital Accessibility: AI Requires Human Oversight to Cut Down on Algorithmic Biases

The adoption of artificial intelligence (AI) in business operations has grown exponentially. Industries from insurance to retail to health care rely on AI to streamline manual, time-consuming tasks, boost efficiency, and drive revenue. A McKinsey survey found that over half of respondents had implemented AI in at least one of their business functions. AI provides excellent value and will only grow in use over time, evolving to drive an ever-greater share of our digital experience.

Fantastic! But wait.

When the training data and algorithms that power AI are biased toward the life experiences and perspectives of people with homogenous abilities, the needs of marginalized communities can too easily fall by the wayside. These communities include people who live with disabilities – people who are blind, deaf or hard of hearing, have limited mobility, or have developmental disabilities. In our digital society, people with disabilities must be able to view, connect, track, read, and control content on web pages and apps. When technology is left to solve problems without human oversight, however, critical errors go unchecked, and people cannot live their lives to the fullest.

Nearly a quarter of adults in the United States live with a disability. Ethically and financially, owners and operators of digital assets cannot afford to overlook people with disabilities – a significant population of consumers who may never interact with a product simply because of access hurdles. To help ensure an equitable digital experience for all, ADA compliance guidelines offer protections, and falling out of ADA compliance is painful and costly. Solving the complexities posed by website inaccessibility requires close attention to the consumer’s experience while interacting with the digital asset. All that’s to the good. What’s the catch? Well, your shiny, new, expensive machine learning system doesn’t elegantly handle use cases for people with disabilities – if it handles them at all. Your options? Scrap it, remain in violation of the law, or manually and expensively gather a batch of new training data. None of those is a good fix.

Accessibility as an Afterthought

Let’s walk through an example for context. Suppose a customer with a disability is asked to, or opts to, interact with the virtual agents supporting your digital offering. The AI behind your virtual assistant was trained to recognize thousands of words based on how people typically speak and respond, and it collects training data to improve the user experience over time. People who speak or write in ways that align with those typical profiles will probably benefit from the AI assistance. People with unusual cadences, slurred speech, stutters, or differences in pronunciation – in contrast – can quickly be trapped in a painful cycle of misunderstanding. You know how you end up yelling the same phrase at Siri, Google Assistant, or Cortana? Imagine that as the everyday experience of a user with a disability interacting with your support system. And it’s not a small number of people: 7.5 million people in the United States “have trouble using their voices.” The scenario is commonplace.
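To make that failure mode concrete, here is a minimal sketch of the confidence-gated loop such an agent typically runs. Everything in it – the recognizer stub, the result shape, the threshold – is a hypothetical illustration, not any vendor’s actual API:

```typescript
// Hypothetical shape of a speech recognizer's output.
interface RecognitionResult {
  transcript: string;
  confidence: number; // 0..1: the model's certainty in its own transcript
}

// Stub standing in for a model trained on "typical" speech profiles;
// speech that deviates from the training data tends to score low.
async function recognize(_audio: ArrayBuffer): Promise<RecognitionResult> {
  return { transcript: "track my order", confidence: 0.42 }; // illustrative values
}

const CONFIDENCE_THRESHOLD = 0.8; // illustrative cutoff

async function handleUtterance(audio: ArrayBuffer): Promise<string> {
  const result = await recognize(audio);
  if (result.confidence >= CONFIDENCE_THRESHOLD) {
    return `You said: ${result.transcript}`;
  }
  // A user whose speech the model scores poorly lands here on every turn:
  // the agent re-prompts instead of adapting, and the cycle repeats.
  return "Sorry, I didn't catch that. Could you say it again?";
}
```

Note that nothing in the loop adapts to the user: a speaker the model scores poorly simply cycles through the fallback prompt forever.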

What To Do

The main objective of the ADA is to ensure the integration of people with disabilities into society. To the fullest extent possible, support for the equal access needs of people with disabilities should exist natively, as part of any experience. Under the ADA, you cannot provide equal access through a separate program; such a solution is itself considered discrimination. Whether the alternative experience is lesser or merely “separate but equal,” it is not a reliable route for providing digital accessibility.

By far the most effective way to ensure that access is to have people with disabilities test AI-driven systems in the real world: put simply, to determine whether a system works for people with disabilities, have people with disabilities use the system. Doing so in a structured fashion broadly aligns with general best practices of usability testing. This includes two core types of testing:

· Functional testing – where a user is given an objective and their ability to complete it is measured.

· Heuristic testing – where systems are tested against a set of known best practices for use by people with disabilities (a minimal scripted sketch of this follows below).

Results from this testing can clearly validate whether a system is usable by people with disabilities and provide the relevant records for ADA compliance needs.
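As a complement to testing with real users, parts of heuristic testing can be scripted. Below is a minimal sketch, assuming a browser DOM; the heuristic names and the `Finding` record shape are invented for illustration, and a handful of checks like these is nowhere near full coverage of accessibility best practices:

```typescript
// Minimal scripted heuristic check: a few well-known best practices
// (image alt text, form labels, link names). No substitute for
// functional testing with real users with disabilities.

interface Finding {
  heuristic: string;
  element: string; // outerHTML snippet, kept for the compliance record
}

function runHeuristicChecks(doc: Document): Finding[] {
  const findings: Finding[] = [];

  // Best practice: every image needs a text alternative.
  doc.querySelectorAll("img:not([alt])").forEach((el) =>
    findings.push({ heuristic: "img-missing-alt", element: el.outerHTML })
  );

  // Best practice: every form control needs an accessible label.
  // (Simplified: real-world name computation is more involved.)
  doc.querySelectorAll("input, select, textarea").forEach((el) => {
    const id = el.getAttribute("id");
    const labelled =
      el.getAttribute("aria-label") ||
      el.getAttribute("aria-labelledby") ||
      (id && doc.querySelector(`label[for="${id}"]`));
    if (!labelled) {
      findings.push({ heuristic: "control-missing-label", element: el.outerHTML });
    }
  });

  // Best practice: links need a name a screen reader can announce.
  doc.querySelectorAll("a[href]").forEach((el) => {
    if (!el.textContent?.trim() && !el.getAttribute("aria-label")) {
      findings.push({ heuristic: "link-missing-name", element: el.outerHTML });
    }
  });

  return findings;
}
```

Keeping the findings as structured records also feeds directly into the ADA compliance documentation mentioned above.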

What About Alternatives?

One common question: can we just provide an alternative route for people with disabilities? The answer is a rather unsatisfying “it depends.” Offering alternatives for users – in cases where the AI-driven experience cannot understand their needs – may be a valid substitute for the technology’s shortcomings. Issues arise, however, when the alternative does not create an equal experience. If your site redirects customers whom the virtual agent cannot sufficiently understand to a dial-in or chat support agent, does that alternative provide an experience equivalent to the digital agent? Typically, such human-driven alternatives come with longer wait times, resulting in a non-equivalent experience that is no longer a “valid” alternative. So, can you provide an alternative? It depends.
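If you do route around the AI, the equivalence test can at least be made explicit in the routing logic itself. The sketch below is illustrative only – the channel names, the two-attempt rule, and the wait-time threshold are all assumptions, and what counts as “equivalent” is ultimately a legal and UX judgment, not a constant in code:

```typescript
type Channel = "virtual-agent" | "human-chat" | "phone";

interface ChannelStatus {
  channel: Channel;
  estimatedWaitSeconds: number;
}

// Illustrative assumption: an alternative is only "valid" if its wait
// is roughly comparable to the instant response of the virtual agent.
const MAX_EQUIVALENT_WAIT_SECONDS = 120;

function chooseFallback(
  failedAttempts: number,
  alternatives: ChannelStatus[]
): Channel | "flag-for-review" {
  if (failedAttempts < 2) return "virtual-agent"; // let the agent retry once

  const equivalent = alternatives.find(
    (alt) => alt.estimatedWaitSeconds <= MAX_EQUIVALENT_WAIT_SECONDS
  );

  // No equivalent channel available: surface this to compliance review
  // rather than silently routing the user into a worse experience.
  return equivalent ? equivalent.channel : "flag-for-review";
}
```

The design point is the last branch: when no equivalent channel exists, the gap should be recorded and escalated, not papered over.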

Is There a Quick Fix Here?

Similar to the question about alternatives, we are often asked about quick, cheap fixes for inaccessibility. In particular, overlays purport to be “quick fix” solutions that meet accessibility requirements with a single line of code. Not surprisingly, one line of code does little to remedy underlying issues and often makes matters worse. Essential technologies used by people with disabilities, including third-party equipment like JAWS screen readers and built-in accessibility tools like Apple’s VoiceOver, are often simply incompatible with accessibility overlays.
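To see why the “single line” pitch falls short, compare what an overlay installation actually is with what a genuine fix looks like. The vendor URL below is hypothetical, and the repair shown is deliberately simple:

```typescript
// The "one line of code" is typically just a third-party script tag:
//
//   <script src="https://overlay.example.com/widget.js"></script>
//
// It leaves your markup untouched, so a screen reader still encounters
// the same inaccessible DOM underneath the widget. A genuine fix changes
// the source itself – for example, giving an unlabeled image a text
// alternative (hypothetical image and description):

const chart = document.querySelector<HTMLImageElement>('img[src="chart.png"]');
if (chart && !chart.hasAttribute("alt")) {
  chart.alt = "Quarterly revenue, Q1 through Q4, rising steadily";
}
```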

Overlay solutions often make a site appear more accessible to people without disabilities than it really is, helping its owners operate under the legal radar while failing to meet the standards of the ADA. Meanwhile, an overlay’s presence on your site acts as a beacon for compliance lawsuits, signaling a lack of due diligence and access on the part of the organization. Accessibility lawsuits cost companies millions of dollars – money that could instead go toward improving the experiences of consumers with disabilities.

Taking Accountability for Your Accessibility

All this is not to say AI impedes digital accessibility. On the contrary, AI has the potential to radically improve it. Its use, however, must be thoughtfully mediated and coupled with thorough human oversight. When AI operates alone, accessibility biases go unchecked and reinforce themselves, making digital experiences harder for people with disabilities to navigate. Because disability presents differently in different people, it is incredibly challenging for algorithms to operate without discriminating against somebody. With input and oversight from people – better yet, from people with disabilities – your digital offering does not have to perpetuate the problems we are all so desperate to leave in the past.
