“Let us get back to you on um, specifically, uh, people with, um, you know, um, with limited functionality.”
I was interviewing a senior director at a major AI data annotation company who had spoken publicly about responsible AI and data bias. His previous work piqued my interest, but during our interview, he was clearly uncomfortable speaking about disability and how disabled people were included in his work. At this point, halfway into my AI Accountability Fellowship, I wasn’t terribly surprised.
Throughout my reporting, I had found that many companies—including those responsible for inclusive AI—were uncomfortable speaking about disability, and disabled people weren’t necessarily involved in any of their work, even when the end product might affect, or be directly marketed towards, disabled people.
In many ways, this wouldn’t have surprised me before this past year either. Despite benefiting from disability protections while growing up in the U.S. public school system as a student with severe asthma and food and environmental allergies, I didn’t read much about disability or see disabled people in the media.
Over the last year, I’ve aimed to elevate disabled voices in mainstream coverage of AI. This has become increasingly important as AI models train on public and published information, whether through formal partnerships or in less formal ways. If disabled perspectives aren’t included in mainstream publications, they will continue to be excluded as the general public adopts AI in more areas of everyday life.
This affects the general public, too. Disabled people have often been early adopters of new technologies, and artificial intelligence has been no exception. As with many technologies before it, disabled people have sometimes been likened to canaries in a coal mine: early adopters who raise issues before they become a problem for the entire population.
An AI-based product is the result of decisions made when framing the problem, choosing data to train a model, and designing the algorithm. AI accountability requires probing each of the steps where bias can creep in. Challenging AI starts with asking who is responsible for, and included in, each of these steps, and seeing how the result affects disabled people who use it, whether by choice or because it has been imposed on them.
Developing an AI product
How disability is defined and framed changes decisions in every subsequent step. Here are two examples:
| Step | Example 1 | Example 2 |
| --- | --- | --- |
| 1. Framing and scope | Disability is a medical problem that should be fixed. Individuals should undergo treatment. | Disability is defined by a mismatch between an individual and their environment. Society and policies need to adapt. |
| 2. Choosing data | Individual data is collected with an eye for converging disabled data with what is seen to be the norm. | Collect data on an individual, their circumstances, and their environment. |
| 3. Designing the algorithm | Refine an algorithm trained on non-disabled people. | Train an algorithm using data based on disabled communities. |
| 4. Evaluating outputs | Evaluate the model by looking at the convergence of disabled data with what is seen as the norm. | Work with disabled communities to create a framework. |
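To make the evaluation step concrete, here is a minimal sketch in Python of what a disaggregated evaluation might look like, assuming a hypothetical results.csv where each row records a test sample’s self-described group and whether the model’s output was judged correct. The file, column names, and groups are illustrative, not any company’s actual method; the point is that a single aggregate score can hide a model that works well for non-disabled users and poorly for everyone else.

```python
# Minimal sketch of a disaggregated evaluation (step 4 above).
# Assumes a hypothetical results.csv with one row per test sample:
#   group   - self-described group, e.g. "non-disabled", "parkinsons", "deaf"
#   correct - 1 if the model's output was judged correct, else 0
import csv
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]

with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        stats = totals[row["group"]]
        stats[0] += int(row["correct"])
        stats[1] += 1

# One overall number can look fine while hiding large gaps between groups.
overall = sum(c for c, _ in totals.values()) / sum(t for _, t in totals.values())
print(f"overall accuracy: {overall:.1%}")
for group, (correct, total) in sorted(totals.items()):
    print(f"{group}: {correct / total:.1%} ({total} samples)")
```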
As a reporter with a computer science background, it was tempting to jump directly into interrogating the models with code. But after years of experience writing bad code based on poor planning, I also know that the value judgments that go into designing these models are where accountability begins.
My main takeaway was that many AI systems have been oversold, and people often see them as cost-cutting solutions without actually speaking with disabled people to check that they work. This has been the case for companies looking to comply with accessibility legislation through a tantalizingly simple “one line of code” solution, despite disabled people calling these solutions grossly inadequate. It has also been the case when employers have leaned heavily on AI speech transcription instead of hiring human interpreters, even when deaf employees have found the transcriptions inadequate.
Inclusive sourcing
In both cases, it was important to me to capture a broad spectrum of voices. The disability community is incredibly varied, and experiences of and feelings about AI can differ depending on when someone became disabled, the nature of their disability, their community, and whether they identify as disabled at all. Because of that nuance and variety of opinion, I made a point of speaking with many people, focusing on those who could speak about their own experiences, and being specific about which aspects of AI disabled people found challenging.
Over the course of the year, I kept a spreadsheet tracking whom I spoke to. I aimed to quote people who could speak about their own experiences of being disabled rather than non-disabled people speaking about disabled people. In addition to what I normally track in my reporting, such as names, contact information, a short description, dates, and interesting tidbits, I also tracked whether each person had a disability and how they described it. This was particularly important so that I could give readers the context behind each person’s perspective.
| column | description |
| --- | --- |
| name | Name of the source |
| description | Who the source is, any affiliations, how I found this source |
| ... | Any other fields useful to you |
| declared_disabilities | True or False; this allows you to do a crude breakdown of your disability representation for any story in a pivot table or similar (see the sketch after this table) |
| disability_description | Short description of how someone sees their disability, e.g. do they call it a disability when they speak, do they identify as deaf or Deaf, when did they become disabled |
| preferred_contact_method | How the source is most comfortable communicating, e.g. short conversations only, prefers speaking in the morning, uses video relay, uses Black ASL |
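The declared_disabilities column is what makes a representation check mechanical rather than a matter of memory. As a rough sketch, assuming the tracker is exported to a sources.csv and includes a hypothetical story column (not shown above), a few lines of pandas reproduce the pivot-table breakdown:

```python
# Rough sketch: per-story breakdown of disability representation.
# Assumes the tracker is exported as sources.csv and includes a
# hypothetical "story" column alongside the fields in the table above.
import pandas as pd

sources = pd.read_csv("sources.csv")  # declared_disabilities parses as True/False

breakdown = (
    sources.groupby("story")["declared_disabilities"]
    .agg(total="count", disabled="sum")
)
breakdown["share"] = breakdown["disabled"] / breakdown["total"]
print(breakdown)
```

A pivot table in any spreadsheet tool does the same job; the code is simply the same check in a form that can be rerun before each story ships.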
I also wanted to speak with a variety of disabled people for each story and look beyond the few people that companies would recommend. I began with people I had known through years of working in the accessible web space, asking for suggestions on where in the disability community to find more people who used AI tools. Social media also turned out to be a great place to find people who wouldn’t normally have the loudest voice in the room; some used anonymous usernames to feel safer posting their perspectives online. Building trust with people, some of whom were unfamiliar with speaking to reporters or wary of a mainstream publication, was a challenge, but essential to capturing a well-rounded story.
Building that trust meant offering flexibility in our interviews. For example, some of the people I spoke to who had early-onset Parkinson’s disease preferred to speak in the morning or to break our interviews into several shorter conversations. For interviews where I needed an interpreter, because I don’t know sign language, I always gave the interviewee options; many ASL users were happy with interpreters from Interpreter-Now or with the video relay service. For people who relied heavily on lip reading, I had to remind myself to keep my hands away from my face and to make sure my face was framed well on video calls. Years of reading about disability theory and working in the accessible web space as a newsroom developer came in handy here.
The same care went into choosing companies and projects to highlight. While it’s easy to find companies that market themselves as creating AI products for disabled people, it is much harder to find ones that actually include disabled people in the work. I found that asking the simple question “How have you included disabled people in the development of your AI models?” weeded out many who were just looking for a quick marketing hit. Relying on the disabled community to steer me toward inclusive companies was also crucial.
Presentation
When putting each story together, my goal was to bring the experiences of disabled people to the general public without giving readers false confidence that they could fully understand them. Many disabled people’s encounters with AI are unfamiliar to those who don’t regularly use assistive technologies such as screen readers, software that reads out text and visual descriptions from a screen. In the first story, I captured audio as a screen reader user would hear it to show the impact of an AI-assisted widget on a website. I thought showing disabled people’s experiences of using AI would go a long way toward explaining why AI doesn’t always live up to its hype.
I also wanted to give the people I spoke with a voice in the story. With their agreement, I was able to show how different AI models interpret non-standard speech, using the voices highlighted in the story, and how misunderstandings could create harm in a workplace. In the same piece, I also wanted to highlight various signed languages. Several people I interviewed over the past year brought up “sign language gloves,” which go viral every few years, as an example of a product that hadn’t been created with sign language users involved: sign language is much richer than hand or finger movements alone, involving facial expressions and many dialects.
The most difficult part of covering AI and accessibility was finding a framing that would be accessible to the general public. While nearly nothing I wrote this year is news to those who are disabled and see the effects of AI in their everyday lives, it can be very new to the people AI has been designed for. I needed to find a way to shift their perspective and show why this was, and would remain, relevant to them.
Despite the increased prevalence of long COVID during the pandemic, and the fact that most people will become disabled at some point in their lifetime, disability is still not something we often see in mainstream media, and it is foreign to many readers. One way I tried to make these topics more relevant was by looking at the most impactful and well-known companies. I also tried to explain how these issues affect more than just people who are disabled: a lack of representative speech samples in speech recognition not only causes problems for people with disabilities, it also excludes people with accents or anyone being recorded in a noisy room. Having editors who pushed me to really spell this out was incredibly helpful.
Lessons learned
Looking back, there are a few things I wish I had started earlier. With the huge influx of investment into AI over the last 24 months, there have been many changes in AI models and AI-driven products, which have also changed the way disabled people see AI.
In August 2023, during my fellowship, Be My Eyes opened beta testing for a new feature built in partnership with OpenAI. Be My Eyes is an app with more than 700,000 blind and visually impaired users that lets them call a sighted volunteer for help understanding visual information, anything from reading non-tactile buttons on a coffee machine’s touch screen to navigating a poorly designed website. The new feature brought AI into the app: instead of calling sighted volunteers, users could have AI analyze visual information. Because of Be My Eyes’ large user base and regular feedback cycles, many blind people started to see more value in AI. This was in contrast to more rudimentary uses of AI by social media companies like Meta to describe images, which could generate descriptions like “Might be a photo of three people.”
Programmatically capturing these changes would let us see how AI develops over time, and whether it improves or regresses in its understanding of disability. It would also allow everyone to make more informed choices about which models are most inclusive.
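I haven’t built such a tracker, but its core is small. The sketch below assumes a hypothetical describe_image() wrapper around whichever model API is being tested (the image files and model identifiers are placeholders too); everything else is appending timestamped outputs to a log so descriptions of the same images can be compared month to month.

```python
# Sketch of longitudinal tracking: run a fixed set of test images
# through each model on a schedule and log outputs for later comparison.
import csv
from datetime import datetime, timezone
from pathlib import Path

TEST_IMAGES = ["coffee_machine.jpg", "crosswalk.jpg", "medicine_label.jpg"]
MODELS = ["model-a", "model-b"]  # placeholder model identifiers
LOG = Path("descriptions_log.csv")

def describe_image(model: str, image_path: str) -> str:
    # Hypothetical wrapper, not a real library call: plug in whatever
    # image-description model you are testing.
    raise NotImplementedError("wrap the model API you are testing here")

def run_once() -> None:
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "model", "image", "description"])
        stamp = datetime.now(timezone.utc).isoformat()
        for model in MODELS:
            for image in TEST_IMAGES:
                writer.writerow([stamp, model, image, describe_image(model, image)])

if __name__ == "__main__":
    run_once()  # schedule with cron or similar to build the time series
```

Rerun on a schedule, this turns one-off anecdotes like “Might be a photo of three people” into a record that shows whether a model’s understanding of disability is improving or regressing.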
The other challenge in covering AI and accessibility is that few companies create products with and for disabled people. Because disabled people have fewer choices, it can be hard to get their candid opinions about how products could be improved. It also means it can be harder to hold these companies accountable for issues like privacy and security.
Finally, in my year with the Pulitzer Center, I feel like I’ve only touched the very tip of a very large iceberg. Eighty percent of disabled people live in the Global South, and their perspectives on and experiences of AI will be very different. All I can hope is that more people will see the value in covering disability and AI, two subjects that are becoming ever more present in all of our lives.