This letter features reporting from “Researchers Say an AI-Powered Transcription Tool Used in Hospitals Invents Things No One Ever Said” by Garance Burke and Hilke Schellmann
Dear Dr. Abeba Birhane,
In today’s world, artificial intelligence (AI) is one of the fastest-growing and most profitable industries. According to Forbes, an estimated quarter of a trillion dollars will be spent in 2025 on the expansion, research, and development of AI.
But along with the widespread enthusiasm for this futuristic technology, there is also widespread panic over its possible implications. In their Pulitzer Center-supported article “Researchers Say an AI-Powered Transcription Tool Used in Hospitals Invents Things No One Ever Said,” Garance Burke and Hilke Schellmann highlight an example of AI’s dangers. They explain how hospitals around the world are using AI to record and transcribe interviews between patients and doctors. At first glance, this seems like a good way to save doctors time and free them to help more patients, but in practice its consequences range from precarious to catastrophic. Hospitals have reported the transcription tool inventing things that were never said. Because doctor-patient interviews are confidential, those documents can’t be accessed by the public, but one non-medical interview transcribed by artificial intelligence shows a shocking gap between the speaker’s words and the AI’s transcript. Reportedly, the speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” The AI changed their words to, “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.”

This misinterpretation may seem absurd, almost comical. But imagine the impact a similar error could have on medical issues. Say a patient reports their condition to a doctor while AI software transcribes the conversation. If the patient’s words are misrepresented, the medical facility could record their condition incorrectly, give them the wrong prescription, and put the patient in danger.
This issue is not limited to one specific area; it can also be seen in places like Texas, my home state. According to the article, over 30,000 medical workers and 40 health systems use a tool powered by OpenAI’s Whisper transcription model to transcribe patient visits, despite OpenAI’s warnings against using Whisper in “high-risk domains.” I find it terrifying that the healthcare system my family and I rely on leans so heavily on unreliable AI tools. It is unimaginable that I could lose someone close to me over something as seemingly small as a mistranscription. However, with this tool already so widely available and used, it is not impossible. People may already have lost family, friends, or their lives to this widely overlooked issue.
The only people who can truly solve this problem are the ones who design and maintain artificial intelligence software. Companies like Microsoft and OpenAI may turn a blind eye to the problem right now, but if enough pressure is put on them to address this growing issue, they may well respond. Someone like you, with influence over the world of technology and artificial intelligence, could very well help the situation. While it may not be an immediate solution, global awareness of this problem must be raised if any progress is to be made. AI errors in high-risk settings are the result of careless use of a potentially valuable technology, and of disregard for the humans it was first and foremost meant to help.
I am asking you to use your influence to steer humanity in the right direction on this controversial issue. It cannot continue to go unnoticed by the medical industry, artificial intelligence companies, and people all across the world.
Thank you for your time.
Beatrice P.
Beatrice P. is a seventh grade student at The Girls' School in Austin, Texas.