
Pulitzer Center Update March 24, 2025

Pulitzer Center Shows Films, Discusses Global Health and AI at CUGH Conference

Image: A screen at the CUGH film festival displays the event title. Image courtesy of the Consortium of Universities for Global Health. United States, 2025.

In partnership with the Johns Hopkins Bloomberg School of Public Health and Global Health NOW, the Pulitzer Center convened a Communications Workshop panel and a film festival at the 15th Annual Consortium of Universities for Global Health (CUGH) Conference, held in Atlanta from February 20 to 23, 2025. Against the backdrop of a rapidly changing political landscape, global health experts and practitioners gathered to discuss how to navigate today’s most pressing global health concerns, including access to care, research funding, health communications, and the possibilities and limitations of AI.

On the evening of Saturday, February 22, the film festival showcased visual storytelling that captures the human stories behind statistical trends. The audience of global health students, researchers, and practitioners was challenged to consider how creative communications techniques can make their work more accessible. Visual storytelling is one way to reach communities that are disproportionately affected by health challenges and often have the least access to reliable information, especially on scientific topics.

The films screened included "Burnt to Build": How Do Heatwaves Impact the Health of Workers? by Shagun Kapil and Joel Michael; Neglected and Exposed: Toxic Air Lingers in a Texas Latino Community, Revealing Failures in State’s Air Monitoring System by Alejandra Martinez; Why the American Abortion Debate Is Affecting Access in Kenya by Neha Wadekar; Decoding Deception: The Psychology of Combating Misinformation by Gene Russo; and Young Palestinians Face a Steep Toll on Mental Health by Kern Hendricks.

“It was eye-opening how pairing visuals with statistics can create a stronger impact than words and stats alone,” an audience member shared.

Image: Audience members watch films at the CUGH film festival. Image by Mikaela Schmitt. United States, 2025.

On Sunday, February 23, the Pulitzer Center, Johns Hopkins Bloomberg School of Public Health, and Global Health NOW hosted the annual Communications Workshop to discuss how to improve health literacy, with a special focus on the opportunities, risks, and ethical concerns surrounding the growing use of AI. Hilke Schellmann, a 2022 AI Accountability Fellow, joined Anant Madabhushi, executive director of the Emory Empathetic AI for Health Institute at Emory University, for the discussion, with an introduction by Dr. Peter H. Kilmarx, deputy director of the Fogarty International Center. The conversation was moderated by Dayna Kerecman Myers, managing editor of Johns Hopkins Bloomberg School of Public Health's Global Health NOW, and Mikaela Schmitt, Campus Consortium and Outreach program coordinator at the Pulitzer Center.

“There's a significant amount of anxiety about what AI is going to do. [At the same time], health care is embracing these technologies, because the more we embrace and understand them, the more we start to understand and appreciate their limitations, right?” Madabhushi said. “We need to communicate and talk about it, not pretend that it doesn’t exist … [to understand] where it is really going to help move the needle forward.”

As health care embraces AI as a potentially transformative tool for addressing global health inequities, close attention is being paid to the ethical and privacy risks that the new technology entails. Schellmann, whose reporting centers on AI accountability, shared how she approaches new technologies to identify potential harms embedded in algorithms.

“I think it's really helpful to be very skeptical at the beginning,” Schellmann recommended. “First of all, consider: Is this a problem I can solve with AI? … Why would we automate an already flawed process? Also, ask: How is the training data set conceived? Who's in the data set—is there any sort of diversity problem? How did you train the model—was it trained with third-party or ‘wild’ data?”

Schellmann went on to explain overfitting, a common failure mode in machine learning. Overfitting occurs when a model learns its training data too well, noise included: it scores impressively on the data it was trained on but makes poor predictions on new data. When algorithms are not designed with broader applications in mind, or are not trained on a sufficiently diverse range of data, they are likely to perpetuate biases or produce results that do not generalize.
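To make the idea concrete, here is a minimal, hypothetical sketch in Python (the data and models are invented for illustration and were not part of the panel): a very flexible model fitted to a handful of noisy points looks nearly perfect on its own training set, then stumbles on fresh samples drawn from the same underlying signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Noisy samples of a known signal, standing in for real measurements."""
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.1, n)  # true signal plus noise
    return x, y

x_train, y_train = make_data(20)   # small training set
x_test, y_test = make_data(200)    # unseen data from the same population

# A modest model (degree 3) vs. a highly flexible one (degree 15).
for degree in (3, 15):
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```

The degree-15 polynomial memorizes the noise in its 20 training points, so its training error is tiny while its error on unseen data balloons; that gap is the signature of the overfitting Schellmann describes.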

“A lot of the work that we're doing in the Emory Empathetic AI for Health Institute is really focused on developing algorithms that are intentionally empathetic—that are intentionally diverse and pluralistic—for two different reasons,” Madabhushi said. “One: We're developing algorithms that we want to make sure work not just in white patients, but work in Black and brown patients and across a [plurality] of populations. We also wanted to develop algorithms that were going to leverage routinely available data. The question that we wanted to ask was: Could we develop frugal, opportunistic AI for the Global South?”

One frequent challenge in implementing AI systems in global health care is that there is often not enough data from the Global South to train algorithms adequately. Training on Global North data instead, though, may lead to issues such as overfitting to populations the models will never serve. Researchers are therefore experimenting with ways to get more out of the data that is already available across very different health systems.

“I think that in lieu of large, systematic data collections in the Global South, perhaps an alternative might be to think about more novel, creative ways in which we use AI in the context of the data that does exist: to potentially ask more of it, to ask other questions that we might not otherwise typically ask,” Madabhushi said.

Image: One person speaks at a podium while three panelists sit at the global health and AI panel at the CUGH Conference. Image courtesy of the Consortium of Universities for Global Health. United States, 2025.

Panelists emphasized the importance of collaboration at every stage of the work: from developers partnering with practitioners to understand unmet clinical needs, to journalists partnering with academics to stress-test algorithms for efficacy.

“I think this interdisciplinary work is the only way forward: to come from different academic backgrounds and do this analysis together,” Schellmann said. “I often bring journalistic questions to researchers, and it's a win-win for all of us … [The researchers] got to create new knowledge they published, and I got to bring this [story to an outlet] because we now had much more authoritative journalism with data behind it.”

“You've got to be able to set up an ecosystem of trust; one where it's not a zero-sum game, where it's clear that everybody is going to benefit from this collaboration,” Madabhushi said. “If I’m an AI engineer working in the health space, I have the responsibility to not just be able to communicate what I’m doing from a technical standpoint, but also demonstrate appreciation for the clinical problems that we are trying to solve … This is something that is extremely important as we think about multi-disciplinary AI research, because we're not simply sitting in a basement and writing code. We're talking to our collaborators; we are co-creating; we are synergizing; we are ideating. The more you understand the [collaborator’s] perspective, the more the magic is going to happen … This concept of multilinguality is extremely important as we think about communication of what we're doing.”

The session then turned to engaging with the media and broader health communication challenges. Audience members asked how to obtain data for accountability reporting and how to articulate complex scientific issues in layman’s terms. Schellmann, Myers, and Schmitt offered advice on how to identify moving stories within the data and how to craft story ideas into compelling pitches.

“[Journalists] also overcomplicate things all the time,” Schellmann said. “What do you tell your best friend? If you’re at a party, [how would you distill the topic?] That is probably the pitch. Don’t forget what brought you to the topic in the first place.”

Myers engaged the audience in an exercise on AI’s role in science communications: She asked ChatGPT to create a pitch based on a research paper, then compared it to a pitch on the same paper that Global Health NOW had received from a human writer.

When asked what was strong about the real pitch, an audience member shared, “It grabs the heart, while also being quite informative, and that also tells you who the person is as well.”

“Yeah, I can't imagine AI doing that,” Myers responded. “Even if AI can help us get through all the research, the density and complexity, to help us understand a paper or a topic better, it still needs that human element to grab you.”

Panelists encouraged a mindful approach to all AI technology: Be realistic about what it can, and just as importantly, cannot do, then harness its potential to expand access with a careful, watchful eye.

Image: Panelists and audience members talk after the panel. Image by Mikaela Schmitt. United States, 2025.

Resources for Science Communicators (recommended by Dr. Srikanth Mahankali):

  • Looppanel: An AI-powered research assistant that automates transcription, tags key themes, and generates affinity maps from your data quickly; particularly useful for UX research synthesis. 
  • FigJam: A collaborative online whiteboard tool that allows for remote affinity diagramming, helping researchers find themes in qualitative data efficiently.
  • Dovetail: A platform for capturing notes from research sessions, storing insights and tagging data. It is useful for organizing and analyzing qualitative research data. 
  • Miro: A versatile tool for brainstorming, documenting data, and presenting research; it can help group similar findings and identify key themes. 
  • Condens: A research repository that helps structure raw information from multiple sources by creating intuitive patterns.
