Whether we see it or not, our governments, doctors, schools, and employers have used AI and other predictive technologies for years to make policy decisions, understand disease, teach our children, or monitor our work. The hype around generative AI is now supercharging the spread of these tools. As journalists, we have a responsibility to reveal, interrogate, and explain this technology's rapid and widespread integration. Where is AI being used? Where is it working or breaking? Who is being harmed, and who stands to profit? How can our audiences make sense of what it all means for them?

The AI Spotlight Series is designed to equip reporters and editors—whether on the tech beat or any other—with the knowledge and skills to cover and shape coverage of AI and its profound influence on society. Our instructors include some of the world’s leading tech reporters and editors, who have tracked AI and data-driven technologies for years.

The program is divided into three tracks: one for reporters on any desk, one for reporters focused on covering AI or deepening their knowledge of AI reporting, and one for editors (on any desk) commissioning stories and thinking strategically about their team’s overall coverage.

Each course is designed to give you a strong grounding in what AI is and how it works as well as the tools to identify critical stories—from spot news to deep investigations—that will highlight the technology’s impacts, hold companies and governments accountable, and drive policy and community change, while avoiding both hype and unnecessary alarmism.

At the program's end, you can pitch the Pulitzer Center for a grant or fellowship to support an AI accountability reporting project. You will also join the Center’s broader AI Accountability Network, a global consortium of journalists investigating and documenting AI's impacts on people and communities.

The program will prioritize journalists from the Global South and from communities underrepresented in media. Most of the instruction will be interactive and online, with some in-person opportunities at journalism conferences around the world. All courses are free of charge.

The AI Spotlight Series is funded with the support of the John D. and Catherine T. MacArthur Foundation, Notre Dame-IBM Technology Ethics Lab, Ford Foundation, and individual donors and foundations who support our work more broadly. Questions? Reach out to [email protected].

Track #1: Introduction to Reporting on AI
Format: 90-minute virtual webinar-style training
This track is designed for reporters with minimal or no knowledge of AI who are interested in getting started. We will dissect what makes a good AI accountability story, from quick-turnaround pieces to more ambitious investigations, and dig deeper into a few examples.
Track #2: Reporting on AI Intensive
Format: 3 x 2-hour virtual interactive sessions
Capacity: 25-30 journalists per session
This track is designed for reporters who already have a grasp of AI, spend a significant amount of their time covering technology, and want to go deeper. It will help you sharpen your understanding of technical concepts and think more expansively about how to cover the different facets of this fast-moving story.
Track #3: An Editor's Guide to AI
Format: 90-minute virtual interactive training
Capacity: 25-30 journalists per session
This track is designed for managing editors, executive editors, desk editors, and social media editors—anyone in charge of directing coverage, commissioning stories, or packaging and producing them for public consumption. We will identify different types of AI stories and dissect what distinguishes the best coverage, including its framing, headline, and artwork.

headshot of Karen Hao

Lead Designer, AI Spotlight Series
Contributing Writer, The Atlantic

Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society and a contributing writer at The Atlantic. She was formerly a foreign correspondent covering China’s technology industry for the Wall Street Journal and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments.

She has received numerous accolades for her coverage, including an ASME Next Award for Journalists Under 30. In 2019, her weekly newsletter, The Algorithm, was nominated for The Webby Awards. In 2020, she won a Front Page Award for co-producing the podcast In Machines We Trust. Prior to Tech Review, she was a tech reporter and data scientist at Quartz. She received her bachelor's in mechanical engineering, with a minor in energy studies, from MIT.

headshot of Lam Thuy Vo

Co-designer, AI Spotlight Series
Reporter, The Markup

Lam Thuy Vo is a journalist who marries data analysis with on-the-ground reporting to examine how systems and policies affect individuals. She is a reporter with The Markup and an associate professor of data journalism at the City University of New York’s Craig Newmark Graduate School of Journalism. Previously, she was a journalist at BuzzFeed News, The Wall Street Journal, Al Jazeera America, and NPR's Planet Money. 

She has also worked as an educator, scholar, and public speaker for a decade, developing newsroom-wide training programs for institutions including Al Jazeera America and The Wall Street Journal; workshops for journalists across the U.S. as well as from Asia, Latin America, and Europe; and semester-long courses for the Craig Newmark Graduate School of Journalism. She has brought her research on misinformation and the impact of algorithms on our political views to Harvard, Georgetown, MIT, Columbia, Data & Society, and other institutions. In 2019, she published a book with No Starch Press about her empirical approach to finding stories in data from the internet.

headshot of Gabriel Geiger

Co-designer, AI Spotlight Series
Investigative Reporter, Lighthouse Reports

Gabriel Geiger is an Amsterdam-based investigative journalist specializing in surveillance and algorithmic accountability reporting. His work often grapples with issues of inequality through a global lens.

He is currently on retainer at Lighthouse Reports and was previously a weekly contributor to VICE’s Motherboard. His reporting can be found in VICE, The Guardian, openDemocracy, and the New Internationalist.

headshot of Gideon Lichfield

Co-designer, AI Spotlight Series

Gideon Lichfield began his career as a science and technology writer at The Economist. He then took foreign postings in Mexico City, Moscow, and Jerusalem before moving to New York City in 2009. He was one of the founding editors at Quartz, then editor-in-chief of MIT Technology Review and, most recently, of WIRED. While at MIT, he also edited post-pandemic speculative fiction for MIT Press. He splits his time between New York and the San Francisco Bay Area.

headshot of Tom Simonite

Co-designer, AI Spotlight Series
Editor, The Washington Post

Tom Simonite edits technology coverage for The Washington Post from San Francisco. He was previously a senior editor at WIRED and spent six years reporting on artificial intelligence. He has also written and edited technology coverage while on staff at MIT Technology Review and New Scientist magazine in London.


This is the list of upcoming online and in-person AI Spotlight Series webinars and workshops. Please check this space often, as more events may be added throughout the year. Registration links for the various events will be posted closer to the time of the workshops.


July 30 - August 1, 2024: Reporting on AI Intensive

This training will be virtual, held in English, and timed for journalists in North American, South American, African, and European time zones.


September 30, 2024: Introduction to Reporting on AI

This training will be virtual, held in Spanish and English, and timed for journalists in African and European time zones.




How We Did It: Unlocking Europe's Welfare Fraud Algorithms

Lighthouse Reports and WIRED teamed up to examine the growing deployment of algorithmic risk assessments in European welfare systems across four axes: people, technology, politics, and business. This methodology explains how they developed a hypothesis and used public records laws to obtain the technical materials necessary to test it.

illustration of AI facial tracking


How We Investigated Mass Surveillance in Argentina

Seventy-five percent of the Argentine capital area is under video surveillance, which the government proudly advertises on billboards. But the facial recognition system, part of the city's sprawling surveillance infrastructure, has drawn criticism: since it went live in 2019, at least 140 database errors have led to police checks or arrests. From the beginning of the investigation, we considered the question of privacy versus security, as well as the regulation of AI and the well-documented racist patterns in AI-powered facial recognition.


How I Investigated the Impact of Facial Recognition on Uber Drivers in India

As part of the investigation, Varsha Bansal conducted a survey of 150 Uber drivers across different parts of India to find out how many of them had been locked out of their accounts—either temporarily or permanently—due to issues related to facial recognition. This investigative effort prompted the gig workers' union to start collecting their own data to petition the platforms.



The AI Accountability Network

The AI Accountability Fellowships seek to support journalists working on in-depth AI accountability stories that examine governments' and corporations’ uses of predictive and surveillance technologies to guide decisions in policing, medicine, social welfare, the criminal justice system, hiring, and more.



AI Reporting Grant

The Pulitzer Center is now accepting applications for its reporting initiative focused on AI technologies and their impact on society.


Each week, the Pulitzer Center newsletter features a selection of underreported international news stories supported by the Pulitzer Center.