Whether we see it or not, our governments, doctors, schools, and employers have used AI and other predictive technologies for years to make policy decisions, understand disease, teach our children, or monitor our work. The hype around generative AI is now supercharging the spread of these tools. As journalists, we bear the onus of revealing, interrogating, and explaining this technology's rapid and widespread integration. Where is AI being used? Where is it working or breaking? Who is being harmed, and who stands to profit? How can our audiences make sense of what it all means for them?
The AI Spotlight Series is designed to equip reporters and editors—whether on the tech beat or any other—with the knowledge and skills to cover and shape coverage of AI and its profound influence on society. Our instructors include some of the world’s leading tech reporters and editors, who have been tracking AI and data-driven technologies for years.
The program is divided into three tracks: one for reporters on any desk, one for reporters focused on covering AI or deepening their knowledge of AI reporting, and one for editors (on any desk) who commission stories and think strategically about their team’s overall coverage.
Each course is designed to give you a strong grounding in what AI is and how it works as well as the tools to identify critical stories—from spot news to deep investigations—that will highlight the technology’s impacts, hold companies and governments accountable, and drive policy and community change, while avoiding both hype and unnecessary alarmism.
At the program's end, you can pitch the Pulitzer Center for a grant or fellowship to support an AI accountability reporting project. You will also join the Center’s broader AI Accountability Network, a global consortium of journalists investigating and documenting AI's impacts on people and communities.
The program will prioritize journalists from the Global South and from communities underrepresented in media. Most of the instruction will be interactive and online, with some in-person opportunities at journalism conferences around the world. All courses are free of charge.
The AI Spotlight Series is funded with the support of the John D. and Catherine T. MacArthur Foundation, Notre Dame-IBM Technology Ethics Lab, Ford Foundation, and individual donors and foundations who support our work more broadly. Questions? Reach out to [email protected].
KAREN HAO
Lead Designer, AI Spotlight Series
Contributing Writer, The Atlantic
Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society and a contributing writer at The Atlantic. She was formerly a foreign correspondent covering China’s technology industry for the Wall Street Journal and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments.
She has received numerous accolades for her coverage, including an ASME Next Award for Journalists Under 30. In 2019, her weekly newsletter, The Algorithm, was nominated for a Webby Award. In 2020, she won a Front Page Award for co-producing the podcast In Machines We Trust. Before joining MIT Technology Review, she was a tech reporter and data scientist at Quartz. She received her bachelor's degree in mechanical engineering, with a minor in energy studies, from MIT.
LAM THUY VO
Co-designer, AI Spotlight Series
Reporter, The Markup
Lam Thuy Vo is a journalist who marries data analysis with on-the-ground reporting to examine how systems and policies affect individuals. She is a reporter with The Markup and an associate professor of data journalism at the City University of New York’s Craig Newmark Graduate School of Journalism. Previously, she was a journalist at BuzzFeed News, The Wall Street Journal, Al Jazeera America, and NPR's Planet Money.
She has also worked as an educator, scholar, and public speaker for a decade, developing newsroom-wide training programs for institutions including Al Jazeera America and The Wall Street Journal; workshops for journalists across the U.S. as well as from Asia, Latin America, and Europe; and semester-long courses for the Craig Newmark Graduate School of Journalism. She has brought her research on misinformation and the impact of algorithms on our political views to Harvard, Georgetown, MIT, Columbia, Data & Society, and other institutions. In 2019, she published a book with No Starch Press about her empirical approach to finding stories in data from the Internet.
GABRIEL SEAN GEIGER
Co-designer, AI Spotlight Series
Investigative Reporter, Lighthouse Reports
Gabriel Geiger is an Amsterdam-based investigative journalist specializing in surveillance and algorithmic accountability reporting. His work often grapples with issues of inequality through a global lens.
He currently works on retainer for Lighthouse Reports and was previously a weekly contributor to VICE’s Motherboard. His reporting has appeared in VICE, The Guardian, openDemocracy, and the New Internationalist.
GIDEON LICHFIELD
Co-designer, AI Spotlight Series
Editor
Gideon Lichfield began his career as a science and technology writer at The Economist. He then took foreign postings in Mexico City, Moscow, and Jerusalem before moving to New York City in 2009. He was one of the founding editors of Quartz, then editor-in-chief of MIT Technology Review and, most recently, of WIRED. While at MIT Technology Review, he also edited post-pandemic speculative fiction for MIT Press. He splits his time between New York and the San Francisco Bay Area.
TOM SIMONITE
Co-designer, AI Spotlight Series
Editor, The Washington Post
Tom Simonite edits technology coverage for The Washington Post from San Francisco. He was previously a senior editor at WIRED and spent six years reporting on artificial intelligence. He has also written and edited technology coverage while on staff at MIT Technology Review and New Scientist magazine in London.
BOYOUNG LIM
Senior Editor and AI Network Manager, Pulitzer Center
Boyoung Lim is a senior editor at the Pulitzer Center. She formerly worked as a reporter at the Korea Center for Investigative Journalism (KCIJ) - Newstapa. She is a member of the International Consortium of Investigative Journalists (ICIJ).
Before becoming a journalist, she worked as a police officer with a focus on cybercrime. She graduated from the Korean National Police University with a major in criminal investigation and holds a master's degree in international studies from Seoul National University.
MARIA KARIENOVA
AI Engagement Manager, Pulitzer Center
Over the past seven years, Maria has been actively engaged in development work, specialising in civil society and community engagement, public outreach, and inclusivity strategies. She has led EngageMedia’s digital rights initiatives in Indonesia, covering issues such as hate speech, freedom of religious belief, and emerging technologies. Her most recent work focuses on driving civil society participation in artificial intelligence governance in Indonesia.
Maria holds a postgraduate degree in Development Studies from SOAS University of London. Beyond international development, she is also engaged with the discourse on decolonisation and the politics of feminism around the world.
KATHERINE JOSSI
Editorial and Communications Assistant, Pulitzer Center
Katherine Jossi is an Editorial and Communications Assistant at the Pulitzer Center. She was previously an Outreach and Communications Intern at the Center from 2021 to 2022, and she is a recent graduate of Beloit College with a bachelor's degree in history and political science.
She was a photo and graphics editor at her college newspaper, The Round Table, and previously interned with Rewire at Twin Cities Public Television.
MARINA WALKER GUEVARA
Executive Editor, Pulitzer Center
Before joining the Center, Walker Guevara was deputy director of the International Consortium of Investigative Journalists (ICIJ). She managed two of the largest collaborations of reporters in journalism history: The Panama Papers and the Paradise Papers, which involved hundreds of journalists using technology to unravel stories of public interest from terabytes of leaked financial data.
Walker Guevara was instrumental in developing the model of large-scale media collaboration, persuading reporters who were used to competing with one another to instead work together, share resources, and amplify their reach and impact.
Course description
This track is designed for reporters with minimal or no knowledge of AI who are interested in getting started. Perhaps you are on the education beat, keen to dive into the way AI is entering the classroom; perhaps you are on the breaking news desk, increasingly being asked to write about the latest AI claims from Elon Musk. We will begin with the basics, covering the history of AI, how the technology works, and key technical concepts such as “neural networks” and “deep learning.” We will also dissect what makes a good AI accountability story, from quick turnaround stories to more ambitious investigations, and dig deeper into a few examples. At the end of the course, those who are interested in learning more are encouraged to register for the AI reporting intensive.
Learning outcomes
Through this course, participants will learn:
- background knowledge on the history of AI to understand its latest developments
- a clearer understanding of how AI works and how to better cover the multiple parts that make up its supply chain
- how to resist AI hype, and how to identify and cover the most important dangers, failures, and real-world impacts of AI
- how to use their existing arsenal of tools to cover AI from every angle
Course description
This track is designed for reporters who have a grasp of AI, spend a significant amount of their time covering technology, and want to go deeper. It will be an opportunity to clarify your understanding of technical concepts and to think more expansively about how to cover the different facets of this fast-moving story. The course requires a dedicated time commitment: We will meet for a total of six hours over one week, with an additional hour of recommended homework between sessions to get the most out of class time.
During the first session, we will cover the history of AI, the AI supply chain, and key technical concepts such as how a deep-learning model is trained and the difference between supervised and unsupervised learning. We will also cover basic data literacy skills, such as those needed to investigate AI bias. On the second day, we will dig into what makes a good accountability story and how to report on governments and communities, including by documenting harms and embedding with affected populations. On the third day, we will dive deeper into more technical concepts related to generative AI (think: transformers, diffusion models, scaling laws) and into how to report on companies, including by cultivating inside sources.
Finally, we will conclude with a pitch workshop for anyone interested in getting real-time feedback on an AI accountability reporting project.
Learning outcomes
Through this course, participants will learn:
- background knowledge on the history of AI to understand its latest developments
- a clearer understanding of how AI works and how to better cover the multiple parts that make up its supply chain
- how to resist AI hype, and how to identify and cover the most important dangers, failures, and real-world impacts of AI
- practical methods for reporting on AI, including basic spreadsheet analysis, public records requests, and strategies for approaching sources working at tech companies
- techniques for investigating bias in automated systems and tracking misinformation
- how to bulletproof your reporting with the help of experts and fact checkers
- how to formulate clear pitches around AI, including for accountability stories
Course description
This track is designed for managing editors, executive editors, desk editors, social media editors—anyone in charge of directing coverage, commissioning stories, or packaging and producing them for public consumption. We will identify different types of AI stories and dissect what sets apart the best coverage, including its framing, headline, and artwork. You’ll learn how to assess both pitches and filed stories, and avoid common pitfalls that can mislead or confuse an audience (or an editor). There will be opportunities to ask questions, trade tips, and have a lively discussion among fellow newsroom decision-makers.
Learning outcomes
Through this course, participants will learn:
- background knowledge on the history and latest developments in AI that can help them identify, plan, and execute AI coverage and projects in the newsroom
- the different categories of AI stories and how to shape them
- how to resist AI hype, and how to identify and cover the most important dangers, failures, and real-world impacts of AI
- how to spot and avoid AI clichés and jargon
- tips and tricks for assessing and questioning AI pitches to make for better stories
- practical tools and guidelines that can help them and their reporters better navigate coverage of AI and its impacts
UPCOMING EVENTS & WORKSHOPS
This is the list of upcoming online and in-person AI Spotlight Series webinars and workshops. Please check this space often, as more events may be added throughout the year. Registration links will be added closer to each workshop's date.
September 16, 2024: Reporting on AI Intensive
This training will be virtual, held in English, and timed for journalists in Middle Eastern, Asian, and Pacific time zones. What time is that in my city?
September 30, 2024: Introduction to Reporting on AI
This training will be virtual, held in Spanish and English, and timed for journalists in African and European time zones. What time is that in my city?
AI REPORTING RESOURCES
INFORMATION & AI METHODOLOGY
How We Did It: Unlocking Europe's Welfare Fraud Algorithms
Lighthouse Reports and WIRED teamed up to examine the growing deployment of algorithmic risk assessments in European welfare systems across four axes: people, technology, politics, and business. This methodology explains how they developed a hypothesis and used public records laws to obtain the technical materials necessary to test it.
How We Investigated Mass Surveillance in Argentina
Seventy-five percent of the Argentine capital area is under video surveillance, which the government proudly advertises on billboards. But the facial recognition system, part of the city's sprawling surveillance infrastructure, has come under criticism after at least 140 database errors led to police checks or arrests since the system went live in 2019. From the beginning of the investigation, we considered questions of privacy versus security, the regulation of AI, and the already well-documented racial bias of AI-powered facial recognition.
How I Investigated the Impact of Facial Recognition on Uber Drivers in India
As part of the investigation, Varsha Bansal conducted a survey of 150 Uber drivers across different parts of India to find out how many of them had been locked out of their accounts—either temporarily or permanently—due to issues related to facial recognition. This investigative effort prompted the gig workers' union to start collecting their own data to petition the platforms.
MORE AI FROM THE PULITZER CENTER
INFORMATION & ARTIFICIAL INTELLIGENCE FELLOWSHIP
The AI Accountability Network
The AI Accountability Fellowships seek to support journalists working on in-depth AI accountability stories that examine governments' and corporations’ uses of predictive and surveillance technologies to guide decisions in policing, medicine, social welfare, the criminal justice system, hiring, and more.
INFORMATION & ARTIFICIAL INTELLIGENCE GRANT
AI Reporting Grant
The Pulitzer Center is now accepting applications for its reporting initiative focused on AI technologies and their impact on society.
FOCUS AREA
Information and Artificial Intelligence
The Pulitzer Center supports reporting on the latest technologies, including how algorithms work, who benefits from these systems, and who is harmed.