Journalist Resource | July 5, 2023

Reporting On (and With) Artificial Intelligence


The Pulitzer Center’s AI Accountability Network is dedicated to a radical transparency of methods and data in order to make reporting on and with AI more accessible. This is a space where journalists can explore the wide range of approaches used by our grantees and fellows that can serve as blueprints and inspiration for future reporting projects.

REPORTING ON AI

How do you hold AI technologies (and the humans behind them) accountable? Here you will find how AI Accountability Fellows and Pulitzer Center grantees used a variety of approaches—including data analysis, records requests, cross-border collaboration, and shoe-leather reporting—to delve into the real-world impact of AI on policing, social welfare, surveillance, and more.

INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How PCIJ Investigated Grab’s Surge Pricing Model

AI Accountability Fellow Karol Ilagan teamed up with data specialist Federico Acosta Rainis and 20 researchers, spending more than six months examining ride-hailing app Grab’s algorithm and how it affects consumers and drivers.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Investigated Welfare Algorithms in India (Part I)

AI Accountability Fellow Kumar Sambhav Shrivastava and journalist grantee Tapasya investigated opaque welfare algorithms in India that wrongfully cut off benefits to thousands of its poorest citizens. In this piece, Shrivastava shares how the story got started, the questions that drove the reporting, and the lessons they learned.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Investigated Welfare Algorithms in India (Part II)

AI Accountability Fellow Kumar Sambhav Shrivastava and journalist grantee Tapasya investigated opaque welfare algorithms in India that wrongfully cut off benefits to thousands of its poorest citizens. In this piece, Tapasya describes their approach to accessing public records through India’s Right to Information Act, as well as the other reporting methods they used to overcome denied requests and government bureaucracy.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Investigated Ring’s Crime Alert System for Police Departments

Across the country, more than 2,600 police and close to 600 fire departments have partnerships with Ring, the popular doorbell camera company that was acquired by Amazon in 2018. The Markup sought to get a better understanding of what kind of information is sent to police from Ring’s companion app and hyperlocal social platform Neighbors. This article describes our analyses’ data sources, methodologies, findings, and limitations.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How I Investigated the Impact of Facial Recognition on Uber Drivers in India

As part of the investigation, Varsha Bansal conducted a survey of 150 Uber drivers across different parts of India to find out how many of them had been locked out of their accounts—either temporarily or permanently—due to issues related to facial recognition. This investigative effort prompted the gig workers' union to start collecting their own data to petition the platforms.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Did It: Unlocking Europe's Welfare Fraud Algorithms

Lighthouse Reports and WIRED teamed up to examine the growing use and deployment of algorithmic risk assessments in European welfare systems across four axes: people, technology, politics, and business. This methodology explains how they developed a hypothesis and used public records laws to obtain the technical materials necessary to test it.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Investigated Automated and Predictive Technologies at Refugee Camps and Borders in Europe and the U.S.

Lydia Emmanouilidou worked with journalists and researchers to investigate EU-funded high-tech surveillance systems at Greek refugee camps and the Greek border, how they compare to technologies at the U.S.-Mexico border, and U.S.-Greek/European collaboration and lesson-learning on border technology initiatives.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Did It: Peering Into the Black Box

Social Sentinel said its AI technology could help schools prevent suicides and shootings. Our investigation, a comprehensive examination of the use of social media surveillance software on college campuses, found no evidence that any student lives were saved because of an alert from the service. We hope more journalists—particularly student journalists—will continue to examine the impact of artificial intelligence.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

Tracked: How AP Investigated the Global Impacts of AI

Temperature-detecting cameras. Drones. Technology that police say can predict human feelings. As government agencies quietly deployed new surveillance and predictive tools to monitor their citizens in a time of pandemic and protests, the team at AP compared how these tools were applied across nations and probed how people’s personal data can be sold and mined to expand the knowledge infrastructure of governments and corporations.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

10 Takeaways From Journalists at the Forefront of AI Reporting

On February 24, 2022, Pulitzer Center grantees Karen Hao and Joanne Cavanaugh Simpson joined Reuters Executive Editor Gina Chua for a conversation on AI accountability. The speakers shared how they got started reporting on algorithms, discussed their challenges and breakthroughs, and offered tips for colleagues interested in covering this urgent, underreported story.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How To Run a Public Records Audit With a Team of Students

A Markup investigation found that Amazon Ring’s social platform, Neighbors, funnels suspicions from residents in whiter and wealthier areas of Los Angeles directly to the police. We teamed up with five students at the Craig Newmark Graduate School of Journalism at CUNY. Together, the students sent public records requests to 25 different police departments. This methodology gives tips for working with students on a public records audit.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Investigated Mass Surveillance in Argentina

Seventy-five percent of the Argentine capital area is under video surveillance, which the government proudly advertises on billboards. But the facial recognition system, part of the city's sprawling surveillance infrastructure, has drawn criticism after at least 140 database errors led to police checks or arrests once the system went live in 2019. From the beginning of the investigation, we weighed privacy against security and considered the regulation of AI as well as the well-documented racial bias of AI-powered facial recognition.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How Digital Witness Lab Analyzed Data From BJP WhatsApp Groups Ahead of the Indian Elections

A case study on how India’s Bharatiya Janata Party used WhatsApp to spread its message in a small Indian town. We studied messages from 20 WhatsApp groups in the weeks surrounding what many considered the beginning of Prime Minister Narendra Modi’s 2024 election campaign: the Ram Temple inauguration in Ayodhya in January 2024. This methodology provides further details on the statistical analyses presented in Rest of World’s investigation “Inside the BJP’s WhatsApp Machine.”
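
A minimal sketch of the kind of aggregation such a study involves: counting daily message volume per group in a window around the event date. The CSV schema (columns "group", "timestamp", "text") and the file name are hypothetical illustrations, not Digital Witness Lab's actual data format or pipeline.

```python
# Sketch only: daily message counts per WhatsApp group around an event date.
# The column names and file are assumptions for illustration.
import pandas as pd

EVENT_DATE = pd.Timestamp("2024-01-22")  # Ram Temple inauguration (assumed anchor date)
WINDOW_DAYS = 14                         # the weeks surrounding the event

def daily_volume(csv_path: str) -> pd.DataFrame:
    """Return messages per group per day inside the event window."""
    msgs = pd.read_csv(csv_path, parse_dates=["timestamp"])
    window = msgs[
        (msgs["timestamp"] >= EVENT_DATE - pd.Timedelta(days=WINDOW_DAYS))
        & (msgs["timestamp"] <= EVENT_DATE + pd.Timedelta(days=WINDOW_DAYS))
    ]
    return (
        window.groupby(["group", window["timestamp"].dt.date])
        .size()
        .rename("messages")
        .reset_index()
    )

if __name__ == "__main__":
    print(daily_volume("bjp_whatsapp_messages.csv").head())
```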

MACHINE LEARNING IN INVESTIGATIONS

Explore how Pulitzer Center grantees have used machine learning to augment reporters’ capacity to tackle big data and systemic issues. Find out how journalists revealed for the first time the scope of corporate-owned rental homes in North Carolina; calculated the extent of oil-well abandonment in Texas; held land banks accountable in Ohio; and mapped the proliferation of gold mines in the Amazon rainforest.

INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Did It: The Dark Side of Hydropower

Hydropower is a major pillar of sustainable electricity generation. The way it is managed, however, is anything but sustainable: If no measures are taken, reservoirs and hydropower threaten millions of people. Earth observation satellite imagery and neural network-powered coastline detection help unveil this dark side of hydropower.
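
For orientation, here is a simplified sketch of estimating a reservoir's water surface area from satellite imagery. The project itself used neural-network coastline detection; this version instead thresholds the Normalized Difference Water Index (NDWI), a common, simpler stand-in, and the band values and pixel size are assumptions.

```python
# Illustrative sketch: estimate water surface area (km^2) from green and
# near-infrared bands via a simple NDWI threshold. Not the project's model.
import numpy as np

def water_area_km2(green: np.ndarray, nir: np.ndarray,
                   pixel_size_m: float = 10.0, threshold: float = 0.0) -> float:
    """Count pixels where NDWI exceeds the threshold and convert to km^2."""
    ndwi = (green - nir) / (green + nir + 1e-9)  # Normalized Difference Water Index
    water_pixels = np.count_nonzero(ndwi > threshold)
    return water_pixels * (pixel_size_m ** 2) / 1e6

if __name__ == "__main__":
    # Fake 100x100-pixel scene: brighter green, darker NIR, as over open water.
    rng = np.random.default_rng(0)
    green = rng.uniform(0.2, 0.4, (100, 100))
    nir = rng.uniform(0.05, 0.15, (100, 100))
    print(f"Estimated water area: {water_area_km2(green, nir):.2f} km^2")
```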


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

Investigating Rainforest Destruction: Finding Illegal Airstrips with the Help of Machine Learning

From Freedom of Information requests to using artificial intelligence to analyze satellite imagery, the reporters got their hands on previously unseen data that sheds light on the corruption and systems behind the destruction of the world’s biggest rainforests. Learn about their innovative methodologies.


INFORMATION & ARTIFICIAL INTELLIGENCE TOOLKIT

Single-Family Rental Industry Reporting Toolkit

Using machine learning, The Charlotte Observer and The News & Observer looked into a new class of landlords in North Carolina's booming housing market that includes Wall Street hedge funds and other institutional investors. This toolkit is for local and national journalists at a variety of skill levels who are interested in probing the extent of corporate homeownership in their cities, regions, and states.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How They Did It: Uncovering a Vast Network of Illegal Mining in Venezuela

In the Venezuelan Amazon, traces of the devastation caused by illegal mining can be seen from the sky. The investigation grew out of a survey of satellite monitoring data, later processed with artificial intelligence, to see and understand comprehensively how mining has evolved in Venezuelan Guayana, in the north of the Amazon.


INFORMATION & ARTIFICIAL INTELLIGENCE TOOLKIT

New AI Platform Monitors Mining in the Amazon Rainforest

Mining, one of the main causes of the degradation of rivers and forests in the Amazon, can now be monitored remotely by journalists, scientists, and other concerned citizens. The Pulitzer Center, in partnership with Earthrise Media, has launched the Amazon Mining Watch, a platform powered by an algorithm that analyzes satellite imagery to detect gold mines and other open-pit mining activities in the world's largest rainforest.
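
The general pattern behind this kind of detection is tile classification: slice a satellite scene into fixed-size tiles and score each one with a trained model. The sketch below shows that pattern with a tiny, untrained convolutional network; the architecture, tile size, and scoring loop are hypothetical and are not the actual Amazon Mining Watch algorithm.

```python
# Hypothetical sketch of tile-based mine detection, not the Amazon Mining Watch model.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Tiny CNN that scores a 64x64 RGB tile as mining / not mining."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # probability a tile shows mining

def score_scene(scene: torch.Tensor, model: nn.Module, tile: int = 64) -> torch.Tensor:
    """Slice a (3, H, W) scene into tiles and return a grid of mining scores."""
    _, h, w = scene.shape
    rows, cols = h // tile, w // tile
    scores = torch.zeros(rows, cols)
    with torch.no_grad():
        for r in range(rows):
            for c in range(cols):
                patch = scene[:, r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
                scores[r, c] = model(patch.unsqueeze(0)).item()
    return scores

if __name__ == "__main__":
    model = TileClassifier()          # untrained; for illustrating the workflow only
    scene = torch.rand(3, 256, 256)   # stand-in for a satellite scene
    print(score_scene(scene, model))
```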


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

We Used Machine Learning and Computer Vision to Unravel COVID’s Financial Burden on Georgians

In Georgia, a series of Atlanta Journal-Constitution analyses have shown that COVID contributed to hundreds of millions of dollars in increased public debt costs, that Black residents and poorer residents are disproportionately harmed by the bankruptcy system, and that despite all the financial damage that has already occurred, there is a coming wave of bankruptcy filings.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How Do Public Officials Make Land Bank Decisions? Artificial Intelligence May Seek Patterns

Land banks are vital public agencies that play a key part in turning decrepit, abandoned properties back into viable homes before they attract pests and crime. Using machine learning methods, Eye on Ohio examined property remediation in several counties, looking deeper at a process that has transformed the Rust Belt over several years.


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How We Calculated the Size of the Southwest's Abandoned Oil Well Problem

The true scale of oil well abandonment is likely far greater than the official numbers. In this visually stunning and immersive project, Grist and the Texas Observer modeled oil wells that are likely to be abandoned in the coming years and chronicled the experiences of two Texas ranchers struggling to hold oil companies accountable for polluting their properties.
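
As a rough illustration of what "modeling wells likely to be abandoned" can mean, the sketch below fits a logistic regression on a few plausible features (well age, recent production, operator size). The features, synthetic data, and threshold are invented for the example; this is not the Grist and Texas Observer model.

```python
# Hypothetical sketch: scoring abandonment risk with logistic regression.
# All features and data here are synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0, 80, n),      # well age in years
    rng.exponential(50, n),     # barrels produced last year
    rng.integers(1, 500, n),    # number of wells held by the operator
])
# Synthetic labels: older, low-producing wells of small operators are more at risk.
risk = 0.04 * X[:, 0] - 0.02 * X[:, 1] - 0.005 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted abandonment risk for a 60-year-old, low-output well:",
      model.predict_proba([[60, 2, 5]])[0, 1].round(2))
```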


INFORMATION & ARTIFICIAL INTELLIGENCE METHODOLOGY

How the Proyecto Dipteryx Algorithm That Analyzes the Risk of Timber Trafficking Was Built

Proyecto Dipteryx is a journalistic initiative based on the use of an algorithm that flags suspected risk of illegality in the Amazonian timber trade.

WEBINARS

Catch up on recordings of our public events on all things AI.

INFORMATION & ARTIFICIAL INTELLIGENCE RESOURCE

Holding AI Accountable: Who Gets To Tell the Story?

Algorithms have the potential to disproportionately harm some of the most vulnerable members of society by deepening pre-existing social and economic gaps and amplifying racial bias. At the Pulitzer Center, we believe this is not just a tech story but also an accountability and equity story, one that should be part of every reporter’s beat.


INFORMATION & ARTIFICIAL INTELLIGENCE RESOURCE

Champion Donors' Exclusive Event: Joanne Cavanaugh Simpson on AI Accountability

Pulitzer Center grantee Joanne Cavanaugh Simpson offered her insights on artificial intelligence and machine learning (including police surveillance), the intersection of technology and society, and how journalists can approach the tensions between the two in emerging AI technologies.


INFORMATION & ARTIFICIAL INTELLIGENCE RESOURCE

FAQ: What You Need To Know To Join the AI Accountability Network

Featuring Pulitzer Center Executive Editor Marina Walker Guevara and former AI Network Manager Boyoung Lim alongside our AI Fellows, this "ask me anything" webinar focused on tips for applying to the Pulitzer Center's AI Accountability Network.

FUNDING FOR JOURNALISTS

Are you inspired by the blueprints and toolkits from this page? Interested in reporting on or with AI yourself? Here are some opportunities for you to seek support.

INFORMATION & ARTIFICIAL INTELLIGENCE FELLOWSHIP

AI Accountability Fellowships

The AI Accountability Fellowships seek to support journalists working on in-depth AI accountability stories that examine governments' and corporations’ uses of predictive and surveillance technologies to guide decisions in policing, medicine, social welfare, the criminal justice system, hiring, and more.


INFORMATION & ARTIFICIAL INTELLIGENCE GRANT

Machine Learning Grants

The Pulitzer Center encourages proposals that use advanced data mining techniques, such as machine learning and natural language processing, to solve a data or reporting problem related to a journalistic investigation.


INFORMATION & ARTIFICIAL INTELLIGENCE GRANT

Data Journalism Grants

The Pulitzer Center is seeking compelling data-driven storytelling, based on original data collection and analysis and strong visuals, that has the potential to shape public discourse and hold the powerful accountable.

RELATED INITIATIVES

AI Accountability Network

RELATED TOPICS

AI Accountability

Misinformation and Disinformation

Technology and Society