The recent explosion of new artificial intelligence tools has lit up debates about whether these technologies can solve society’s thorniest problems or pose a grave threat to humanity. Last year our cross-border, cross-format investigative team at The Associated Press was prescient in the questions we set out to answer: Which consumers and communities were already most impacted by AI-powered technologies? As government agencies quietly deployed new surveillance and predictive tools to monitor their citizens in a time of pandemic and protests, our team revealed the consequences of these proprietary technologies now used in everything from policing to child welfare.
The results: real-world change, including an end to the use of one government algorithm, a civil rights investigation by the U.S. Justice Department, and action from the White House itself.
We were stuck at home during a life-threatening pandemic, and U.S. health agencies were seeking my private, personal data. Not just my address and insurance details, but potentially my biometric data, too.
And just as the government wanted to know more about me, I wanted to know why it needed my data to fuel new artificial intelligence-assisted tools deployed nearby.
By fall 2020, as the U.S. reeled from COVID-19 losses, a deep racial reckoning, and a divisive election, I strategized over how to uncover more about the technologies agencies were using to make remote decisions.
After a year-long fellowship at Stanford University’s John S. Knight Journalism Fellowship and Institute for Human-Centered Artificial Intelligence, I had learned enough about code to understand how AI models were built.

And after looking through public records disclosed through Freedom of Information Act requests, I knew that if I could team up with my AP colleagues across the world, we could learn more about who these new surveillance and predictive tools were impacting.
I didn’t have an opinion about whether AI was good or bad. I wanted to know how these tools worked and where they were being used, sometimes without the public’s knowledge.
By the next year, I conceived of and launched AP’s Tracked series, aiming to drive coverage of the global impacts of AI on our communities.

Because technology journalism has historically been the purview of a privileged Silicon Valley set, I wanted to build an investigative team that would harness AP’s global footprint and put front and center my colleagues who live in the communities they cover.
By the end of 2022, more than 40 staffers had participated in the Tracked series, including many staffers from outside the U.S., most from backgrounds other than print. Ultimately, more than half of our core Tracked team were journalists of color, and nearly half were women.
Because our content is seen by half the world’s population every day, we deliberately chose to cover AI tools in a way that would draw in a broader audience and shine a light on the people they impact.
That meant producing stories that were audience-friendly, digital-forward, and centered on powerful human narratives.
We began reporting with a mass Freedom of Information Act campaign and soon brought on project manager Sharon Lynch. Our FOIA requests focused on three main areas where pre-reporting had steered us to investigate whether AI tools were having disparate impacts:
- Just as the balance between privacy and national security shifted after the Sept. 11 terrorist attacks, under COVID-19 how were officials embedding tracking tools in society that would likely last long after pandemic lockdowns?
- Were some U.S. child welfare systems risking hardening racial and other forms of bias by deploying algorithms in key decisions about families’ well-being?
- How were U.S. law enforcement in big cities and small towns alike deploying secret new tools that could track cell phone users’ movements via their consumer data, enabling mass surveillance?
When my editor Ron Nixon and I realized that too few journalists had been trained in how these complex statistical models work, we devised internal workshops to build capacity in AI accountability reporting.

Our reporting took place during an unprecedented and challenging two years, as we covered and lived through a pandemic, wars, elections, hurricanes, and mountains of other breaking news. And as we plowed ahead, we ran into some methodological challenges:
- No surprise, FOIA and its equivalents are imperfect tools and rarely yield raw code.
- Government agencies offer little transparency about their use of AI tools, which can leave public knowledge severely restricted even when records are disclosed.
- Viewing predictive and surveillance tools in isolation doesn’t capture their full global influence.
- The purchase and implementation of such technologies isn’t necessarily centralized. Individual state and local agencies may use a surveillance or predictive tool on a free trial basis and never sign a contract. And even if federal agencies license a tool intending to implement it nationwide, that isn’t always rolled out the same way in each jurisdiction.
When we ran into roadblocks or records requests went nowhere, we came up with these alternative approaches to reporting:
- Deepen reporting on who designed, built, and deployed the models.
- Research the applicable contracts, privacy policies, audits, and regulations.
- Collaborate with researchers who are studying these issues programmatically.
- Focus on who may be disparately impacted and/or harmed by the technology and how those impacts may relate to historical biases.
- Seek stories, issues, and applications that transcend the individual and highlight accountability and global connections.

Each investigation required the team to show journalistic dexterity under competitive conditions.
For the global COVID-19 surveillance story, we obtained records with granular details about the health-code apps governments deployed worldwide. Then, we went deeper.
Taiwan-based reporter Huizhong Wu pored over a 100GB database of Chinese government procurement contracts to reveal which COVID-19 tracking technologies were being used and the consumer data that powered them.
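Working through a database that large is less about reading every record than about streaming it and filtering for the right terms. The snippet below is only a hedged sketch of that kind of keyword triage, assuming the contracts had been exported to a large CSV with a hypothetical `description` column; it does not reflect the actual structure of the procurement records.

```python
import pandas as pd

# Illustrative search terms; the real reporting looked for specific technologies and vendors.
KEYWORDS = ["health code", "facial recognition", "location data"]

matches = []
# Stream the large export in chunks rather than loading 100GB into memory at once.
for chunk in pd.read_csv("procurement_contracts.csv", chunksize=100_000):
    mask = chunk["description"].str.contains("|".join(KEYWORDS), case=False, na=False)
    matches.append(chunk[mask])

hits = pd.concat(matches, ignore_index=True) if matches else pd.DataFrame()
print(f"{len(hits)} contracts mention surveillance-related terms")
```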
In Israel, Josef Federman obtained court documents and government filings revealing details about how Israel’s Arab citizens and residents were being surveilled by the Shin Bet.
In India, Krutika Pathi delved into court records as she investigated Hyderabad police’s use of facial recognition cameras amid the pandemic.
For the investigation into child welfare algorithms, investigative reporter Sally Ho and I obtained documents about the predictive tools that local governments use to help forecast which children could end up in foster care. In Allegheny County, Pennsylvania, we studied the outcomes of one such pioneering algorithm.
While we waited for responses to our public records requests, we located a Carnegie Mellon University team conducting a statistical analysis of the local child welfare system’s own data. The researchers found that in its first years of operation, Allegheny’s algorithm showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation compared with white children. County officials said social workers could always override the tool and called the research “hypothetical.”
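The core of that finding, a gap in flag rates between Black and white children, can be illustrated with a short, hypothetical sketch. The column names, values, and simple rate ratio below are assumptions for illustration only; they do not reproduce Allegheny County’s data or the Carnegie Mellon team’s methodology.

```python
import pandas as pd

# Hypothetical case-level referrals; schema and values are invented for illustration.
cases = pd.DataFrame({
    "race": ["Black", "white", "Black", "white", "Black", "white", "Black", "white"],
    "mandatory_flag": [1, 0, 1, 1, 0, 0, 1, 0],  # 1 = flagged for a "mandatory" neglect investigation
})

# Share of referrals flagged, broken out by race.
flag_rates = cases.groupby("race")["mandatory_flag"].mean()
print(flag_rates)

# A simple disparity ratio between the two groups in this toy sample.
ratio = flag_rates["Black"] / flag_rates["white"]
print(f"Black children flagged {ratio:.1f}x as often as white children")
```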
We later received audits of the tool and other key documents via responses to public records requests, which allowed us to chart the algorithm developers’ influence.
Months later, we received what we had sought via records requests: the specific data points underlying the algorithms in numerous child welfare agencies, which were used to determine risk.
And for a story revealing that U.S. law enforcement agencies were using a little-known cell-phone tracking tool made by a company with no website or public information, we relied on a cache of responsive Freedom of Information Act documents that showed how police were following people’s movements based on hundreds of billions of data points gathered from 250 million mobile devices.
The documents gathered by the Electronic Frontier Foundation, a digital privacy advocacy group, helped explain how the “Fog Reveal” technology worked, but investigative reporter Jason Dearen and I supplemented that with a dataset of governmental spending to reliably report that the company behind the tool had about 40 contracts for its software with nearly two dozen agencies.
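Arriving at a figure like “about 40 contracts with nearly two dozen agencies” is, at bottom, a filter-and-count exercise over procurement records. Here is a minimal sketch under assumed column names (`vendor`, `agency`, `contract_id`), not the actual spending dataset used in our reporting:

```python
import pandas as pd

# Hypothetical procurement records; file name and columns are illustrative only.
spending = pd.read_csv("gov_spending.csv")  # columns: vendor, agency, contract_id, amount

# Narrow to the vendor of interest, then count distinct contracts and purchasing agencies.
vendor_rows = spending[spending["vendor"].str.contains("Fog", case=False, na=False)]
n_contracts = vendor_rows["contract_id"].nunique()
n_agencies = vendor_rows["agency"].nunique()

print(f"{n_contracts} contracts across {n_agencies} agencies")
```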
In addition to the partnership with the Pulitzer Center, we also teamed up with the University of California Berkeley’s Human Rights Center Investigations Lab and Stanford University’s Starling Lab for Data Integrity to pilot and explore new open-source and authentication strategies.
Photographer and videographer colleagues told us up front that it would be tough to visualize AI tools that are built not to be seen. That informed our approach: to find the human connection to the stories and to make the investigations visually intimate so audiences could invest in a narrative that might, at first blush, not seem to affect them.
Collaborating with in-house experts in illustration and immersive storytelling also helped set the mood and tone of the stories so they would connect with digital audiences.
All told, the Tracked series drew fresh audiences to our work in English, Spanish, and Arabic, and it gave new urgency to long-standing privacy and civil rights concerns about the use of AI to surveil and harass communities.
Garance Burke, a global investigative journalist for The Associated Press, was the team leader and lead reporter on the Pulitzer Center-supported project Tracked. For more information, reach Burke at [email protected] or @garanceburke.