
Over the past year, Gabriel Geiger has led an investigation with Lighthouse Reports and partners into the deployment of predictive welfare fraud algorithms across Europe. Focusing on five countries, they sent freedom of information requests for materials such as source code, data, and technical documentation, and supported citizens in sending GDPR requests to see how their own data had been processed.

What they found were algorithms that process vast troves of data on benefits recipients, covering everything from their mental health and language fluency to the clothes they wear and the nature of their sexual relationships. The risk assessments produced by these algorithms have life-changing implications for the people being scored, even though those people have no knowledge of how their scores are calculated.

Through painstaking FOIA work, they gathered the documentation required to technically audit the deployment of one such machine learning algorithm for bias.
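The piece does not detail the audit methodology, but as a rough illustration, a bias audit of a deployed risk-scoring model typically compares outcomes across demographic groups. The minimal sketch below uses entirely hypothetical file and column names (scores.csv, risk_score, fraud_confirmed, nationality, none of which come from the investigation) to compute per-group flag rates and false-positive rates, two standard fairness checks.

```python
"""Minimal sketch of a group-level bias audit for a risk-scoring model.

Assumes a hypothetical CSV with one row per benefits recipient:
a model risk score, whether fraud was actually confirmed, and a
demographic attribute. All names here are illustrative.
"""
import csv
from collections import defaultdict

THRESHOLD = 0.5  # hypothetical cut-off above which a person is flagged


def audit(path: str, group_col: str) -> None:
    # Per-group counters: total people, people flagged, innocent people,
    # and innocent people who were nonetheless flagged (false positives).
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "innocent": 0, "false_pos": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            s = stats[row[group_col]]
            flagged = float(row["risk_score"]) >= THRESHOLD
            fraud = row["fraud_confirmed"] == "1"
            s["n"] += 1
            s["flagged"] += flagged
            if not fraud:
                s["innocent"] += 1
                s["false_pos"] += flagged
    # Large gaps in flag rate or false-positive rate between groups
    # are the kind of disparity a bias audit looks for.
    for group, s in sorted(stats.items()):
        flag_rate = s["flagged"] / s["n"]
        fpr = s["false_pos"] / s["innocent"] if s["innocent"] else float("nan")
        print(f"{group}: flag rate {flag_rate:.1%}, false-positive rate {fpr:.1%}")


if __name__ == "__main__":
    audit("scores.csv", group_col="nationality")
```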

Cover image by Anton Grabolle / Better Images of AI / Classification Cupboard / CC-BY 4.0.
