
March 7, 2023

How Denmark’s Welfare State Became a Surveillance Nightmare





Once praised for its generous social safety net, the country now collects troves of data on welfare claimants.

In a sparsely decorated corner office of the Danish Public Benefits Administration sits one of Denmark’s most quietly influential people. Annika Jacobsen is the head of the agency’s data mining unit, which, over the past eight years, has conducted a vast experiment in automated bureaucracy. Blunt, and with a habit of completing others’ sentences, Jacobsen is clear about her mission: “I’m here to catch cheaters.”

Denmark’s Public Benefits Administration employs hundreds of people who oversee one of the world's most well-funded welfare states. The country spends 26 percent of its GDP on benefits—more than Sweden, the United States, and the United Kingdom. It’s been hailed as a leading example of how governments can support their most vulnerable citizens. Bernie Sanders, the US senator, called the Nordic nation of 6 million people a model for how countries should approach welfare.


But over the past decade, the scale of Denmark’s benefits spending has come under intense scrutiny, and the perceived scourge of welfare fraud is now at the top of the country’s political agenda. Armed with questionable data on the amount of benefits fraud taking place, conservative politicians have turned Denmark’s famed safety net into a polarizing political battleground.

It has become an article of faith among the country’s right-wing politicians that Denmark is losing hundreds of millions of euros to benefits fraud each year. In 2011, KMD, one of Denmark’s largest IT companies, estimated that up to 5 percent of all welfare payments in the country were fraudulent. KMD’s estimates would make the Nordic nation an outlier, and its findings have been criticized by some academics. In France, it’s estimated that fraud amounts to 0.39 percent of all benefits paid. A similar estimate made in the Netherlands in 2016 by broadcaster RTL found the average amount of fraud per benefit payment was €17 ($18), or just 0.2 percent of total benefits payments.

The perception of widespread welfare fraud has empowered Jacobsen to establish one of the most sophisticated and far-reaching fraud detection systems in the world. She has tripled the number of state databases her agency can access from three to nine, compiling information on people’s taxes, homes, cars, relationships, employers, travel, and citizenship. Her agency has developed an array of machine learning models to analyze this data and predict who may be cheating the system.

Documents obtained by Lighthouse Reports and WIRED through freedom-of-information requests show how Denmark is building algorithms to profile benefits recipients based on everything from their nationality to whom they may be sleeping next to at night. They reveal a system where technology and political agendas have become entwined, with potentially dangerous consequences.

Danish human rights groups such as Justitia describe the agency’s expansion as “systematic surveillance” and disproportionate to the scale of welfare fraud. Denmark's system has yet to be challenged under EU law. Whether the country’s experiments with machine learning cross a legal line is a question that could be answered by the European Union’s landmark Artificial Intelligence Act, proposed legislation that aims to safeguard human rights against emerging technologies.

THE DEBATE ABOUT welfare in Denmark changed in October 2012, when officials asked residents to send in photos of suspected welfare cheats in their local area. The call led some left-leaning commentators to warn of a “war on welfare,” and arrived as the far-right Danish People’s Party—which criticized the government for “luring” immigrants with welfare benefits—rose up in opinion polls.

Within a year, consulting firm Deloitte released an audit of welfare fraud controls in Denmark, finding them inadequate to detect fraud in an increasingly digitized welfare system. The auditors, commissioned by the Danish finance ministry, estimated the “short-term savings” of a new “risk-scoring infrastructure” to be €126 million.

Deloitte’s vision was realized in February 2015 with a bill that overhauled the Danish welfare state. It proposed a massive expansion of the Public Benefits Administration’s powers, including the ability to store and collect data on millions of people, access other authorities’ databases, and even request data from foreign governments. Largely unnoticed at the time, it also called for the creation of a “data mining unit” to “control for social benefits fraud.”


The bill was backed by all of the major political parties in Denmark and became law in April 2015. That month, Jacobsen left an IT job in the financial sector to become Denmark’s first head of data mining and fraud detection. 

As Jacobsen got to work, conservative politician Troels Lund Poulsen took office in June 2015 as Denmark’s new employment minister. He implemented random airport checks to catch welfare recipients taking undeclared vacations, and proposed giving the new data mining unit access to welfare recipients’ electricity and water bills in order to detect where they were living. He was joined by a growing chorus of supporters, with one municipality reportedly asking for data from cell towers to track where welfare recipients were staying. “It’s about politics,” Poulsen said in March 2018. “It is important for me to send a clear signal that we will not accept social cheating and fraud.”

Jacobsen’s critics have accused her unit of conducting mass surveillance, but she argues that there are clear safeguards that prevent overreach. Jacobsen says her algorithms don’t actually cancel benefits—they only flag people as suspicious. Ultimately, it is up to a human fraud investigator to make the final call, and citizens have the right to appeal their decisions. “You are not guilty just because we point you out. There will always be a person that looks into your data,” she says.

The majority of Danish residents flagged for investigation are found innocent. Of the nearly 50,000 cases selected by the data mining unit in 2022, 4,000, or 8 percent, resulted in some form of punishment. In the cases where wrongdoing was found, the data mining unit has managed to recover €23.1 million—a significant return on its annual budget of €3.1 million.

But the scale and reach of Denmark's data collection has been criticized by the Danish Institute of Human Rights, an independent human rights watchdog, and the Danish Data Protection Authority, a public body that enforces privacy regulations. Justitia has compared the Public Benefits Administration to the National Security Agency in the US, and claimed that its digital monitoring of millions of Danish residents violates their privacy rights.

Jacobsen says the agency’s use of data is proportional under European data protection laws, and that preventing error and fraud is important to maintain trust in the welfare state. The Public Benefits Administration is also looking to have its algorithms check citizens earlier in the process, when they first apply for benefits, to avoid situations where they have to repay large sums of money. “Most citizens are honest; however, there will always be some citizens who try to get welfare benefits that they are not entitled to,” Jacobsen says. 

Jacobsen also argues that machine learning is fairer than analog methods. Anonymous tips about potential welfare cheats are unreliable, she claims. In 2017, they made up 14 percent of the cases selected for investigation by local fraud officials, whereas cases from her data mining unit amounted to 26 percent, nearly twice as many. Still, close to half of the cases local investigators decide to take on come from their own leads. Random selection is also unfair, she claims, because it means burdening people when there are no grounds for suspicion. “[Critics] say that when the machine is looking at data, it is violating the citizen, [whereas] I might think it’s very violating looking at random citizens,” Jacobsen says. “What is a violation of the citizen, really? Is it a violation that you are in the stomach of the machine, running around in there?”

Denmark isn’t alone in turning to algorithms amid political pressure to crack down on welfare fraud. France adopted the technology in 2010, the Netherlands in 2013, Ireland in 2016, Spain in 2018, Poland in 2021, and Italy in 2022. But it’s the Netherlands that has provided the clearest warning against technological overreach. In 2021, a childcare benefits scandal, in which 20,000 families were wrongly accused of fraud, led to the resignation of the entire Dutch government. It came after officials interpreted small errors, such as a missing signature, as evidence of fraud, and forced welfare recipients to pay back thousands of euros they’d received as benefits payments.

As details of the Dutch scandal emerged, it was found that an algorithm had selected thousands of parents—nearly 70 percent of whom were first or second generation migrants—for investigation. The system was abandoned after the Dutch Data Protection Authority found that it had illegally used nationality as a variable, which Amnesty International later compared to “digital ethnic profiling.” 

The EU’s AI Act would ban any system covered by the legislation that “exploits the vulnerabilities of a specific group,” including those who are vulnerable because of their financial situation. Systems like Jacobsen’s, which affect citizens’ access to essential public services, would also likely be labeled as “high risk” and subject to stringent requirements, including transparency obligations and a requirement for “high levels of accuracy.”

The documents obtained by Lighthouse Reports and WIRED appear to show that Denmark’s system goes beyond the one that brought down the Dutch government. They reveal how Denmark’s algorithms use variables like nationality, whose use has been equated with ethnic profiling.

One of Denmark's fraud detection algorithms attempts to work out how someone might be connected to a non-EU country. Heavily redacted documents show that, in order to do this, the system tracks whether a welfare recipient or their “family relations” have ever emigrated from Denmark. Two other variables record their nationality and whether they have ever been a citizen of any country other than Denmark.

Jacobsen says that nationality is only one of many variables used by the algorithm, and that a welfare recipient will not be flagged unless they live at a “suspicious address” and the system isn’t able to find a connection to Denmark. 

The documents also show that Denmark’s data mining unit tracks welfare recipients’ marital status, the length of their marriage, who they live with, the size of their house, their income, whether they’ve ever lived outside Denmark, their call history with the Public Benefits Administration, and whether their children are Danish residents.

Another variable, “presumed partner,” is used to determine whether someone has a concealed relationship, since single people receive more benefits. This involves searching data for connections between welfare recipients and other Danish residents, such as whether they have lived at the same address or raised children together. 

“The ideology that underlies these algorithmic systems, and [the] very intrusive surveillance and monitoring of people who receive welfare, is a deep suspicion of the poor,” says Victoria Adelmant, director of the Digital Welfare and Human Rights Project.

FOR ALL THE complexity of machine learning models, and all the data amassed and processed, there is still a person with a decision to make at the hard end of fraud controls. This is the fail-safe, Jacobsen argues, but it’s also the first place where these systems collide with reality.

Morten Bruun Jonassen is one of these fail-safes. A former police officer, he leads Copenhagen's control team, a group of officials tasked with ensuring that the city’s residents are registered at the correct address and receive the correct benefits payments. He's been working for the city’s social services department for 14 years, long enough to remember a time before algorithms assumed such importance—and long enough to have observed the change of tone in the national conversation on welfare.


While the war on welfare fraud remains politically popular in Denmark, Jonassen says only a “very small” number of the cases he encounters involve actual fraud. For all the investment in it, the data mining unit is not his best source of leads, and cases flagged by Jacobsen’s system make up just 13 percent of the cases his team investigates—half the national average. Since 2018, Jonassen and his unit have softened their approach compared to other units in Denmark, which tend to be tougher on fraud, he says. In a case documented in 2019 by DR, Denmark’s public broadcaster, a welfare recipient said that investigators had trawled her social media to see whether she was in a relationship before wrongfully accusing her of welfare fraud.

While he gives credit to Jacobsen’s data mining unit for trying to improve its algorithms, Jonassen has yet to see significant improvement for the cases he handles. “Basically, it’s not been better,” he says. In a 2022 survey of Denmark’s towns and cities conducted by the unit, officials scored their satisfaction with it, on average, between 4 and 5 out of 7.

Jonassen says people claiming benefits should get what they’re due—no more, no less. And despite the scale of Jacobsen’s automated bureaucracy, he starts more investigations based on tips from schools and social workers than machine-flagged cases. And, crucially, he says, he works hard to understand the people claiming benefits and the difficult situations they find themselves in. “If you look at statistics and just look at the screen,” he says, “you don’t see that there are people behind it.” 

Additional reporting by Daniel Howden, Soizic Penicaud, Pablo Jiménez Arandia, and Htet Aung. Reporting was supported by the Pulitzer Center’s AI Accountability Fellowship and the Center for Artistic Inquiry and Reporting.

This story is part of a joint investigation between Lighthouse Reports and WIRED. To read other stories from the series, click here.




