How Denmark’s Welfare State Became a Surveillance Nightmare

Jacobsen also argues that machine learning is fairer than analog methods. Anonymous tips about potential welfare cheats are unreliable, she claims: in 2017, they made up 14 percent of the cases selected for investigation by local fraud officials, while cases from her data unit accounted for 26 percent. By that measure her unit outperforms anonymous tips, though nearly half of the cases local investigators decide to take on still come from their own leads. Random selection is also unfair, she claims, because it means burdening people when there are no grounds for suspicion. “[Critics] say that when the machine is looking at data, it is violating the citizen, [whereas] I might think it’s very violating looking at random citizens,” Jacobsen says. “What is a violation of the citizen, really? Is it a violation that you are in the stomach of the machine, running around in there?”

Denmark isn’t alone in turning to algorithms amid political pressure to crack down on welfare fraud. France adopted the technology in 2010, the Netherlands in 2013, Ireland in 2016, Spain in 2018, Poland in 2021, and Italy in 2022. But it’s the Netherlands that has provided the clearest warning against technological overreach. In 2021, a childcare benefits scandal, in which 20,000 families were wrongly accused of fraud, led to the resignation of the entire Dutch government. The scandal unfolded after officials interpreted small errors, such as a missing signature, as evidence of fraud and forced welfare recipients to pay back thousands of euros they’d received in benefits.

As details of the Dutch scandal emerged, it was found that an algorithm had selected thousands of parents—nearly 70 percent of whom were first- or second-generation migrants—for investigation. The system was abandoned after the Dutch Data Protection Authority found that it had illegally used nationality as a variable, a practice Amnesty International later compared to “digital ethnic profiling.”

The EU’s AI Act would ban any system covered by the legislation that “exploits the vulnerabilities of a specific group,” including those who are vulnerable because of their financial situation. Systems like Jacobsen’s, which affect citizens’ access to essential public services, would also likely be labeled as “high risk” and subject to stringent requirements, including transparency obligations and a requirement for “high levels of accuracy.”

The documents obtained by Lighthouse Reports and WIRED appear to show that Denmark’s system goes beyond the one that brought down the Dutch government. They reveal how Denmark’s algorithms use variables like nationality, whose use has been equated with ethnic profiling.

One of Denmark’s fraud detection algorithms attempts to work out how someone might be connected to a non-EU country. Heavily redacted documents show that, to do this, the system tracks whether a welfare recipient or their “family relations” have ever emigrated from Denmark. Two other variables record the recipient’s nationality and whether they have ever been a citizen of any country other than Denmark.

Jacobsen says that nationality is only one of many variables used by the algorithm, and that a welfare recipient will not be flagged unless they live at a “suspicious address” and the system isn’t able to find a connection to Denmark. 
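
The redactions leave the model’s internals unknown, but the decision rule Jacobsen describes can be sketched directly. The following Python sketch is illustrative only: the record fields, the suspicious_address flag, and the danish_connection_found check are assumptions standing in for whatever the unit’s system actually computes.

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    """Hypothetical record; field names are assumptions, not the unit's schema."""
    nationality: str                     # e.g. "DK"
    ever_foreign_citizen: bool           # ever held citizenship outside Denmark
    emigrated_from_denmark: bool         # recipient has emigrated at some point
    family_emigrated_from_denmark: bool  # any "family relations" have emigrated
    suspicious_address: bool             # flagged elsewhere in the pipeline
    danish_connection_found: bool        # system located a connection to Denmark

def non_eu_connection_signal(r: Recipient) -> bool:
    """The variables the documents describe for inferring a non-EU connection."""
    return (
        r.emigrated_from_denmark
        or r.family_emigrated_from_denmark
        or r.nationality != "DK"
        or r.ever_foreign_citizen
    )

def flag_for_investigation(r: Recipient) -> bool:
    """Jacobsen's stated safeguard: a recipient is flagged only if they live
    at a 'suspicious address' AND the system cannot find a connection to
    Denmark -- nationality alone is never sufficient."""
    return (
        non_eu_connection_signal(r)
        and r.suspicious_address
        and not r.danish_connection_found
    )
```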

The documents also show that Denmark’s data mining unit tracks welfare recipients’ marital status, the length of their marriage, who they live with, the size of their house, their income, whether they’ve ever lived outside Denmark, their call history with the Public Benefits Administration, and whether their children are Danish residents. 
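
Laid out as a single record, that list amounts to a detailed per-recipient profile. Here is a minimal sketch, assuming field names and types the documents do not specify:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RecipientProfile:
    """Illustrative layout of the attributes the documents say are tracked.
    Field names and types are assumptions inferred from the reporting,
    not the data mining unit's actual schema."""
    marital_status: str                     # e.g. 'married', 'cohabiting'
    marriage_length_years: Optional[float]  # length of the marriage, if any
    household_members: List[str]            # who the recipient lives with
    house_size_m2: float                    # size of their house
    annual_income_dkk: float                # their income
    ever_lived_outside_denmark: bool        # any period of residence abroad
    benefits_admin_call_count: int          # call history with the Public
                                            # Benefits Administration
    children_danish_residents: bool         # whether their children reside in DK
```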
