“While it’s clearly important to try to detect and prevent genuine fraud, this needs to be balanced against the risk of people being unfairly denied vital financial support,” he said.
“It’s therefore positive to hear that the DWP is no longer routinely suspending claims flagged by their AI-powered fraud detection. However, too many people are still struggling to access the support they need, while feeling under constant suspicion by the DWP.”
There are already mistakes in the welfare system. People are sometimes wrongly refused benefits, and some have been incorrectly sanctioned. The Big Issue has reported on some of these cases, including a blind woman who had her disability benefits wrongly slashed and a woman who was falsely accused of owing the DWP more than £12,000.
Adding AI into the mix could compound these risks, warned Steve Kuncewicz, a lawyer specialising in data and privacy.
“[This decision] seems to be a significant recognition of the risk that can come about from AI being rolled out without very careful diligence,” he said.
The technology could ultimately be useful for detecting fraud, he said – but only after careful consideration of the risks.
A lack of human contact can also create problems for people attempting to access financial help, warned Michael Clarke, head of information programmes at Turn2us.
The decision to stop automatic suspension is a “positive step”, he said, but added that “concerns remain about the pressure on claimants to quickly engage with investigators.”
“We urge the DWP to make this change permanent and to continue improving the system with transparent, accountable practices and essential human oversight,” he concluded.
How does the DWP use AI?
The department has not been clear on just how much it uses AI. It has previously declined to publish the results of an equalities assessment around using machine learning technology.
The rollback of the automatic suspension policy doesn’t mean the end of AI in the DWP. In December, the department purchased an automated ‘social media listening’ tool that will be able to search claimants’ profiles for specified terms and flag up thousands of individuals’ posts each day.
But the Horizon Post Office scandal has brought increasing attention to the risks of automation. Between 2000 and 2014, more than 700 Post Office branch managers across the UK were prosecuted on the basis of a faulty IT system called Horizon, which made it look like innocent postmasters had stolen money.
While not specifically involving AI, the scandal is a testament to the importance of human oversight, said Kuncewicz.
“Horizon led to very negative results – and that was a system with a fair amount of human intervention. It sounds like this [the AI system] may have had even less,” he said.
“So it’s good they’re pausing it, going away, and hopefully properly assessing what the risks might be.”
When asked if the technology could lead to a situation similar to the Horizon scandal, the DWP official Neil Couling said: “I really hope not.”
A DWP spokesperson said that humans have – and always have had – the final say in claims.
“The department continues to explore the potential of new technologies in combatting fraud but we have always been consistent that a member of staff will always make the final decision to determine fraud or error,” they said.
The rollback of AI is welcome, said Jamie Thunder, policy and public affairs officer at Z2K. But all suspensions can be devastating, regardless of whether they are issued by a human or an AI bot.
“We see people who have a universal credit claim suspended by the review team. And that suspension can go on for months, even while the claimant is providing the evidence that apparently they need to be providing,” he said.
“Because it’s a suspension, not a decision to close the claim, you can’t challenge that formally. Which really puts someone in a catch-22 position. If AI is part of that decision, we certainly have concerns. But even if it isn’t, this is a really unfair way to run a system.”
Trust in the DWP is consequently “quite low” – and the use of AI could further erode it, Thunder warned.
“The decisions that the department makes have a huge impact on every aspect of people’s lives. It affects their money, it can affect their housing, it can affect their relationships, ultimately, it can affect their health,” he added.
“Rolling out AI without transparency on how it works adds to the fear that many people already have of the department. There’s already this sense that someone, in some office somewhere in Caxton House, is going to make a decision, and it could ruin my life.”
“AI really shouldn’t be used for something that has a direct negative effect on someone’s claim.”