Politicians are well aware of the risks of AI – government departments have published their own research on the topic. But it seems ministers are considering pushing ahead with plans to use AI in public services.
A 2023 white paper on the topic showed a reluctance to increase AI regulation and called for a “pro-innovation” approach. Meanwhile, deputy prime minister Oliver Dowden has also been touting the “huge potential” in using AI to drive economic growth.
So what are the real risks of the DWP using AI in the benefits system? Are there any potential benefits? And how likely is it to actually happen? We asked a lawyer, a public policy expert, an activist, an academic and charities for their thoughts.
What are the risks of AI for benefits claimants?
Steve Kuncewicz, a lawyer specialising in data protection at solicitors Glaisyers, explained there could be significant human impact if the government rolled out AI in the welfare system too quickly and at scale.
“The risk is benefit payments not being made against the cost of living crisis in relation to some incredibly vulnerable people,” he said. “That risk could be really, really significant.”
“We’re talking about the ability of benefit claimants to feed and heat themselves,” Kuncewicz added. “Whenever we look at data protection and privacy issues over the course of the last few years, a lot of it has been around Facebook tracking you or Google ads. This is way more fundamental.”
There are already errors in the welfare system which mean people are mistakenly refused benefits, but adding inaccurate AI into the mix could make matters worse.
A lack of human contact could also create further problems for people attempting to access financial help, many of whom will be vulnerable and marginalised.
Tom Pollard, head of social policy at the New Economics Foundation, added: “Although using AI to automate parts of the social security system might be attractive to the DWP, they need to be very cautious about the potential negative impact on those being supported.
“People already often feel disempowered and overwhelmed when interacting with the system. The use of opaque algorithms risks making processes and decisions less transparent, when the focus should be on ensuring they are more understandable.”
This transparency is key. A disability activist and benefits claimant, who goes by the name of Ben Claimant, said: “AI is a difficult subject for a claimant as we don’t actually know what is going on in the DWP. The decision-making is kept private and in general they are not transparent, so it is hard to know how AI is being used.”
Ayla Ozmen, director of policy and campaigns at Z2K, added: “Although AI could have a role to play in the social security system in the future, there is a real risk that greater use of AI and automation leads to even more opaque and incorrect decisions being made by DWP.
“Any use of AI by DWP must be transparent, challengeable, and demonstrably free from bias – and the final decision must rest with a decision maker, not an algorithm. Crucially, DWP must take great care when any decision could lead to someone being denied money they need to survive.
“Yet DWP’s track record in this area gives us no confidence this will happen. We have seen clients left in limbo for months while the unaccountable and uncontactable enhanced review team has suspended their universal credit claim, yet without a decision we can’t formally challenge it.
“DWP needs to sort out its current decision making processes that leave people in deep poverty and destitution, and denied their legal entitlements, before extending the use of AI.”
How does automated universal credit impact claimants?
The DWP is already making errors when using automated systems (which are distinct from AI, but illustrate similar issues) to calculate universal credit payments.
Morgan Currie, a senior lecturer in data and society at the University of Edinburgh who has conducted research on automated universal credit, explained: “We found that people who receive universal credit often didn’t understand their payment calculation, which is automated, or why they had certain amounts deducted from their payment, and there were times when universal credit staff couldn’t explain the payment calculation to them either.
“We also found that employed people frequently experienced errors in their payment – people told us they found it frustrating that they couldn’t have in-person meetings or phone calls with universal credit staff to quickly address these errors – rather, they’d be directed to write to case managers through an online account.”
One claimant told Currie: “It’s very impersonal. You’re just a statistic to the DWP and the universal credit people, you’re just a number to them, you’re not a human at the end of the day. I feel like there’s no getting your circumstances across through putting a message in your journal.”
Are there any benefits to the DWP using AI in the welfare system?
In ideal circumstances, AI would mean a much more efficient benefits system which helps people.
Michael Clarke, Head of Information Programmes at Turn2us, said: “Clearly there are potential benefits for users in terms of speedier decision making about claims and consistent application of criteria.
“Our main concern would be that whoever makes these decisions is fully transparent about the technology involved. Further things to consider are the scope of AI involvement in decision making, the details of how the AI is trained and, crucially, that there are human checks and balances in place to deal with mistakes and their consequences.
“If there is also a comprehensive review and appeals process it will mean that people can have some faith in the system. People’s lives and wellbeing should be at the centre of all these decisions.”
The DWP already makes mistakes over benefits decisions, as The Big Issue has extensively reported. Around 83% of DWP decisions about personal independence payments (PIP) are overturned when claimants appeal at tribunal – essentially an admission that the original decision was a mistake.
A part of this is down to the backlog and workload of the assessors who are asked to make decisions with time limits and targets. Some former assessors told The Big Issue they faced so much pressure to meet targets that they were driven to “panic attacks”, and they believed this led to mistakes.
Using technology such as AI could improve efficiency and help with backlogs, while easing the pressure on staff – but the technology would have to be accurate and free from bias.
Pollard added: “DWP should prioritise building supportive, trusting relationships between staff and people supported by social security, as this has been shown by their own evidence to be the most effective route to helping people overcome the barriers they face.
“It is also vital that any use of AI does not simply repeat and cement existing prejudices in the system, further disadvantaging marginalised groups such as disabled people or people who don’t have English as a first language.”
For people who have lost trust in the DWP, it is hard to envisage a world where the department uses AI to help vulnerable people.
Ben Claimant said: “AI could be used to benefit people. It could be used to make sure we are getting all the benefits we deserve. It could be used to support claimants in finding work. It could be used to make the lives of disabled people better.
“There are so many positive possibilities but the DWP is not exploring any of that. They just want to catch us out to stop our benefit income. Using AI in such a negative manner might save money short term but it isn’t helping society. It doesn’t help the economy. Like sanctions, AI used negatively will likely cost more than it saves.”
Could AI actually be used by the DWP in the near future?
AI is already being used by the DWP to detect fraud, and automated systems are used to determine eligibility for some benefits. But Kuncewicz believes that, legally, it will be difficult for the government to implement AI at scale for quite some time because of the risks it poses to human rights.
“Anything’s a legal possibility,” he said. “But an awful lot of work needs to be done.”
Technology has come a long way, even just this year, but AI systems still produce inaccuracies. That means the government could find itself in a legal predicament if it used the technology in a way that harmed people's lives.
“It’s the idea that you might be using an AI system based on information that isn’t quite right,” Kuncewicz explained. “Even a government with the best will in the world with unlimited resources to throw at it, not all the data is as accurate as they would like it to be.”
But there are concerns that the government could roll it out too quickly anyway. “You can understand the allure of it,” Kuncewicz added. “Especially around election season, benefit payments are always quite hot potato. And if all of a sudden, there’s the suggestion that AI could underpin government efficiency.
“There will always be a worry that technology is adopted very quickly. But having said that, I think the government is well aware that there are a lot of hoops they’d have to jump through before any of this comes anywhere near a reality.”
There’s also the law, which does have powers to hold the government to account. “What we have seen recently is a real uptick in claims based on the misuse of personal data,” Kuncewicz said. “We’ve seen them against the government.
“One of the leading cases in this area of law dealt with the fact that a spreadsheet involving the details of asylum seekers in the UK was made public on the internet for about two weeks. That was a seminal case which led to a claim against the Home Office.
“Data protection claims are tougher to bring than they used to be. But in circumstances like this, you could imagine that you’d be looking at a group of people who’ve all got a similar kind of interest. You could see law firms gearing up to try and put big claims together.
“You’ve then got the Information Commissioner sitting behind all this as the privacy regulator. The Information Commissioner has absolutely no problem holding the government to account on issues like this.”