
Matt Hancock’s artificially intelligent NHS needs careful scrutiny

With preventative medicine a hot topic, we look at the health warnings that should come attached to AI solutions

Health Secretary Matt Hancock went all out on prevention last week, detailing his ‘better than cure’ vision for the NHS.

“Securing our nation’s health requires a significant and sustained effort to prevent illness and support good physical and mental health,” he said. “We need to see a greater investment in prevention.”

Clearly, Hancock has taken a leaf out of The Big Issue’s book, picking up our ongoing call for preventative actions to make a better tomorrow. But at the heart of Hancock’s preventative scheme lies a focus on emerging technologies for what he calls ‘predictive prevention’ – emerging technologies that may be uneasy bedfellows with a very human health service.

Using digital services like patient-targeted apps, along with artificial intelligence (AI) platforms designed to predict and advise on disease prevention, Hancock wants technology front and centre of the NHS battle plan.

“Advances in artificial intelligence are helping staff to diagnose and treat conditions quicker and more effectively,” Hancock said. “We are testing new ways of providing people with preventative advice, using cutting-edge technology.”

The rapid advancement of AI in healthcare over the last decade has been extraordinary. IBM’s Watson platform works alongside doctors, sifting through thousands of cancer reports to identify the best courses of treatment, while Google’s DeepMind technology uses vast amounts of personal patient data to identify diseases more efficiently than any human doctor could. But that doesn’t necessarily mean more effectively.


Hancock, taking his lead from these technology giants, envisages AI playing a central role in prevention and diagnosis. Doctors could deploy systems that monitor whether patients are taking their prescriptions correctly. But these platforms are not without their faults, and although it is still early days, incidents involving both Watson and DeepMind have highlighted the ethical issues of rolling out AI services without proper oversight.

In July, internal IBM documents obtained by health news site Stat alleged that Watson was recommending “unsafe and incorrect” cancer treatments in certain cases. DeepMind, on the other hand, landed London hospital The Royal Free in hot water last year after the hospital broke privacy laws by handing over the personal data of 1.6 million patients for DeepMind’s AI-powered Streams app.

Bias

AI’s problems are not limited to healthcare, and the technology’s biggest criticism right now is bias, of which there are countless examples. PredPol, an algorithm used by police forces in the United States to predict when and where crimes will happen, was found in 2016 to be directing officers towards neighbourhoods with a higher proportion of minority residents, even though crime rates in those areas told a different story.

This bias, born of the prejudices already present in the data that humans feed in, is only amplified when decisions are left to an automated system, as examples like PredPol show. Add a deliberate desire to save money into the mix and the consequences could be dangerous. We need only look at Volkswagen’s doctoring of its engine software to cheat diesel emissions tests to see how automated systems in the control of private companies can lead to undesirable outcomes.

Discussing the ethics of bias in AI, David Magnus, director of the Stanford Center for Biomedical Ethics, writes on the Stanford Medicine blog that “you can easily imagine that the algorithms being built into the healthcare system might be reflective of different, conflicting interests”.

“What if an algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?” he asks.

Critics of the plan are determined to ensure that any such rollout of AI in the NHS is monitored closely.

A Lords committee report examining the ethical and social implications of AI concluded earlier this year that it should not be acceptable to deploy any AI system that could have a substantial impact on an individual’s life “unless it can generate a full and satisfactory explanation for the decisions it will take”.

When looking at the handling of sensitive data, the report said there must be no repeat of the Royal Free controversy: “Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI.”

As the NHS continues to endure financial and staffing challenges, and as private organisations such as Google are brought into the fold, it doesn’t take a stretch of the imagination to see how AI could be used to cut costs at the expense of the personalised care that human doctors provide (the NHS has already used AI secretaries in one hospital this year, with the government touting a £220,000 saving).

Prevention should be at the heart of healthcare; it is a proven way of protecting patient wellbeing by tackling illness at its earliest stages. But like any new technology, the implementation of AI must be examined, monitored, improved and kept in check. If the government allows the kind of unchecked mass rollout of AI that has produced racial, class and economic bias around the world, Orwellian means will end up dictating NHS healthcare policy.

Ben Sullivan is The Big Issue Digital Editor.

Illustration: Joshua Harrison
