Dr Robot will see you now

Hospital doctor turned technology advocate Matt Fenech tells FYi about his vision for the future of artificial intelligence in medicine

Date: 26 July 2018

WHEN we think of artificial intelligence in healthcare, it may conjure up unsettling images of faceless robot doctors treating patients in cold, whitewashed clinics. But despite once believing robots would make better doctors than humans, Dr Matt Fenech is certain that it will be a long time before machines can emulate the empathy, warmth and compassion shown by flesh-and-blood clinicians.

The former NHS research doctor hung up his white coat after 10 years for a career in policy and research and now works as artificial intelligence (AI) project lead at London-based think tank Future Advocacy.

Matt sees great potential benefits for technology in healthcare and says we shouldn’t be threatened by computers or view them as replacements for human doctors.

He recently published a report with Future Advocacy which examines how AI can be (and is already being) used in healthcare, with a firm focus on “avoiding the overhyping and under-delivering”. It identifies areas where AI can be introduced in healthcare, such as chatbots and personalised health advice, as well as ethical, social and political challenges, including the sharing of personal patient data and the potential to exacerbate health inequalities.

Matt says: “We realise there are great opportunities but there are also risks, so we need to have good policies to mitigate these risks. We are trying to develop the best possible policies by speaking to businesses in the private sector, academics (including computer scientists and philosophers), governments, and the general public.”

The mere idea of AI in healthcare has raised many eyebrows. Professor Stephen Hawking once said that “AI is likely to be either the best or worst thing to happen to humanity”. Matt himself divided opinion a few years ago when he wrote in a blog that “robots would make better doctors than human beings”. But he is quick to emphasise that his opinion has since changed.

“I wrote that before we started the [AI in Healthcare] project,” he says. “Healthcare is not just making a diagnosis and prescribing a treatment (the robot may be better at that). For the more nurturing aspects of healthcare, I see no evidence that robots are even close to what a compassionate human being can do. The best healthcare is going to be achieved by a combination of technology and humans.”

Direct benefits

The definition of AI is hard to pin down. Matt describes it in the broadest sense as “having a computer program to solve problems”, something that is already used in healthcare behind the scenes. But, he says, the more obvious, “in-your-face tools” are only just beginning to appear.

During a recent field trip to Alder Hey Children’s Hospital in Liverpool, Matt saw for himself how AI is directly benefiting patients. Children admitted for treatment can now download an app to their smartphone or tablet that offers access to a specially designed piece of AI tech: chatbot “Oli the elephant”. Oli has been programmed to answer commonly asked questions about hospital stays in a way that is easily understandable to children. As well as answering queries such as “what will my operation be like?” and “what happens during a blood test?”, the app uses a reward system after procedures to encourage children to engage with their care.

Matt says: “I was very excited by this technology because doctors don’t always have as much time as we’d like to answer patients’ questions. Having an alternative way of helping them is a good thing.”

Managing risks

Oli has been well received but, as with all AI, there are ethical questions to consider. Do users know they are speaking to a robot rather than a human being? And what happens when something of a sensitive nature is asked? In Oli’s case, the young patients are advised to speak to a parent or healthcare professional.

The Future Advocacy report identifies three overarching ethical themes in the use of AI in healthcare: consent, fairness and rights. It raises questions such as how users can give meaningful consent to an AI where there may be an element of autonomy in the algorithm’s decisions, or where we do not fully understand those decisions. Who will be held responsible for algorithmic errors? Will these technologies help eradicate or exacerbate existing health inequalities, and how can we ensure they are not only accessible to wealthier patient groups? It also asks whether future patients will have the right to refuse AI involvement in their care altogether.

The report stresses that tools must be developed to address “real-world patient and clinician needs” and to ensure that the voices of patients and relatives are heard.

Better together

Despite his belief in the potential of AI, Matt says robots won’t be replacing doctors and nurses “anytime soon”, but he doesn’t rule out such progress in the next few decades.

His future vision is a positive one: “We want to combine what doctors are good at (the empathy, the negotiation, the communication) with what machines are good at (like number crunching, data analysis, and the speed of doing so). We want to use the right tool in the right situation.”

Reflecting on his past work as an endocrinologist and diabetes specialist, Matt says there were many occasions when AI could have benefited both him and his patients.

The London-based medic, who moved to the UK from Malta 12 years ago, explains: “Having a quick data-analytic tool to compare blood tests and identify trends would have been much better, particularly in the diabetes clinic where patients bring their blood sugar results and I would spend half the 10-minute consultation looking over them. If I could’ve fed this into an AI algorithm, for example, it would have freed up time to speak to my patient.”

Matt also sees a role for AI in reducing doctors’ workloads.

He says: “One of the reasons I left frontline care was that I didn’t have enough time to communicate with patients. The pressure is such that you get five minutes to talk to someone with a very complex condition, which is never going to be enough. I constantly felt I was playing catch-up and never doing a good job. AI technology could help with those aspects.”

AI is often criticised for taking away face-to-face interaction between doctor and patient when, in fact, it could enhance it. “That is the optimistic view and I think we can get there,” says Matt, “but there is also a dystopian view where people sit at home on a computer and a robot talks to you and you never see a doctor or nurse. The technology could do that but I don’t think that would be the best approach, nor would people want that.”

However, there are circumstances in which patients may prefer to speak to a robot rather than a human being, particularly in mental health.

“The most important thing in the use of AI is to offer people choice,” he adds. “The potential is huge.”

Kristin Ballantyne is a freelance writer based in Glasgow

PHOTO: MATT FENECH

