Opinion: The risks of AI in remote medical consulting



With the increasing use of AI in healthcare, a warning from medical defence organisation MDDUS about the ‘inherent risks in remote consulting’ offers a timely reminder of the potential dangers.

Following our story yesterday on Samsung’s partnership with Babylon Health, an NHS consultant reached out with his (puzzling) experience of Babylon’s technology:

In the video, the AI is told ‘I have a nose bleed’ – a common symptom whose likely causes a medical professional could identify quickly. What follows is over two minutes of bizarre questions, after which the AI calls the symptoms ‘quite complex’ and fails to offer any possible causes.

The service will connect a patient to a qualified doctor for treatment options and prescriptions, but it’s also expected to list potential causes for presented symptoms, sparing patients the inadvisable Googling most of us have tried.

AI in healthcare is supposed to improve efficiency and reduce the burden on primary care services. In this situation, the patient would have struggled through a string of questions, nose bleed and all, only for the AI to fail at a diagnosis that should be trivial. In the end, they’d have to book a GP appointment anyway.

These services make their money from paid appointments, so there will always be the concern that a patient could be led on a futile journey only to be told they need to pay for an appointment. People use the service because they’re worried, so the potential for exploitation is high.

As of writing, there are five complaints against Babylon listed as ‘informally resolved’ with the Advertising Standards Authority (ASA) for potentially misleading claims. At least one, in correspondence seen by AI News, concerned a claim that use of the chatbot is ‘100% safe’ — a claim which has since been pulled from the company’s advertising materials.

The study (PDF) behind the claim says it recruited 12 clinicians with at least four years of clinical experience, along with 17 nurses, to perform triage in a semi-naturalistic scenario. Actors performed mock consultations, based on patient vignettes, with both the clinicians and the nurses.

Here are the results:

    • Nurses — 73.5% accurate, 97% safe
    • Doctors — 77.5% accurate, 98% safe
    • Babylon Health — 90.2% accurate, 100% safe

All the sessions were timed, and Babylon was the fastest in 89% of cases, taking a median time of 01:09. On average, a consultation with a doctor took 03:12, while a nurse took around 02:27.

The earlier video results in a failed diagnosis, which can be classed as neither safe nor unsafe advice. However, it doesn’t take even basic medical training to question the safety of the following advice:

Algorithms for all of the provided examples have since been updated, but you can see a collection of these problems here.

In MDDUS’ assessment, there are three key issues with remote consultations in general:

    • lack of prior knowledge of a patient
    • ensuring adequate consent
    • providing continuity of care

“Some systems for remote consulting use online forms based on algorithms in order to direct diagnostic questioning,” wrote the MDDUS. “Such systems present numerous inherent risks such as a lack of ‘relevant negatives’ – you only see what is on the form.”

“There is also increased potential for misunderstanding in regard to how the patient is interpreting questions, and barriers within the system to seek further clarification.”
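
To make MDDUS’ ‘relevant negatives’ point concrete, here is a minimal, hypothetical sketch of a form-driven triage flow (the triage function, symptom keys, and thresholds below are invented for illustration and do not represent Babylon’s or any real product’s logic). Because advice is computed only from the answers the form collects, a danger sign the form never asks about cannot influence the outcome.

```python
# Hypothetical form-based triage sketch: illustrative only, not any real product's logic.

def triage(answers: dict) -> str:
    """Return advice derived solely from the fixed form's answers."""
    # Only fields that exist on the form are ever consulted.
    nosebleed = answers.get("nosebleed", False)
    prolonged = answers.get("duration_over_20_min", False)

    if nosebleed and prolonged:
        return "Seek urgent care."
    if nosebleed:
        return "Self-care: pinch the soft part of the nose for 10-15 minutes."
    return "No action suggested."

# The 'relevant negative' gap: the form never asks whether the patient takes
# anticoagulants, something a GP would routinely check for a nosebleed.
# The algorithm cannot weigh information it never collects.
patient = {"nosebleed": True, "duration_over_20_min": False}
print(triage(patient))  # "Self-care..." even if the patient is on warfarin
```

A clinician, by contrast, can ask an unscripted follow-up the moment an answer raises suspicion; that flexibility is exactly what a fixed form forfeits.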

Furthermore, the CQC (Care Quality Commission) recently criticised some online providers for failing to notify a patient’s regular GP when issuing prescriptions. This affects the ability of primary care services to maintain a full record of what medications a patient is using and how they have responded to them.

Campaigning organisation Keep Our NHS Public also questioned the practices of some of these services.

In a poster (PDF), the campaigners claim GP at Hand “is using IT to hoover up NHS patients all round London, using NHS money.”

Even worse, they accuse it of ‘cherry-picking’ patients to reduce costs.

“GP at Hand seems to be deliberately targeting healthy young people. They won’t take you on if you’re pregnant, frail and elderly, or have a terminal illness. They don’t want patients with complex mental health problems, drug problems, dementia, a learning disability, or safeguarding needs. We think that’s because these patients are expensive.”

AI has a lot of potential in healthcare to improve efficiency, diagnosis, and treatment, but this serves as a reminder of how robust these systems need to be for that potential to be reached. Otherwise, they can simply be dangerous.
