In his latest opinion piece, Shawview Consulting’s Brendan Shaw draws from classic sci-fi to reflect on the pros and cons of using artificial intelligence for health technology assessment. Shaw concludes that, despite the massive advances that AI represents, the human touch will always be needed in healthcare decision-making.
“I’m sorry, Dave. I’m afraid I can’t do that”
HAL, 2001: A Space Odyssey
Could we replace health technology assessment (HTA) committees that evaluate medical technologies with artificial intelligence?
Could we replace a country’s existing HTA committees with a single computer system that decides whether to fund a medicine, vaccine, device, or diagnostic?
It could tell you a medical technology’s clinical effectiveness, the cost-effectiveness ratio, a recommended price, and how much to spend, all at the push of a button. Heck, it could specify the clinical eligibility restrictions and even tell you by name which patients in a country should receive the technology.
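To make concrete the kind of arithmetic such a machine would be automating: a funding recommendation often hinges on an incremental cost-effectiveness ratio (ICER) compared against a willingness-to-pay threshold. The sketch below is a deliberately simplified illustration with invented figures and a hypothetical threshold, not a description of any real HTA system’s model.

```python
# Illustrative only: a toy incremental cost-effectiveness calculation.
# All figures and the threshold are invented for the example; real HTA
# appraisals weigh far more than this single ratio.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical new medicine vs. current standard of care
ratio = icer(cost_new=40_000, cost_old=18_000, qaly_new=6.1, qaly_old=5.3)

# Hypothetical willingness-to-pay threshold (cost per QALY the payer accepts)
THRESHOLD = 30_000

print(f"ICER: {ratio:,.0f} per QALY gained")
print("Recommendation:", "fund" if ratio <= THRESHOLD else "do not fund")
```

The ratio, of course, is only the start of the conversation: equity, severity, unmet need and the other human considerations discussed below all sit outside it.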
The benefits of using AI and not humans in HTA
Just think for a minute about what might be possible if we replaced all the humans in the HTA system with a computer. No more interminable committee meetings stretching on for days, with members poring over reams and reams of reports, data and evaluations.
The computer could just hum for a few seconds and then spit out the answer.
‘Yes, fund this Covid vaccine.’
‘No, don’t fund this cancer medicine.’
‘Yes, fund this artificial hip, but only for Mrs Marsh who lives at 37 Smith Street, London.’
Just think of the improvements and efficiencies that would result.
We constantly hear how HTA systems are becoming more complicated, more convoluted, and more expensive to run. So why not replace them with a machine? If we can let AI systems drive robotaxis around cities and diagnose lung cancer, why not let them evaluate whether to treat humans with a medicine or a diagnostic?
You wouldn’t need layers of HTA bureaucracy. All those experts on evaluation committees and subcommittees spending weeks, months and years arguing over the finer points of HTA would be gone—no more HTA committee meetings.
No more drug lobbyists wandering the corridors of Congress or Parliament. What’s the point of lobbying if a computer that doesn’t face the voters or care about community anger is deciding what medicines or devices to fund in the health system?
No more activism or advocacy would be needed from patient groups. Instead, they could focus primarily on patient services.
Health ministers and politicians could simply leave it to a computer to make the funding decision and have plausible deniability. ‘Sorry, people. The computer says ‘no’, so we can’t fund that cancer medicine.’
Instead, a country’s HTA system would be reduced to an impartial, unemotional, and dispassionate evaluation of the evidence by a computer.
A bit like Deep Thought from The Hitchhiker’s Guide to the Galaxy or HAL from 2001: A Space Odyssey, the computer could decide. All the analysis and decision-making about what medical technologies to fund, how much to spend on them, and who would get them could be left to an all-knowing computer, unafflicted by emotion or resentment, immune to insults or lobbying, deaf to appeals to compassion. Moreover, the computer wouldn’t need to be paid for its time.
What’s not to like?
The costs of using AI and not humans in HTA
Well, quite a lot, actually.
It turns out that human ethics, empathy, equity, values, ingenuity, problem-solving and flexibility all play a part in successfully getting medical technologies funded in today’s health systems.
Imagine a drug price negotiation system with no room for flexibility or problem-solving, or one where a stupid administrative glitch just kept sending the issue back to the evaluation computer to get the same answer over and over again.
Imagine computers or robots that cannot or will not intervene when the economics of vaccinating kids against deadly diseases, or women against cervical cancer, just don’t make sense.
Imagine having no humans involved and instead a robot that merely followed HTA guidelines to the letter when evaluating a diagnostic test, and that couldn’t deviate from those guidelines even when real-world circumstances demanded a nuanced, humane approach.
Thankfully, we don’t have systems like that.
Instead, we have humans making evidence-based decisions informed by human values like equity, compassion, pragmatism, aspiration, and egalitarianism. If we didn’t bring values like these to healthcare decisions, we might as well just use a computer to make them.
Societal perspectives, community values and human problem-solving matter in HTA. Otherwise, you end up in the situation portrayed in another science fiction film, Logan’s Run, which depicts a futuristic society, afflicted by ecological catastrophe, where anyone over the age of 30 is automatically exterminated to preserve resources for the younger generations, and because old people cost too much to look after (and, yes, that’s a science fiction movie, not a documentary about NHS funding …).
More generally, as Professor Andrew Likierman from the London Business School has recently commented: “Human judgement is still irreplaceable despite advancements in AI, reinforcing the value of critical thinking and decision-making.”
As Likierman points out,
“Machines:
- Don’t have consciousness or intentionality
- Cannot think abstractly or form an opinion
- Are not good at identifying relevance through context (the appropriate comment in one situation or culture which is deeply offensive in another) and they don’t ‘do’ meaning (think metaphor, irony, or sense of humour)
- Don’t have belief nor conscience through ethics and spirituality, or self-belief through aspiration or ambition
- Don’t have emotion or empathy and can’t create relationships or other social bonds involving feeling
- Can’t anticipate spontaneity, idiosyncrasy, contextual shifts, or fallibility; and
- Cannot remedy incompleteness, including the confusion of correlation with causation.”
AI systems are great, but, at least for now, they lack important human characteristics like emotion, pragmatism, ethics, irony or a sense of humour.
So, while AI certainly offers enormous opportunities in HTA, we’re probably still a long way from being able to leave society’s decisions about which medical technologies to fund, who should get them, and how much to spend on them to an emotionless computer.
In his book Deep Medicine, Eric Topol argues that the introduction of AI into health and medicine will take the mundane and analytical tasks off human healthcare professionals and experts, freeing them up to do what they do best: providing careful observation and psychological support by actually talking to patients.
There are significant opportunities from AI in HTA. As the National Institute for Health and Care Excellence (NICE) in the UK recently said in its new Guideline on AI: “AI methods can efficiently process and analyse large datasets to reveal patterns and relationships that may not be readily apparent to human analysts. And increasingly, generative AI can create novel outputs based upon what it learns from data.”
NICE also identified a number of issues with the use of AI in HTA, including concerns about the appropriateness, transparency and trustworthiness of AI-generated data and evidence (although the same concerns have been raised about human-generated HTA data in the past …).
Ultimately, despite all its flaws – and there are many – it’s the human component of HTA that makes the system work for society.
At least, that’s what ChatGPT told me …
Brendan Shaw is Principal of Shawview Consulting, and an Adjunct Professor at the Sydney Pharmacy School, Faculty of Medicine and Health, University of Sydney.