In Conversation With… Robert Kellar QC and Edite Ligere, barristers at 1 Crown Office Row Chambers

We are very grateful to Robert Kellar QC and Edite Ligere for sharing their expert observations on Artificial Intelligence in healthcare.  AI is already in use in diagnostics today, and its importance in healthcare seems set to increase exponentially.  But unlike most other developments in healthcare technology, AI has the potential to fundamentally alter the nature of the medical profession, clinical negligence claims, and medical indemnity insurance. 

Robert is a leading practitioner in clinical negligence, with particular experience in complex, multi-party and high value litigation. Edite's practice focuses on global financial regulation, banking, insurance, human rights, consumer protection, charity law, data protection, machine learning, artificial intelligence and cyber security. Together they are ideally placed to highlight the unique challenges that are facing the UK private healthcare sector, and its insurers. 

Robert and Edite will be joined by Consultant Oncologist Nick Plowman to discuss these issues in more detail at DWF's breakfast seminar event on Thursday 5 March 2020, "Artificial Intelligence, Real Headache?"  If you would like an invitation, please contact Joanne Staphnill, Partner, DWF Law LLP.

Technology has been used in healthcare throughout its history – devices ranging from stethoscopes to heart rate monitors are ubiquitous and unremarkable. So why is healthcare technology such an important topic right now?

Healthcare technology has become a hot topic in recent years because of the emergence of potentially game-changing innovations. Previous advancements have generally come in the form of tools that doctors and nurses can use. In contrast, modern technological developments are making it possible for medical staff to be replaced, in some circumstances, by robotics or artificial intelligence systems. For example, in the context of diagnostics, there are already technologies that analyse medical images, such as checking radiological images for tumours, with the same accuracy as a human expert. Similarly, machine-learning early warning systems that alert clinicians to patients at risk of deterioration can already outperform current clinical practice. A shift from a model where clinicians use technology as tools, to one in which clinicians are to some extent replaced, has profound implications for how medical care is regulated and where legal liability falls when something goes wrong.

Why is Artificial Intelligence such an important development in healthcare? What AI developments do you think will have the greatest impact on healthcare services?

The use of Artificial Intelligence in the healthcare context has great promise, particularly in the field of diagnostics. We already have emergent diagnostic AI tools that are performing at the same level as human clinicians. But AI has the potential to make diagnostic predictions in a way no human could. Machine learning is a technology that allows computers to learn directly from examples and experience in the form of data. One application of machine learning in the healthcare context is to provide a computer programme with a large set of patient data, where the health outcomes for those patients are already known. The programme then identifies patterns in that data, developing a complex understanding of the relationship between certain factors and health outcomes for patients. When that programme is provided with data about a new patient, whose diagnosis and/or prognosis is unknown, it uses what it has “learned” to make a prediction. As computers are able to process ever more vast data sets, AI tools have the potential to make diagnostic and prognostic predictions that are more accurate than those made by clinicians. Similarly, machine learning can be used to devise treatment plans for patients. And this technology isn’t years away; it’s being developed and used now.
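
To make the workflow described above concrete, here is a minimal sketch of that "learn from known outcomes, then predict for a new patient" pattern. It is purely illustrative: the patient features, the synthetic data and the choice of model are our own assumptions, not a description of any real clinical system.

```python
# Illustrative sketch only: a toy diagnostic classifier trained on
# historical patient records with known outcomes. Features and data
# are invented for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "historical" patients: age, blood pressure, biomarker level.
n = 1000
X = np.column_stack([
    rng.normal(60, 12, n),    # age (years)
    rng.normal(130, 20, n),   # systolic blood pressure (mmHg)
    rng.normal(2.0, 0.8, n),  # hypothetical biomarker level
])
# Known outcomes (1 = condition present), loosely tied to the features.
risk = 0.03 * (X[:, 0] - 60) + 0.02 * (X[:, 1] - 130) + 0.9 * (X[:, 2] - 2.0)
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

# "Learning from examples": fit the model on patients whose outcomes are known.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Prediction for a new patient whose diagnosis is unknown.
new_patient = np.array([[72, 150, 3.1]])
prob = model.predict_proba(new_patient)[0, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"Predicted probability of condition for new patient: {prob:.2f}")
```

Real systems differ enormously in scale and rigour, but the structure is the same: historical data with known outcomes in, a fitted statistical model out, and predictions for new patients at the end.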

What about those left behind – doctors and hospitals who cannot afford to acquire the very latest healthcare tech? At what point does a piece of healthcare tech become essential to discharge your clinical duty of care?

Another way in which a doctor or healthcare institution could be exposed to liability is if they fail to use AI systems when doing so is required to discharge their duty of care. How will the Bolam/Bolitho test be adapted to accommodate different rates of adoption of emergent technology? At what stage will it become Bolam-irresponsible or unreasonable for “late adopters” to decline to use emerging technologies? Given the perceived benefits of AI technology, at what point would refusal to adopt it, or to follow its guidance, become Bolitho-illogical? To what extent can a healthcare institution rely upon cost or resource arguments to avoid investing in healthcare technology? Is this a matter of clinical judgment to which the Bolam test applies, or is there a different test? Are decisions about resourcing matters for the Courts at all?

What about problems with bias in AI systems?

One critical weakness in machine learning is that it will replicate, and sometimes magnify, existing biases which are present in the data sets on which it is trained. This is of particular importance in the medical context, given known discrepancies in health outcomes along gender and racial lines. For example, clinical trials often fail to include enough women and do not adequately investigate how a drug might affect women’s bodies differently. Women’s self-reported pain is more likely to be questioned by doctors, and pain conditions in women are frequently misdiagnosed.

AI can be both a positive and a negative in this context. On the one hand, a carefully designed programme might provide more accurate clinical predictions and go some way to overcoming potential biases in human decision-makers. On the other, an AI tool trained on a data set that is skewed along gender lines will replicate this imbalance. It is plausible that an AI diagnostic tool might be more accurate than a human clinician overall, but distribute those benefits unevenly, with minimal improvement in detecting a condition in women but a large improvement for men.

Where a woman suffers harm as a result of negligent treatment by a clinician unassisted by AI, she can bring a civil claim. If women are more likely to suffer negligent treatment, then we can expect this to be reflected in a larger number of successful negligence claims brought by women. Where an AI system has been adopted by a hospital for use, things will be more complex. As discussed above, a doctor might not be negligent for relying on a complex algorithm that, overall, improves outcomes. However, a hospital which chooses to use an AI system that distributes healthcare gains unevenly could arguably be in breach of its duty of care. Traditionally, negligence law has not concerned itself with questions of discrimination. Perhaps these issues will now arise, given the practical possibility of testing AI systems for bias. More plausibly, this could be addressed by a new bespoke legal regime, more suited to these emergent technologies.
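
The "practical possibility of testing AI systems for bias" mentioned above can itself be sketched in code. The example below is hypothetical: the data, the group labels and the noisier signal for one group are our own assumptions, constructed so that a respectable overall figure conceals a large gap between groups. The audit simply disaggregates the model's detection rate by group.

```python
# Illustrative sketch only: auditing a diagnostic model for uneven
# performance across groups. Data, features and group labels are
# invented; the point is the disaggregated metric, not the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n = 4000
group = rng.integers(0, 2, n)       # 0 = men, 1 = women (toy labels)
signal = rng.normal(0, 1, n)        # hypothetical diagnostic signal
# Assumed skew: the measured signal is much noisier for group 1,
# mimicking a data set that under-represents or mis-measures one group.
noise = np.where(group == 1, rng.normal(0, 2.0, n), rng.normal(0, 0.5, n))
y = (signal > 0).astype(int)        # true condition status
X = np.column_stack([signal + noise, group])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
pred = model.predict(X_te)

# An acceptable overall figure can hide a large gap between groups.
print(f"Overall detection rate: {recall_score(y_te, pred):.2f}")
for g, label in [(0, "men"), (1, "women")]:
    mask = g_te == g
    print(f"Detection rate ({label}): "
          f"{recall_score(y_te[mask], pred[mask]):.2f}")
```

An audit of this kind would not resolve the legal questions, but it suggests that the evidence needed to argue an uneven distribution of healthcare gains is, at least in principle, obtainable.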

This information is intended as a general discussion surrounding the topics covered and is for guidance purposes only. It does not constitute legal advice and should not be regarded as a substitute for taking legal advice. DWF is not responsible for any activity undertaken based on this information.
