This briefing note was produced as part of our project on Regulating algorithms in healthcare.
Machine learning for medicine has the potential to change clinical practice. The technology promises to make the diagnosis of various conditions both quicker and more accurate. This technological shift might also compel us to reconsider aspects of clinical and product liability. Machine learning is not infallible and so the question of who should be liable for any malfunction that results in inaccurate or delayed diagnosis is one we will soon have to answer.
‘Machine learning is a technology that allows computers to learn directly from examples and experience in the form of data... [M]achine learning systems are set a task, and given a large amount of data to use as examples of how this task can be achieved or from which to detect patterns... It can be thought of as narrow AI: machine learning supports intelligent systems, which are able to learn a particular function, given a specific set of data to learn from’1.
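As a minimal illustration of ‘learning from examples’, the hypothetical sketch below (invented figures; it assumes the widely used scikit-learn library) gives a system a task (predicting whether a condition is present) and a small set of labelled examples from which to learn, then applies it to a new, unseen case.

```python
# Minimal, hypothetical sketch of 'learning from examples' (invented data; scikit-learn assumed).
from sklearn.linear_model import LogisticRegression

# Each example: [age, systolic blood pressure]; label: 1 = condition present, 0 = absent.
X_train = [[54, 140], [61, 155], [38, 118], [45, 122], [70, 160], [29, 110]]
y_train = [1, 1, 0, 0, 1, 0]

# The system is 'set a task' and 'given data to learn from'.
model = LogisticRegression().fit(X_train, y_train)

# It then produces a probabilistic output for a new, unseen case.
print(model.predict_proba([[58, 150]]))
```

The output is a probability rather than a categorical answer, a feature that becomes important in the discussion of ‘black-box medicine’ below.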
With the implementation of machine learning in medicine, it is time to examine whether clinicians who have taken due care should be liable if an algorithm causes damage or loss to their patient. The expansion of machine learning in medicine could also exacerbate old ambiguities in product liability, leaving those who have suffered loss without any robust way to recover damages.
While machine learning may change the practice of medicine for the better, one worry is that the introduction of such technology will foster the growth of ‘black-box medicine’2 – i.e. ‘the use of opaque computational models to make decisions related to health care.’ These models are opaque, not by design, but often by necessity because of the amount and complexity of the data used3. Moreover, techniques such as machine learning do not easily lend themselves to human concepts of explanation and significance. Machine learning outputs are typically probabilistic and sometimes inscrutable.
While a trained pathologist will be able to explain why they have identified parts of a scan as being of clinical concern, it may be unclear why a machine learning image-analysis algorithm has assigned significance to a particular image or finding.
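To make this opacity concrete, the hypothetical sketch below (synthetic data; scikit-learn assumed) fits an ensemble model of the kind often described as a black box. Its output for a single case is a bare probability, and the basis for that probability is distributed across hundreds of decision trees rather than expressed as a clinician-style explanation.

```python
# Hypothetical sketch of a 'black-box' prediction (synthetic data; scikit-learn assumed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                  # 1,000 synthetic cases, 50 features each
y = (X[:, :10].sum(axis=1) > 0).astype(int)      # outcome depends on a hidden combination

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# For a single case the system returns only a probability for each class...
print(model.predict_proba(X[:1]))

# ...while the underlying 'reasons' are spread across 500 trees and thousands of branches.
print(sum(tree.tree_.node_count for tree in model.estimators_))
```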
Black box negligence
Unsurprisingly, clinicians are the primary defendants in actions for clinical negligence. If a patient is harmed by a faulty diagnosis, the most obvious response is to sue the clinician or, more probably, their employer. This remains true even where software may have contributed to the faulty diagnosis, because in many jurisdictions the case law has developed in relation to software used to support, rather than make, clinical decisions.
Where clinical decision support software is used to detect drug contraindications (e.g. interactions between potential medications)4, if the software fails to detect an interaction, the clinician remains at fault in clinical negligence for any resulting and foreseeable injury, despite the apparent flaw in the software. This is on the basis that the decision to prescribe ultimately remains with the clinician, and so the clinician is at fault.
With the new wave of machine learning for medicine, we must reconsider the current approach to clinical liability. Near-term machine learning applications for medicine (e.g. image analysis algorithms) will assist, not replace, clinicians. However, future machine learning software is likely to go beyond current applications, being more sophisticated than its manually programmed counterparts.
Risk prediction tools are likely to use machine learning in the future, processing large volumes of data to generate predictions. These predictions may be highly accurate, but the precise reason why one patient is deemed to be at higher risk than another may remain hidden. The uptake of inscrutable algorithms should make us question whether clinicians who have interpreted machine learning outputs with due care should be held liable for an improper diagnosis that stems from a machine learning malfunction.
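By way of illustration, the hypothetical sketch below (synthetic data and an invented set-up; scikit-learn assumed) trains a gradient-boosted risk model and ranks patients by predicted risk. The ranking may be accurate, yet why one patient scores above another is buried in hundreds of sequential trees rather than set out as explicit clinical reasoning.

```python
# Hypothetical risk prediction sketch (synthetic data; scikit-learn assumed).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_patients, n_features = 5000, 30
X = rng.normal(size=(n_patients, n_features))    # stand-in for labs, vitals, history
y = rng.binomial(1, p=1 / (1 + np.exp(-X[:, :5].sum(axis=1))))  # outcome from a hidden mix

model = GradientBoostingClassifier(n_estimators=300).fit(X, y)

# The tool can rank patients by predicted risk...
risk = model.predict_proba(X)[:, 1]
highest_risk = np.argsort(risk)[::-1][:10]       # the ten highest-risk patients
print(highest_risk, risk[highest_risk])

# ...but why patient A scores above patient B is distributed across 300 sequential trees.
```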
Typically, the law lets ‘loss lie where it falls’, meaning that the claimant goes uncompensated unless it can be shown that the defendant was ‘at fault’, i.e. that they acted unreasonably. However, there are some special situations, such as product liability, where legislation stipulates that a defendant can be liable to compensate another person’s loss even if they were not at fault. This is a form of strict liability. Product liability is not concerned with holding a defendant liable for doing something wrong; instead, it is concerned with holding a defendant liable because something has gone wrong5.
This form of liability is entrenched in the Consumer Protection Act 1987 (CPA) and its associated Directive. Defendants who put ‘defective’ products into circulation can be liable for the damage those products cause. This gives those injured by defective products a way to recover damages without suing for breach of contract or negligence6: the claimant does not have to prove that the manufacturer was at fault for the damage (as in negligence) or that they had a contract. Consequently, product liability provides an additional avenue for those injured by products to sue manufacturers and sellers.
Machine learning is a programming technique that may be incorporated into software. For a claim in product liability, there must be a product. ‘Product’ and the closely related term ‘goods’ are given various definitions in legislation. ‘Product’ can be defined as ‘any good or electricity’ or as any ‘moveable’7, while ‘goods’ is defined as ‘including all substances’8. All of these definitions are vague, but both the CPA and the Directive point to a definition of ‘product’ that primarily covers physical items. Given this, software under the CPA may echo the interpretation of software under the Sale of Goods Act 1979, where software does not count as ‘goods’ but the disk containing the software does. This is an awkward position, out of step with increasing digitisation and the advent of cloud computing.
Broadly, there are two ways software might constitute a product. Uncontroversially, if software is a component incorporated into a wider physical product, a claim will lie in respect of the composite product. For example, if software incorporated by the manufacturer into a blood glucose monitor malfunctions, the manufacturer of the composite product will be liable for that malfunction. More controversially, it is contentious whether standalone software – software not incorporated into a wider product – itself counts as a product for the purposes of product liability9.
The legal uncertainty over whether machine learning software satisfies the definition of a product is not unique. However, three features might exacerbate this legal uncertainty when using machine learning for health:
The combination of these three elements means that claims in product liability might be increasingly necessary to bring a successful case, but inconsistently actionable, despite pressure from claimants who have suffered significant harm.
Machine learning may challenge the existing legal position, under which clinicians are liable for software malfunctions that contribute to improper diagnosis. Moreover, the uptake of machine learning suggests that we should revisit whether manufacturers and sellers of software should be strictly liable under product liability law.
How we deal with these questions of legal liability will affect the cost and spread of the technology in the health sector. Moreover, any scheme of liability will incentivise or disincentivise certain behaviour – ultimately shaping the practice of medicine.
We would like to thank Dr Kathy Liddell and Dr Jeffrey Skopek of The Cambridge Centre for Law, Medicine and Life Sciences for their work on this briefing note.